Dec 09 08:43:52 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 09 08:43:53 crc restorecon[4680]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 09 08:43:53 crc restorecon[4680]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc 
restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 09 08:43:53 crc 
restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 09 
08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 08:43:53 crc 
restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 08:43:53 crc 
restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53
crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 
08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 09 08:43:53 crc 
restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc 
restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 
09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 
crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc 
restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc 
restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc 
restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 08:43:53 crc 
restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 08:43:53 crc restorecon[4680]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 09 08:43:53 crc restorecon[4680]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 09 08:43:53 crc restorecon[4680]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Dec 09 08:43:54 crc kubenswrapper[4786]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 09 08:43:54 crc kubenswrapper[4786]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Dec 09 08:43:54 crc kubenswrapper[4786]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 09 08:43:54 crc kubenswrapper[4786]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 09 08:43:54 crc kubenswrapper[4786]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 09 08:43:54 crc kubenswrapper[4786]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.829634 4786 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.833941 4786 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.833961 4786 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.833975 4786 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.833980 4786 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.833985 4786 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.833989 4786 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.833994 4786 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.833998 4786 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834003 4786 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 
08:43:54.834007 4786 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834011 4786 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834015 4786 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834019 4786 feature_gate.go:330] unrecognized feature gate: Example Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834022 4786 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834026 4786 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834030 4786 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834033 4786 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834036 4786 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834041 4786 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834045 4786 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834048 4786 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834052 4786 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834056 4786 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834060 4786 feature_gate.go:330] unrecognized feature gate: 
MachineAPIProviderOpenStack Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834063 4786 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834067 4786 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834071 4786 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834074 4786 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834079 4786 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834082 4786 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834086 4786 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834091 4786 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834095 4786 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834100 4786 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834106 4786 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834114 4786 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834120 4786 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834127 4786 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834132 4786 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834138 4786 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834143 4786 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834147 4786 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834151 4786 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834155 4786 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834160 4786 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834163 4786 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834167 4786 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834170 4786 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834175 4786 feature_gate.go:330] unrecognized feature gate: 
PrivateHostedZoneAWS Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834179 4786 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834182 4786 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834186 4786 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834190 4786 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834193 4786 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834196 4786 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834200 4786 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834203 4786 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834209 4786 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834213 4786 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834219 4786 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834223 4786 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834229 4786 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834233 4786 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834237 4786 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834242 4786 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834246 4786 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834249 4786 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834254 4786 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834257 4786 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834261 4786 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.834264 4786 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834554 4786 flags.go:64] FLAG: --address="0.0.0.0" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834569 4786 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834576 4786 flags.go:64] FLAG: --anonymous-auth="true" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834581 4786 flags.go:64] FLAG: --application-metrics-count-limit="100" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834587 4786 flags.go:64] FLAG: --authentication-token-webhook="false" Dec 09 08:43:54 crc 
kubenswrapper[4786]: I1209 08:43:54.834591 4786 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834597 4786 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834602 4786 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834607 4786 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834622 4786 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834631 4786 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834635 4786 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834640 4786 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834644 4786 flags.go:64] FLAG: --cgroup-root="" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834648 4786 flags.go:64] FLAG: --cgroups-per-qos="true" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834652 4786 flags.go:64] FLAG: --client-ca-file="" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834656 4786 flags.go:64] FLAG: --cloud-config="" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834660 4786 flags.go:64] FLAG: --cloud-provider="" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834664 4786 flags.go:64] FLAG: --cluster-dns="[]" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834671 4786 flags.go:64] FLAG: --cluster-domain="" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834675 4786 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834679 4786 flags.go:64] FLAG: 
--config-dir="" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834683 4786 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834688 4786 flags.go:64] FLAG: --container-log-max-files="5" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834693 4786 flags.go:64] FLAG: --container-log-max-size="10Mi" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834697 4786 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834702 4786 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834706 4786 flags.go:64] FLAG: --containerd-namespace="k8s.io" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834711 4786 flags.go:64] FLAG: --contention-profiling="false" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834716 4786 flags.go:64] FLAG: --cpu-cfs-quota="true" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834720 4786 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834725 4786 flags.go:64] FLAG: --cpu-manager-policy="none" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834729 4786 flags.go:64] FLAG: --cpu-manager-policy-options="" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834735 4786 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834739 4786 flags.go:64] FLAG: --enable-controller-attach-detach="true" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834743 4786 flags.go:64] FLAG: --enable-debugging-handlers="true" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834747 4786 flags.go:64] FLAG: --enable-load-reader="false" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834751 4786 flags.go:64] FLAG: --enable-server="true" Dec 09 08:43:54 crc 
kubenswrapper[4786]: I1209 08:43:54.834755 4786 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834761 4786 flags.go:64] FLAG: --event-burst="100" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834765 4786 flags.go:64] FLAG: --event-qps="50" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834769 4786 flags.go:64] FLAG: --event-storage-age-limit="default=0" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834774 4786 flags.go:64] FLAG: --event-storage-event-limit="default=0" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834778 4786 flags.go:64] FLAG: --eviction-hard="" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834783 4786 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834787 4786 flags.go:64] FLAG: --eviction-minimum-reclaim="" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834791 4786 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834795 4786 flags.go:64] FLAG: --eviction-soft="" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834799 4786 flags.go:64] FLAG: --eviction-soft-grace-period="" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834803 4786 flags.go:64] FLAG: --exit-on-lock-contention="false" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834831 4786 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834851 4786 flags.go:64] FLAG: --experimental-mounter-path="" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834871 4786 flags.go:64] FLAG: --fail-cgroupv1="false" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834875 4786 flags.go:64] FLAG: --fail-swap-on="true" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834879 4786 flags.go:64] FLAG: --feature-gates="" Dec 09 08:43:54 crc 
kubenswrapper[4786]: I1209 08:43:54.834884 4786 flags.go:64] FLAG: --file-check-frequency="20s" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834890 4786 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834894 4786 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834903 4786 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834907 4786 flags.go:64] FLAG: --healthz-port="10248" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834912 4786 flags.go:64] FLAG: --help="false" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834916 4786 flags.go:64] FLAG: --hostname-override="" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834920 4786 flags.go:64] FLAG: --housekeeping-interval="10s" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834924 4786 flags.go:64] FLAG: --http-check-frequency="20s" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834928 4786 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834933 4786 flags.go:64] FLAG: --image-credential-provider-config="" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834938 4786 flags.go:64] FLAG: --image-gc-high-threshold="85" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834942 4786 flags.go:64] FLAG: --image-gc-low-threshold="80" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834946 4786 flags.go:64] FLAG: --image-service-endpoint="" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834950 4786 flags.go:64] FLAG: --kernel-memcg-notification="false" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834954 4786 flags.go:64] FLAG: --kube-api-burst="100" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834958 4786 flags.go:64] FLAG: 
--kube-api-content-type="application/vnd.kubernetes.protobuf" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834962 4786 flags.go:64] FLAG: --kube-api-qps="50" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834967 4786 flags.go:64] FLAG: --kube-reserved="" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834971 4786 flags.go:64] FLAG: --kube-reserved-cgroup="" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834975 4786 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834979 4786 flags.go:64] FLAG: --kubelet-cgroups="" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834983 4786 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834987 4786 flags.go:64] FLAG: --lock-file="" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834991 4786 flags.go:64] FLAG: --log-cadvisor-usage="false" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834995 4786 flags.go:64] FLAG: --log-flush-frequency="5s" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.834999 4786 flags.go:64] FLAG: --log-json-info-buffer-size="0" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835006 4786 flags.go:64] FLAG: --log-json-split-stream="false" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835010 4786 flags.go:64] FLAG: --log-text-info-buffer-size="0" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835014 4786 flags.go:64] FLAG: --log-text-split-stream="false" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835018 4786 flags.go:64] FLAG: --logging-format="text" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835022 4786 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835027 4786 flags.go:64] FLAG: --make-iptables-util-chains="true" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 
08:43:54.835031 4786 flags.go:64] FLAG: --manifest-url="" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835035 4786 flags.go:64] FLAG: --manifest-url-header="" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835040 4786 flags.go:64] FLAG: --max-housekeeping-interval="15s" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835044 4786 flags.go:64] FLAG: --max-open-files="1000000" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835049 4786 flags.go:64] FLAG: --max-pods="110" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835053 4786 flags.go:64] FLAG: --maximum-dead-containers="-1" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835057 4786 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835061 4786 flags.go:64] FLAG: --memory-manager-policy="None" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835065 4786 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835070 4786 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835073 4786 flags.go:64] FLAG: --node-ip="192.168.126.11" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835078 4786 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835087 4786 flags.go:64] FLAG: --node-status-max-images="50" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835099 4786 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835103 4786 flags.go:64] FLAG: --oom-score-adj="-999" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835107 4786 flags.go:64] FLAG: --pod-cidr="" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835111 4786 flags.go:64] FLAG: 
--pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835118 4786 flags.go:64] FLAG: --pod-manifest-path="" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835123 4786 flags.go:64] FLAG: --pod-max-pids="-1" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835127 4786 flags.go:64] FLAG: --pods-per-core="0" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835131 4786 flags.go:64] FLAG: --port="10250" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835136 4786 flags.go:64] FLAG: --protect-kernel-defaults="false" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835141 4786 flags.go:64] FLAG: --provider-id="" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835146 4786 flags.go:64] FLAG: --qos-reserved="" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835152 4786 flags.go:64] FLAG: --read-only-port="10255" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835157 4786 flags.go:64] FLAG: --register-node="true" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835162 4786 flags.go:64] FLAG: --register-schedulable="true" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835168 4786 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835184 4786 flags.go:64] FLAG: --registry-burst="10" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835190 4786 flags.go:64] FLAG: --registry-qps="5" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835195 4786 flags.go:64] FLAG: --reserved-cpus="" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835200 4786 flags.go:64] FLAG: --reserved-memory="" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835206 4786 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 
08:43:54.835211 4786 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835215 4786 flags.go:64] FLAG: --rotate-certificates="false" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835219 4786 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835223 4786 flags.go:64] FLAG: --runonce="false" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835227 4786 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835231 4786 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835235 4786 flags.go:64] FLAG: --seccomp-default="false" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835239 4786 flags.go:64] FLAG: --serialize-image-pulls="true" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835243 4786 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835248 4786 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835253 4786 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835257 4786 flags.go:64] FLAG: --storage-driver-password="root" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835261 4786 flags.go:64] FLAG: --storage-driver-secure="false" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835265 4786 flags.go:64] FLAG: --storage-driver-table="stats" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835269 4786 flags.go:64] FLAG: --storage-driver-user="root" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835273 4786 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835278 4786 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 09 
08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835282 4786 flags.go:64] FLAG: --system-cgroups="" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835286 4786 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835292 4786 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835296 4786 flags.go:64] FLAG: --tls-cert-file="" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835300 4786 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835306 4786 flags.go:64] FLAG: --tls-min-version="" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835310 4786 flags.go:64] FLAG: --tls-private-key-file="" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835314 4786 flags.go:64] FLAG: --topology-manager-policy="none" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835317 4786 flags.go:64] FLAG: --topology-manager-policy-options="" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835358 4786 flags.go:64] FLAG: --topology-manager-scope="container" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835362 4786 flags.go:64] FLAG: --v="2" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835383 4786 flags.go:64] FLAG: --version="false" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835388 4786 flags.go:64] FLAG: --vmodule="" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835393 4786 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835398 4786 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835507 4786 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835515 4786 feature_gate.go:330] unrecognized feature gate: 
ExternalOIDC Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835546 4786 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835553 4786 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835576 4786 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835581 4786 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835589 4786 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835593 4786 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835597 4786 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835601 4786 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835605 4786 feature_gate.go:330] unrecognized feature gate: Example Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835609 4786 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835613 4786 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835617 4786 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835621 4786 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835625 4786 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835629 
4786 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835633 4786 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835637 4786 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835640 4786 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835645 4786 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835648 4786 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835652 4786 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835655 4786 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835659 4786 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835662 4786 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835666 4786 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835670 4786 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835675 4786 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
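The startup sequence above includes a full dump of kubelet CLI defaults as `flags.go:64] FLAG: --name="value"` entries (e.g. `--max-pods="110"`, `--node-ip="192.168.126.11"`). As an aside, those entries are easy to mine programmatically; the sketch below is an illustrative helper (not a tool referenced by this log) that extracts the flag/value pairs from journalctl-style lines.

```python
import re

# Matches kubelet flag-dump entries like:
#   ... flags.go:64] FLAG: --max-pods="110"
# Illustrative only; assumes the journalctl line layout seen in this log.
FLAG_RE = re.compile(r'flags\.go:\d+\] FLAG: (--[\w-]+)="(.*?)"')

def parse_kubelet_flags(lines):
    """Return a dict mapping flag name -> value from a kubelet journal dump."""
    flags = {}
    for line in lines:
        m = FLAG_RE.search(line)
        if m:
            flags[m.group(1)] = m.group(2)
    return flags

# Two sample lines copied from the dump above:
sample = [
    'Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835049 4786 flags.go:64] FLAG: --max-pods="110"',
    'Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835073 4786 flags.go:64] FLAG: --node-ip="192.168.126.11"',
]
print(parse_kubelet_flags(sample))
```

Feeding the whole journal through this (e.g. `journalctl -u kubelet | python parse_flags.py`) yields the effective command-line configuration in one dictionary, which is handy when diffing kubelet settings between nodes.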
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835679 4786 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835683 4786 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835687 4786 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835690 4786 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835694 4786 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835697 4786 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835701 4786 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835705 4786 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835708 4786 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835713 4786 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835717 4786 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835720 4786 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835725 4786 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835729 4786 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835733 4786 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835736 4786 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835740 4786 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835744 4786 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835747 4786 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835751 4786 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835754 4786 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835758 4786 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835762 4786 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835765 4786 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835770 4786 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835775 4786 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835778 4786 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835783 4786 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835786 4786 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835790 4786 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835794 4786 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835798 4786 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835801 4786 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835805 4786 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835808 4786 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835812 4786 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835815 4786 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835818 4786 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835822 4786 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835826 4786 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835831 4786 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.835838 4786 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.835859 4786 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.846275 4786 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.846315 4786 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846381 4786 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846388 4786 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846393 4786 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846397 4786 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846402 4786 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846407 4786 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846411 4786 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846416 4786 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846434 4786 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846438 4786 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846442 4786 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846446 4786 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846449 4786 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846453 4786 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846457 4786 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846461 4786 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846464 4786 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846469 4786 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846472 4786 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846476 4786 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846480 4786 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846484 4786 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846488 4786 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846491 4786 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846496 4786 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846500 4786 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846503 4786 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846507 4786 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846511 4786 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846515 4786 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846519 4786 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846522 4786 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846526 4786 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846529 4786 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846534 4786 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846541 4786 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846546 4786 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846550 4786 feature_gate.go:330] unrecognized feature gate: Example
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846556 4786 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846560 4786 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846565 4786 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846569 4786 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846573 4786 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846577 4786 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846581 4786 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846585 4786 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846589 4786 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846594 4786 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846600 4786 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846606 4786 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846612 4786 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846619 4786 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846624 4786 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846630 4786 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846635 4786 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846640 4786 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846645 4786 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846650 4786 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846654 4786 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846660 4786 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846664 4786 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846668 4786 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846673 4786 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846677 4786 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846681 4786 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846685 4786 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846689 4786 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846692 4786 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846696 4786 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846700 4786 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846705 4786 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.846713 4786 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846822 4786 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846829 4786 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846833 4786 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846838 4786 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846843 4786 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846848 4786 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846854 4786 feature_gate.go:330] unrecognized feature gate: SignatureStores
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846858 4786 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846861 4786 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846865 4786 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846869 4786 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846873 4786 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846876 4786 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846880 4786 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846884 4786 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846887 4786 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846891 4786 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846895 4786 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846899 4786 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846902 4786 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846906 4786 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846910 4786 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846914 4786 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846917 4786 feature_gate.go:330] unrecognized feature gate: OVNObservability
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846921 4786 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846925 4786 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846929 4786 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846933 4786 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846937 4786 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846941 4786 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846945 4786 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846950 4786 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846955 4786 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846959 4786 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846963 4786 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846967 4786 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846971 4786 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846977 4786 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846981 4786 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846986 4786 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846991 4786 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846995 4786 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.846999 4786 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.847003 4786 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.847006 4786 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.847010 4786 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.847014 4786 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.847018 4786 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.847022 4786 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.847026 4786 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.847030 4786 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.847035 4786 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.847039 4786 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.847043 4786 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.847047 4786 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.847050 4786 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.847054 4786 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.847058 4786 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.847062 4786 feature_gate.go:330] unrecognized feature gate: Example
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.847066 4786 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.847069 4786 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.847073 4786 feature_gate.go:330] unrecognized feature gate: PinnedImages
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.847077 4786 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.847081 4786 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.847084 4786 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.847088 4786 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.847092 4786 feature_gate.go:330] unrecognized feature gate: NewOLM
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.847095 4786 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.847099 4786 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.847102 4786 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Dec 09 08:43:54 crc kubenswrapper[4786]: W1209 08:43:54.847106 4786 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.847114 4786 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.847631 4786 server.go:940] "Client rotation is on, will bootstrap in background"
Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.850228 4786 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.850333 4786 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.850890 4786 server.go:997] "Starting client certificate rotation"
Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.850916 4786 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.851120 4786 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-03 08:21:53.268172236 +0000 UTC
Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.851279 4786 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.882441 4786 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.884015 4786 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 09 08:43:54 crc kubenswrapper[4786]: E1209 08:43:54.884022 4786 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.245:6443: connect: connection refused" logger="UnhandledError"
Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.950647 4786 log.go:25] "Validated CRI v1 runtime API"
Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.969500 4786 log.go:25] "Validated CRI v1 image API"
Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.971166 4786 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.974249 4786 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-12-09-08-39-10-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Dec 09 08:43:54 crc kubenswrapper[4786]: I1209 08:43:54.974297 4786 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}]
Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.032355 4786 manager.go:217] Machine: {Timestamp:2025-12-09 08:43:55.026745083 +0000 UTC m=+0.910366389 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:d02a19a2-cd20-41fc-84e9-65362968df1f BootID:20132b6b-eea8-47f7-95fa-f658d05fe362 Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:dd:d7:59 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:dd:d7:59 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:0c:65:24 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:cd:31:ef Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:7d:6e:e9 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:9d:9a:b1 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:9e:01:18:e6:ba:97 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:2e:55:70:c5:78:63 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.032996 4786 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.033260 4786 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.035579 4786 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.036029 4786 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.036140 4786 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.036817 4786 topology_manager.go:138] "Creating topology manager with none policy"
Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.036844 4786 container_manager_linux.go:303] "Creating device plugin manager"
Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.037230 4786 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.037297 4786 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.037745 4786 state_mem.go:36] "Initialized new in-memory state store"
Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.037962 4786 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.041077 4786 kubelet.go:418] "Attempting to sync node with API server"
Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.041132 4786 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.041225 4786 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.041256 4786 kubelet.go:324] "Adding apiserver pod source"
Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.041340 4786 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.045117 4786 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.045804 4786 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Dec 09 08:43:55 crc kubenswrapper[4786]: W1209 08:43:55.046591 4786 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.245:6443: connect: connection refused
Dec 09 08:43:55 crc kubenswrapper[4786]: E1209 08:43:55.046746 4786 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.245:6443: connect: connection refused" logger="UnhandledError"
Dec 09 08:43:55 crc kubenswrapper[4786]: W1209 08:43:55.046775 4786 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.245:6443: connect: connection refused
Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.046837 4786 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 09 08:43:55 crc kubenswrapper[4786]: E1209 08:43:55.047085 4786 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.245:6443:
connect: connection refused" logger="UnhandledError" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.048053 4786 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.048103 4786 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.048125 4786 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.048142 4786 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.048165 4786 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.048178 4786 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.048192 4786 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.048214 4786 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.048230 4786 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.048258 4786 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.048286 4786 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.048300 4786 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.048670 4786 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.049679 4786 server.go:1280] 
"Started kubelet" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.050554 4786 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.245:6443: connect: connection refused Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.050628 4786 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.050638 4786 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.051576 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.051673 4786 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.051674 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 13:13:18.987560981 +0000 UTC Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.051729 4786 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 4h29m23.935834659s for next certificate rotation Dec 09 08:43:55 crc systemd[1]: Started Kubernetes Kubelet. 
Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.051903 4786 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 09 08:43:55 crc kubenswrapper[4786]: E1209 08:43:55.051984 4786 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.052051 4786 volume_manager.go:287] "The desired_state_of_world populator starts" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.052065 4786 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.052159 4786 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 09 08:43:55 crc kubenswrapper[4786]: W1209 08:43:55.053186 4786 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.245:6443: connect: connection refused Dec 09 08:43:55 crc kubenswrapper[4786]: E1209 08:43:55.104705 4786 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.245:6443: connect: connection refused" logger="UnhandledError" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.106779 4786 factory.go:55] Registering systemd factory Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.106838 4786 factory.go:221] Registration of the systemd container factory successfully Dec 09 08:43:55 crc kubenswrapper[4786]: E1209 08:43:55.090793 4786 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.129.56.245:6443: connect: connection 
refused" event="&Event{ObjectMeta:{crc.187f7f8fead85d1b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 08:43:55.049622811 +0000 UTC m=+0.933244097,LastTimestamp:2025-12-09 08:43:55.049622811 +0000 UTC m=+0.933244097,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.107752 4786 server.go:460] "Adding debug handlers to kubelet server" Dec 09 08:43:55 crc kubenswrapper[4786]: E1209 08:43:55.115459 4786 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.245:6443: connect: connection refused" interval="200ms" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.115809 4786 factory.go:153] Registering CRI-O factory Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.116120 4786 factory.go:221] Registration of the crio container factory successfully Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.116247 4786 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.116347 4786 factory.go:103] Registering Raw factory Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.116386 4786 manager.go:1196] Started watching for new ooms in manager Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.117977 4786 manager.go:319] Starting recovery of all containers Dec 09 08:43:55 crc 
kubenswrapper[4786]: I1209 08:43:55.123618 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.123710 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.123726 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.123753 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.123767 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.123781 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.123818 4786 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.123844 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.123871 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.123901 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.123926 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.123952 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.124013 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.124043 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.124069 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.124092 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.124116 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.124162 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.124177 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.124193 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.124206 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.124222 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.124236 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.124250 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.124263 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" 
seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.124286 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.124338 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.124368 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.124390 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.124458 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.124499 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.124523 4786 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.124545 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.124560 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.124574 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.124625 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.124675 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.124689 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.124701 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.124723 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.124745 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.124872 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.124938 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.125108 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" 
volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.125223 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.125287 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.125312 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.125334 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.125449 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.125477 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Dec 09 
08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.125500 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.125529 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.125641 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.125709 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.125740 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.125803 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.125837 4786 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.125866 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.125892 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.125915 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.125955 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.125971 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.125987 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.126002 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.126015 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.126029 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.126044 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.126057 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.126073 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" 
volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.126086 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.126124 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.126137 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.126152 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.126168 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.126206 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" 
seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.126234 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.126286 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.126298 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.126308 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.126318 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.126338 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Dec 09 08:43:55 
crc kubenswrapper[4786]: I1209 08:43:55.126355 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.126380 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.126399 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.126418 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.126470 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.126535 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.126555 4786 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.126581 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.126620 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.126634 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.126645 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.126654 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.126664 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.126674 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.126707 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.126742 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.126795 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.127756 4786 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.127873 4786 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.128133 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.128183 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.128198 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.128211 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.128225 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.128246 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.128474 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.128550 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.128576 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.128598 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.128847 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.128871 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.128888 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.128907 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.128928 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.128947 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.129141 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.129160 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Dec 09 
08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.129171 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.129181 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.129191 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.129200 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.129212 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.129221 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.129232 4786 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.129243 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.129255 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.129447 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.129482 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.129503 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.129513 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.129522 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.129531 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.129541 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.129551 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.129572 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.129783 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" 
volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.129797 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.129809 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.129820 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.129829 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.129839 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.129860 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.129881 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.129900 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.129937 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.129951 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.129960 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.129994 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.130005 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.130015 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.130025 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.130034 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.130047 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.130057 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.130068 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.130101 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.130112 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.130122 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.130132 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.130151 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.130169 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.130187 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.130206 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.130226 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.130269 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.130282 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.130292 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.130301 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.130313 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.130327 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.130343 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.130357 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.130370 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.130382 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.130390 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.130448 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.130469 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.130488 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" 
seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.130505 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.130516 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.130533 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.130545 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.130596 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.130617 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 
08:43:55.130633 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.130651 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.130674 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.130692 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.130716 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.130794 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.130813 4786 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.130822 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.130833 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.130852 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.130870 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.130888 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.130906 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.130973 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.131001 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.131021 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.131040 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.131059 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.131105 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.131123 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.131142 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.131184 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.131195 4786 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.131208 4786 reconstruct.go:97] "Volume reconstruction finished" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.131268 4786 reconciler.go:26] "Reconciler: start to sync state" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.147565 4786 manager.go:324] Recovery completed Dec 09 08:43:55 crc kubenswrapper[4786]: E1209 08:43:55.152228 4786 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 
08:43:55.156751 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.171462 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.171521 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.171534 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.176518 4786 cpu_manager.go:225] "Starting CPU manager" policy="none" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.176547 4786 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.176573 4786 state_mem.go:36] "Initialized new in-memory state store" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.184143 4786 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.185823 4786 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.186121 4786 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.186243 4786 kubelet.go:2335] "Starting kubelet main sync loop" Dec 09 08:43:55 crc kubenswrapper[4786]: E1209 08:43:55.186853 4786 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.187681 4786 policy_none.go:49] "None policy: Start" Dec 09 08:43:55 crc kubenswrapper[4786]: W1209 08:43:55.187173 4786 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.245:6443: connect: connection refused Dec 09 08:43:55 crc kubenswrapper[4786]: E1209 08:43:55.196970 4786 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.245:6443: connect: connection refused" logger="UnhandledError" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.198891 4786 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.198942 4786 state_mem.go:35] "Initializing new in-memory state store" Dec 09 08:43:55 crc kubenswrapper[4786]: E1209 08:43:55.252813 4786 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.253548 4786 manager.go:334] "Starting Device Plugin manager" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.253606 4786 manager.go:513] "Failed to read data from 
checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.253619 4786 server.go:79] "Starting device plugin registration server" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.254794 4786 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.254849 4786 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.255567 4786 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.255934 4786 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.255964 4786 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.287532 4786 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc"] Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.287736 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 08:43:55 crc kubenswrapper[4786]: E1209 08:43:55.317485 4786 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.245:6443: connect: connection refused" interval="400ms" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.355404 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 08:43:55 
crc kubenswrapper[4786]: I1209 08:43:55.368843 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.368890 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.368901 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.369753 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.370008 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.370153 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 08:43:55 crc kubenswrapper[4786]: E1209 08:43:55.371564 4786 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.373654 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.373704 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.373720 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.373966 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.374016 4786 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.374056 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.374263 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.374308 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.374322 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.374378 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.374399 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.374437 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.375241 4786 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.375650 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.375669 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.375677 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:43:55 crc kubenswrapper[4786]: 
I1209 08:43:55.375787 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.375938 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.375970 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.375982 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:43:55 crc kubenswrapper[4786]: E1209 08:43:55.376097 4786 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.245:6443: connect: connection refused" node="crc" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.376176 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.376200 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.376554 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.376568 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.376576 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.376643 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.376811 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.376899 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.376900 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.377024 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.377034 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.377227 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.377271 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.377288 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.377540 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.377587 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.380268 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.380300 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.380312 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.380395 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.380408 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.380415 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.435026 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.435063 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.435090 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.535951 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.536039 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.536195 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.536231 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 08:43:55 crc 
kubenswrapper[4786]: I1209 08:43:55.536259 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.536277 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.536300 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.536320 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.536341 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.536477 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.536474 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.536495 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.536548 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.536695 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.536799 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.536928 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.536962 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.536979 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.576622 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.577939 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.577997 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.578016 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.578047 4786 
kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 09 08:43:55 crc kubenswrapper[4786]: E1209 08:43:55.578739 4786 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.245:6443: connect: connection refused" node="crc" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.637861 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.637920 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.637941 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.637958 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.637973 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.637992 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.638011 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.638028 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.638030 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.638086 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: 
\"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.638043 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.638099 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.638133 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.638137 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.638173 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.638179 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.638172 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.638155 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.638211 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.638210 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.638220 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.638223 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.638324 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.638341 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.714947 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 08:43:55 crc kubenswrapper[4786]: E1209 08:43:55.718834 4786 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.245:6443: connect: connection refused" interval="800ms" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.734207 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.757025 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 08:43:55 crc kubenswrapper[4786]: W1209 08:43:55.757095 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-339c9c2990316d3242d291d7427d08c25d5964d8fc99ed8432052ba8e0b92fd2 WatchSource:0}: Error finding container 339c9c2990316d3242d291d7427d08c25d5964d8fc99ed8432052ba8e0b92fd2: Status 404 returned error can't find the container with id 339c9c2990316d3242d291d7427d08c25d5964d8fc99ed8432052ba8e0b92fd2 Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.772083 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 09 08:43:55 crc kubenswrapper[4786]: W1209 08:43:55.777839 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-7210fa11a5e700fa49b43bfde4d8e7d9d68a65d803596dccad9bbe24690067fd WatchSource:0}: Error finding container 7210fa11a5e700fa49b43bfde4d8e7d9d68a65d803596dccad9bbe24690067fd: Status 404 returned error can't find the container with id 7210fa11a5e700fa49b43bfde4d8e7d9d68a65d803596dccad9bbe24690067fd Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.780551 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 09 08:43:55 crc kubenswrapper[4786]: W1209 08:43:55.797610 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-b81a439fc0c9012af85e742e38900abc5678324c30ecba1622b8f6cbae018c77 WatchSource:0}: Error finding container b81a439fc0c9012af85e742e38900abc5678324c30ecba1622b8f6cbae018c77: Status 404 returned error can't find the container with id b81a439fc0c9012af85e742e38900abc5678324c30ecba1622b8f6cbae018c77 Dec 09 08:43:55 crc kubenswrapper[4786]: W1209 08:43:55.806997 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-f56e2db281feb874f5c50500c6f3fecee5fddc5fbbfcfbfa357deb8b20e21415 WatchSource:0}: Error finding container f56e2db281feb874f5c50500c6f3fecee5fddc5fbbfcfbfa357deb8b20e21415: Status 404 returned error can't find the container with id f56e2db281feb874f5c50500c6f3fecee5fddc5fbbfcfbfa357deb8b20e21415 Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.979826 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.983705 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.983769 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.983790 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:43:55 crc kubenswrapper[4786]: I1209 08:43:55.983827 4786 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 09 08:43:55 crc 
kubenswrapper[4786]: E1209 08:43:55.984448 4786 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.245:6443: connect: connection refused" node="crc" Dec 09 08:43:56 crc kubenswrapper[4786]: I1209 08:43:56.051941 4786 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.245:6443: connect: connection refused Dec 09 08:43:56 crc kubenswrapper[4786]: I1209 08:43:56.207794 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"3af819ce39c2c52c8191cbae89c90d15a9a1493d1512dfd6dab62707435cc8d7"} Dec 09 08:43:56 crc kubenswrapper[4786]: I1209 08:43:56.209119 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"339c9c2990316d3242d291d7427d08c25d5964d8fc99ed8432052ba8e0b92fd2"} Dec 09 08:43:56 crc kubenswrapper[4786]: I1209 08:43:56.210494 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"f56e2db281feb874f5c50500c6f3fecee5fddc5fbbfcfbfa357deb8b20e21415"} Dec 09 08:43:56 crc kubenswrapper[4786]: I1209 08:43:56.212186 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b81a439fc0c9012af85e742e38900abc5678324c30ecba1622b8f6cbae018c77"} Dec 09 08:43:56 crc kubenswrapper[4786]: I1209 08:43:56.213630 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7210fa11a5e700fa49b43bfde4d8e7d9d68a65d803596dccad9bbe24690067fd"} Dec 09 08:43:56 crc kubenswrapper[4786]: W1209 08:43:56.240756 4786 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.245:6443: connect: connection refused Dec 09 08:43:56 crc kubenswrapper[4786]: E1209 08:43:56.240864 4786 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.245:6443: connect: connection refused" logger="UnhandledError" Dec 09 08:43:56 crc kubenswrapper[4786]: W1209 08:43:56.314099 4786 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.245:6443: connect: connection refused Dec 09 08:43:56 crc kubenswrapper[4786]: E1209 08:43:56.314195 4786 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.245:6443: connect: connection refused" logger="UnhandledError" Dec 09 08:43:56 crc kubenswrapper[4786]: W1209 08:43:56.487833 4786 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.245:6443: connect: connection 
refused Dec 09 08:43:56 crc kubenswrapper[4786]: E1209 08:43:56.487922 4786 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.245:6443: connect: connection refused" logger="UnhandledError" Dec 09 08:43:56 crc kubenswrapper[4786]: E1209 08:43:56.520043 4786 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.245:6443: connect: connection refused" interval="1.6s" Dec 09 08:43:56 crc kubenswrapper[4786]: W1209 08:43:56.546139 4786 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.245:6443: connect: connection refused Dec 09 08:43:56 crc kubenswrapper[4786]: E1209 08:43:56.546274 4786 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.245:6443: connect: connection refused" logger="UnhandledError" Dec 09 08:43:56 crc kubenswrapper[4786]: I1209 08:43:56.785106 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 08:43:56 crc kubenswrapper[4786]: I1209 08:43:56.786901 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:43:56 crc kubenswrapper[4786]: I1209 08:43:56.786954 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:43:56 crc 
kubenswrapper[4786]: I1209 08:43:56.786964 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:43:56 crc kubenswrapper[4786]: I1209 08:43:56.786992 4786 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 09 08:43:56 crc kubenswrapper[4786]: E1209 08:43:56.787527 4786 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.245:6443: connect: connection refused" node="crc" Dec 09 08:43:56 crc kubenswrapper[4786]: I1209 08:43:56.981944 4786 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Dec 09 08:43:56 crc kubenswrapper[4786]: E1209 08:43:56.983229 4786 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.129.56.245:6443: connect: connection refused" logger="UnhandledError" Dec 09 08:43:57 crc kubenswrapper[4786]: I1209 08:43:57.051562 4786 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.245:6443: connect: connection refused Dec 09 08:43:57 crc kubenswrapper[4786]: I1209 08:43:57.225760 4786 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="1848f8b358c964159350bf72a27a84316dbc4051b5164004527770877f4e280c" exitCode=0 Dec 09 08:43:57 crc kubenswrapper[4786]: I1209 08:43:57.225985 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 08:43:57 crc kubenswrapper[4786]: I1209 08:43:57.226307 4786 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"1848f8b358c964159350bf72a27a84316dbc4051b5164004527770877f4e280c"} Dec 09 08:43:57 crc kubenswrapper[4786]: I1209 08:43:57.226923 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:43:57 crc kubenswrapper[4786]: I1209 08:43:57.226949 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:43:57 crc kubenswrapper[4786]: I1209 08:43:57.226958 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:43:57 crc kubenswrapper[4786]: I1209 08:43:57.229979 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e77efb0cd063e6d566f75078ce7a1f2cfd31508d5024a101ebcb3aa3c609aaae"} Dec 09 08:43:57 crc kubenswrapper[4786]: I1209 08:43:57.230039 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f7aff8b9b062294cc403e12f60e4e694e626aa9780ae652380afc5d5ebeb25e5"} Dec 09 08:43:57 crc kubenswrapper[4786]: I1209 08:43:57.230055 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"52ce8749c52af3d1b2b156b9adce27a80b026abdb2dc22308199f1e822ec9867"} Dec 09 08:43:57 crc kubenswrapper[4786]: I1209 08:43:57.231938 4786 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341" exitCode=0 Dec 09 
08:43:57 crc kubenswrapper[4786]: I1209 08:43:57.232004 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341"} Dec 09 08:43:57 crc kubenswrapper[4786]: I1209 08:43:57.232104 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 08:43:57 crc kubenswrapper[4786]: I1209 08:43:57.232982 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:43:57 crc kubenswrapper[4786]: I1209 08:43:57.233045 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:43:57 crc kubenswrapper[4786]: I1209 08:43:57.233059 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:43:57 crc kubenswrapper[4786]: I1209 08:43:57.234568 4786 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="4a0478a8fa00a16c9f91091771b9e219317b593723cf90ae2f8c5f7335bd88c5" exitCode=0 Dec 09 08:43:57 crc kubenswrapper[4786]: I1209 08:43:57.234663 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 08:43:57 crc kubenswrapper[4786]: I1209 08:43:57.234666 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"4a0478a8fa00a16c9f91091771b9e219317b593723cf90ae2f8c5f7335bd88c5"} Dec 09 08:43:57 crc kubenswrapper[4786]: I1209 08:43:57.235026 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 08:43:57 crc kubenswrapper[4786]: I1209 08:43:57.235728 4786 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:43:57 crc kubenswrapper[4786]: I1209 08:43:57.235761 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:43:57 crc kubenswrapper[4786]: I1209 08:43:57.235773 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:43:57 crc kubenswrapper[4786]: I1209 08:43:57.235980 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:43:57 crc kubenswrapper[4786]: I1209 08:43:57.236002 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:43:57 crc kubenswrapper[4786]: I1209 08:43:57.236014 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:43:57 crc kubenswrapper[4786]: I1209 08:43:57.237104 4786 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="663914cd797e1431ab14d696de18d8098d1ac35f9162f36dc5e6a743ba949a44" exitCode=0 Dec 09 08:43:57 crc kubenswrapper[4786]: I1209 08:43:57.237141 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"663914cd797e1431ab14d696de18d8098d1ac35f9162f36dc5e6a743ba949a44"} Dec 09 08:43:57 crc kubenswrapper[4786]: I1209 08:43:57.237251 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 08:43:57 crc kubenswrapper[4786]: I1209 08:43:57.238341 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:43:57 crc kubenswrapper[4786]: I1209 08:43:57.238381 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 
08:43:57 crc kubenswrapper[4786]: I1209 08:43:57.238399 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:43:58 crc kubenswrapper[4786]: I1209 08:43:58.051956 4786 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.245:6443: connect: connection refused Dec 09 08:43:58 crc kubenswrapper[4786]: E1209 08:43:58.121322 4786 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.245:6443: connect: connection refused" interval="3.2s" Dec 09 08:43:58 crc kubenswrapper[4786]: W1209 08:43:58.143056 4786 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.129.56.245:6443: connect: connection refused Dec 09 08:43:58 crc kubenswrapper[4786]: E1209 08:43:58.143138 4786 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.129.56.245:6443: connect: connection refused" logger="UnhandledError" Dec 09 08:43:58 crc kubenswrapper[4786]: I1209 08:43:58.267061 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"9b1b4fc4a2ac23c92b18ce6bd588fe1c0e5e7fead1592aa7795d2f2ad2071507"} Dec 09 08:43:58 crc kubenswrapper[4786]: I1209 08:43:58.267134 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Dec 09 08:43:58 crc kubenswrapper[4786]: I1209 08:43:58.268233 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:43:58 crc kubenswrapper[4786]: I1209 08:43:58.268275 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:43:58 crc kubenswrapper[4786]: I1209 08:43:58.268288 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:43:58 crc kubenswrapper[4786]: I1209 08:43:58.270117 4786 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="28ac2f54090b4b0620217d2c5b73a031a6901b4df7ed6291bf82b3b35c82078f" exitCode=0 Dec 09 08:43:58 crc kubenswrapper[4786]: I1209 08:43:58.270221 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"28ac2f54090b4b0620217d2c5b73a031a6901b4df7ed6291bf82b3b35c82078f"} Dec 09 08:43:58 crc kubenswrapper[4786]: I1209 08:43:58.270310 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 08:43:58 crc kubenswrapper[4786]: I1209 08:43:58.271643 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:43:58 crc kubenswrapper[4786]: I1209 08:43:58.271715 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:43:58 crc kubenswrapper[4786]: I1209 08:43:58.271728 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:43:58 crc kubenswrapper[4786]: I1209 08:43:58.271943 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"4a9e51326999eccaabe872d93f35025f76fe1b7fa116ee0e903b008aabc3f1c1"} Dec 09 08:43:58 crc kubenswrapper[4786]: I1209 08:43:58.277565 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4ac36b7f7721d7c5cdcb9e2254974dab062cca2723d4e723582061ed2696e04c"} Dec 09 08:43:58 crc kubenswrapper[4786]: I1209 08:43:58.277682 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 08:43:58 crc kubenswrapper[4786]: I1209 08:43:58.279059 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:43:58 crc kubenswrapper[4786]: I1209 08:43:58.279104 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:43:58 crc kubenswrapper[4786]: I1209 08:43:58.279118 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:43:58 crc kubenswrapper[4786]: I1209 08:43:58.281128 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"56160ed128040dde583c3142616d9af9180427b9e32520d6330fd166e9c4207b"} Dec 09 08:43:58 crc kubenswrapper[4786]: I1209 08:43:58.387889 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 08:43:58 crc kubenswrapper[4786]: I1209 08:43:58.389250 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:43:58 crc kubenswrapper[4786]: I1209 08:43:58.389295 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 
08:43:58 crc kubenswrapper[4786]: I1209 08:43:58.389304 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:43:58 crc kubenswrapper[4786]: I1209 08:43:58.389329 4786 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 09 08:43:58 crc kubenswrapper[4786]: E1209 08:43:58.389945 4786 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.129.56.245:6443: connect: connection refused" node="crc" Dec 09 08:43:58 crc kubenswrapper[4786]: W1209 08:43:58.675118 4786 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.129.56.245:6443: connect: connection refused Dec 09 08:43:58 crc kubenswrapper[4786]: E1209 08:43:58.675205 4786 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.129.56.245:6443: connect: connection refused" logger="UnhandledError" Dec 09 08:43:58 crc kubenswrapper[4786]: W1209 08:43:58.707644 4786 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.129.56.245:6443: connect: connection refused Dec 09 08:43:58 crc kubenswrapper[4786]: E1209 08:43:58.707749 4786 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.129.56.245:6443: 
connect: connection refused" logger="UnhandledError" Dec 09 08:43:58 crc kubenswrapper[4786]: I1209 08:43:58.965756 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 08:43:58 crc kubenswrapper[4786]: I1209 08:43:58.966094 4786 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 09 08:43:58 crc kubenswrapper[4786]: I1209 08:43:58.966159 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 09 08:43:59 crc kubenswrapper[4786]: I1209 08:43:59.051750 4786 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.245:6443: connect: connection refused Dec 09 08:43:59 crc kubenswrapper[4786]: I1209 08:43:59.219083 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 08:43:59 crc kubenswrapper[4786]: W1209 08:43:59.241900 4786 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.129.56.245:6443: connect: connection refused Dec 09 08:43:59 crc kubenswrapper[4786]: E1209 08:43:59.242032 4786 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.129.56.245:6443: connect: connection refused" logger="UnhandledError" Dec 09 08:43:59 crc kubenswrapper[4786]: I1209 08:43:59.312633 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a164888526a3ab84fbd409540bc54403fadd586a0583e323d5ca1b10c81cbfc4"} Dec 09 08:43:59 crc kubenswrapper[4786]: I1209 08:43:59.312710 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f57fe092832b333e8fa6389429ad3fd571c3e0162f619fdd69efcb0b84e88bac"} Dec 09 08:43:59 crc kubenswrapper[4786]: I1209 08:43:59.312727 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d7b3741b79ca6fb3fc12acaa290b9d7accb765f36a88f0b8fb6e6362c6596a58"} Dec 09 08:43:59 crc kubenswrapper[4786]: I1209 08:43:59.458120 4786 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="42191f7c1da1bf0b03474758a4875d12c6db6aa54fb455fcb92e00957d1923e3" exitCode=0 Dec 09 08:43:59 crc kubenswrapper[4786]: I1209 08:43:59.458266 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"42191f7c1da1bf0b03474758a4875d12c6db6aa54fb455fcb92e00957d1923e3"} Dec 09 08:43:59 crc kubenswrapper[4786]: I1209 08:43:59.458485 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 08:43:59 crc kubenswrapper[4786]: 
I1209 08:43:59.509595 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:43:59 crc kubenswrapper[4786]: I1209 08:43:59.509650 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:43:59 crc kubenswrapper[4786]: I1209 08:43:59.509662 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:43:59 crc kubenswrapper[4786]: I1209 08:43:59.512914 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"868a90cf80a3cef69e48a0e930c1fb742df308553d072e9ca12fec310bdce1aa"} Dec 09 08:43:59 crc kubenswrapper[4786]: I1209 08:43:59.512986 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"99f4b7e7d6c283d8d774a43140f27cf0418c5922dd1a11fe8d090d4d3de10661"} Dec 09 08:43:59 crc kubenswrapper[4786]: I1209 08:43:59.513059 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 08:43:59 crc kubenswrapper[4786]: I1209 08:43:59.513108 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 08:43:59 crc kubenswrapper[4786]: I1209 08:43:59.513059 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 08:43:59 crc kubenswrapper[4786]: I1209 08:43:59.522491 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:43:59 crc kubenswrapper[4786]: I1209 08:43:59.522579 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:43:59 crc 
kubenswrapper[4786]: I1209 08:43:59.522592 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:43:59 crc kubenswrapper[4786]: I1209 08:43:59.522884 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:43:59 crc kubenswrapper[4786]: I1209 08:43:59.522954 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:43:59 crc kubenswrapper[4786]: I1209 08:43:59.522966 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:43:59 crc kubenswrapper[4786]: I1209 08:43:59.523971 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:43:59 crc kubenswrapper[4786]: I1209 08:43:59.524046 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:43:59 crc kubenswrapper[4786]: I1209 08:43:59.524060 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:00 crc kubenswrapper[4786]: I1209 08:44:00.051283 4786 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.129.56.245:6443: connect: connection refused Dec 09 08:44:00 crc kubenswrapper[4786]: I1209 08:44:00.522554 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e49c5c23fc6ac0350de2fc1e2cdd01a326ff50bf4bda4a326e99abf9415ff82c"} Dec 09 08:44:00 crc kubenswrapper[4786]: I1209 08:44:00.522599 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ae9f3bf4166653ae69b3f01c8cb49b5a8eab5f682cb50f4e8c62973615693822"} Dec 09 08:44:00 crc kubenswrapper[4786]: I1209 08:44:00.522609 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"29040d6b65393bf5f762155aba99cda093a423961bd73adb30abccbcbd2fd105"} Dec 09 08:44:00 crc kubenswrapper[4786]: I1209 08:44:00.525592 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3ebab9d599495fb35c0a606f0c5d7bf4d36cf20a69f9b7d671172ea16bcdbc8a"} Dec 09 08:44:00 crc kubenswrapper[4786]: I1209 08:44:00.525644 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 08:44:00 crc kubenswrapper[4786]: I1209 08:44:00.525688 4786 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 09 08:44:00 crc kubenswrapper[4786]: I1209 08:44:00.525712 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 08:44:00 crc kubenswrapper[4786]: I1209 08:44:00.525807 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 08:44:00 crc kubenswrapper[4786]: I1209 08:44:00.526467 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:00 crc kubenswrapper[4786]: I1209 08:44:00.526497 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:00 crc kubenswrapper[4786]: I1209 08:44:00.526507 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:00 crc kubenswrapper[4786]: I1209 08:44:00.526714 4786 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:00 crc kubenswrapper[4786]: I1209 08:44:00.526785 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:00 crc kubenswrapper[4786]: I1209 08:44:00.526803 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:00 crc kubenswrapper[4786]: I1209 08:44:00.527468 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:00 crc kubenswrapper[4786]: I1209 08:44:00.527502 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:00 crc kubenswrapper[4786]: I1209 08:44:00.527520 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:01 crc kubenswrapper[4786]: I1209 08:44:01.316520 4786 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Dec 09 08:44:01 crc kubenswrapper[4786]: I1209 08:44:01.533904 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"74b17422905bc82f40afb7f46066985ffa94d33cba16d2fd3328bd6aa0df365b"} Dec 09 08:44:01 crc kubenswrapper[4786]: I1209 08:44:01.533965 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 08:44:01 crc kubenswrapper[4786]: I1209 08:44:01.533977 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 08:44:01 crc kubenswrapper[4786]: I1209 08:44:01.533983 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"941d38be9a1314a3460bc00d016f09a53b7634ce19d321301b9bbc8943d94b87"} Dec 09 08:44:01 crc kubenswrapper[4786]: I1209 08:44:01.534190 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 08:44:01 crc kubenswrapper[4786]: I1209 08:44:01.535089 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:01 crc kubenswrapper[4786]: I1209 08:44:01.535141 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:01 crc kubenswrapper[4786]: I1209 08:44:01.535153 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:01 crc kubenswrapper[4786]: I1209 08:44:01.535172 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:01 crc kubenswrapper[4786]: I1209 08:44:01.535198 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:01 crc kubenswrapper[4786]: I1209 08:44:01.535209 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:01 crc kubenswrapper[4786]: I1209 08:44:01.590652 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 08:44:01 crc kubenswrapper[4786]: I1209 08:44:01.592572 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:01 crc kubenswrapper[4786]: I1209 08:44:01.592625 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:01 crc kubenswrapper[4786]: I1209 08:44:01.592638 4786 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:01 crc kubenswrapper[4786]: I1209 08:44:01.592667 4786 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 09 08:44:02 crc kubenswrapper[4786]: I1209 08:44:02.536799 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 08:44:02 crc kubenswrapper[4786]: I1209 08:44:02.536928 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 08:44:02 crc kubenswrapper[4786]: I1209 08:44:02.539931 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:02 crc kubenswrapper[4786]: I1209 08:44:02.539989 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:02 crc kubenswrapper[4786]: I1209 08:44:02.540004 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:02 crc kubenswrapper[4786]: I1209 08:44:02.541247 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:02 crc kubenswrapper[4786]: I1209 08:44:02.541300 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:02 crc kubenswrapper[4786]: I1209 08:44:02.541313 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:03 crc kubenswrapper[4786]: I1209 08:44:03.068733 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Dec 09 08:44:03 crc kubenswrapper[4786]: I1209 08:44:03.453020 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 08:44:03 crc kubenswrapper[4786]: I1209 08:44:03.539764 4786 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Dec 09 08:44:03 crc kubenswrapper[4786]: I1209 08:44:03.541413 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 08:44:03 crc kubenswrapper[4786]: I1209 08:44:03.542547 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:03 crc kubenswrapper[4786]: I1209 08:44:03.542595 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:03 crc kubenswrapper[4786]: I1209 08:44:03.542608 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:03 crc kubenswrapper[4786]: I1209 08:44:03.543025 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:03 crc kubenswrapper[4786]: I1209 08:44:03.543068 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:03 crc kubenswrapper[4786]: I1209 08:44:03.543079 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:03 crc kubenswrapper[4786]: I1209 08:44:03.789844 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 09 08:44:03 crc kubenswrapper[4786]: I1209 08:44:03.790097 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 08:44:03 crc kubenswrapper[4786]: I1209 08:44:03.791891 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:03 crc kubenswrapper[4786]: I1209 08:44:03.791966 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:03 crc kubenswrapper[4786]: 
I1209 08:44:03.791982 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:04 crc kubenswrapper[4786]: I1209 08:44:04.115483 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 08:44:04 crc kubenswrapper[4786]: I1209 08:44:04.352122 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 08:44:04 crc kubenswrapper[4786]: I1209 08:44:04.353065 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 08:44:04 crc kubenswrapper[4786]: I1209 08:44:04.354624 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 09 08:44:04 crc kubenswrapper[4786]: I1209 08:44:04.355250 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:04 crc kubenswrapper[4786]: I1209 08:44:04.355332 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:04 crc kubenswrapper[4786]: I1209 08:44:04.355360 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:04 crc kubenswrapper[4786]: I1209 08:44:04.542785 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 08:44:04 crc kubenswrapper[4786]: I1209 08:44:04.542786 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 08:44:04 crc kubenswrapper[4786]: I1209 08:44:04.544570 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:04 crc kubenswrapper[4786]: I1209 08:44:04.544656 4786 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:04 crc kubenswrapper[4786]: I1209 08:44:04.544683 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:04 crc kubenswrapper[4786]: I1209 08:44:04.544733 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:04 crc kubenswrapper[4786]: I1209 08:44:04.544770 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:04 crc kubenswrapper[4786]: I1209 08:44:04.544787 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:05 crc kubenswrapper[4786]: I1209 08:44:05.364609 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 08:44:05 crc kubenswrapper[4786]: I1209 08:44:05.364861 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 08:44:05 crc kubenswrapper[4786]: I1209 08:44:05.366571 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:05 crc kubenswrapper[4786]: I1209 08:44:05.366636 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:05 crc kubenswrapper[4786]: I1209 08:44:05.366646 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:05 crc kubenswrapper[4786]: E1209 08:44:05.372148 4786 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 09 08:44:07 crc kubenswrapper[4786]: I1209 08:44:07.352861 4786 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller 
namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 09 08:44:07 crc kubenswrapper[4786]: I1209 08:44:07.352972 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 09 08:44:08 crc kubenswrapper[4786]: I1209 08:44:08.972496 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 08:44:08 crc kubenswrapper[4786]: I1209 08:44:08.972766 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 08:44:08 crc kubenswrapper[4786]: I1209 08:44:08.974842 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:08 crc kubenswrapper[4786]: I1209 08:44:08.974895 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:08 crc kubenswrapper[4786]: I1209 08:44:08.974937 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:08 crc kubenswrapper[4786]: I1209 08:44:08.979022 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 08:44:09 crc kubenswrapper[4786]: I1209 08:44:09.558394 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 08:44:09 crc kubenswrapper[4786]: I1209 08:44:09.560047 4786 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:09 crc kubenswrapper[4786]: I1209 08:44:09.560182 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:09 crc kubenswrapper[4786]: I1209 08:44:09.560256 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:11 crc kubenswrapper[4786]: I1209 08:44:11.053205 4786 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Dec 09 08:44:11 crc kubenswrapper[4786]: E1209 08:44:11.318685 4786 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError" Dec 09 08:44:11 crc kubenswrapper[4786]: E1209 08:44:11.322212 4786 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Dec 09 08:44:11 crc kubenswrapper[4786]: I1209 08:44:11.392919 4786 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:43110->192.168.126.11:17697: read: connection reset by peer" start-of-body= Dec 09 08:44:11 crc kubenswrapper[4786]: I1209 08:44:11.393025 4786 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:43110->192.168.126.11:17697: read: connection reset by peer" Dec 09 08:44:11 crc kubenswrapper[4786]: I1209 08:44:11.392919 4786 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:40430->192.168.126.11:17697: read: connection reset by peer" start-of-body= Dec 09 08:44:11 crc kubenswrapper[4786]: I1209 08:44:11.393117 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:40430->192.168.126.11:17697: read: connection reset by peer" Dec 09 08:44:11 crc kubenswrapper[4786]: E1209 08:44:11.594082 4786 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Dec 09 08:44:11 crc kubenswrapper[4786]: I1209 08:44:11.786279 4786 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 09 08:44:11 crc kubenswrapper[4786]: I1209 08:44:11.786474 4786 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 09 08:44:11 crc kubenswrapper[4786]: I1209 08:44:11.792511 4786 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 09 08:44:11 crc kubenswrapper[4786]: I1209 08:44:11.792622 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 09 08:44:12 crc kubenswrapper[4786]: I1209 08:44:12.570437 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 09 08:44:12 crc kubenswrapper[4786]: I1209 08:44:12.573255 4786 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3ebab9d599495fb35c0a606f0c5d7bf4d36cf20a69f9b7d671172ea16bcdbc8a" exitCode=255 Dec 09 08:44:12 crc kubenswrapper[4786]: I1209 08:44:12.573337 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"3ebab9d599495fb35c0a606f0c5d7bf4d36cf20a69f9b7d671172ea16bcdbc8a"} Dec 09 08:44:12 crc kubenswrapper[4786]: I1209 08:44:12.573727 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 08:44:12 crc kubenswrapper[4786]: I1209 
08:44:12.574772 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:12 crc kubenswrapper[4786]: I1209 08:44:12.574891 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:12 crc kubenswrapper[4786]: I1209 08:44:12.574964 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:12 crc kubenswrapper[4786]: I1209 08:44:12.575908 4786 scope.go:117] "RemoveContainer" containerID="3ebab9d599495fb35c0a606f0c5d7bf4d36cf20a69f9b7d671172ea16bcdbc8a" Dec 09 08:44:13 crc kubenswrapper[4786]: I1209 08:44:13.578870 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 09 08:44:13 crc kubenswrapper[4786]: I1209 08:44:13.581502 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e"} Dec 09 08:44:14 crc kubenswrapper[4786]: I1209 08:44:14.122204 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 08:44:14 crc kubenswrapper[4786]: I1209 08:44:14.388607 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 09 08:44:14 crc kubenswrapper[4786]: I1209 08:44:14.388996 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 08:44:14 crc kubenswrapper[4786]: I1209 08:44:14.391034 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:14 crc kubenswrapper[4786]: I1209 08:44:14.391096 4786 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:14 crc kubenswrapper[4786]: I1209 08:44:14.391115 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:14 crc kubenswrapper[4786]: I1209 08:44:14.407228 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 09 08:44:14 crc kubenswrapper[4786]: I1209 08:44:14.587199 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Dec 09 08:44:14 crc kubenswrapper[4786]: I1209 08:44:14.587630 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 09 08:44:14 crc kubenswrapper[4786]: I1209 08:44:14.589415 4786 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e" exitCode=255 Dec 09 08:44:14 crc kubenswrapper[4786]: I1209 08:44:14.589684 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 08:44:14 crc kubenswrapper[4786]: I1209 08:44:14.590488 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 08:44:14 crc kubenswrapper[4786]: I1209 08:44:14.590541 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e"} Dec 09 08:44:14 crc kubenswrapper[4786]: I1209 08:44:14.590628 4786 scope.go:117] "RemoveContainer" containerID="3ebab9d599495fb35c0a606f0c5d7bf4d36cf20a69f9b7d671172ea16bcdbc8a" Dec 09 08:44:14 crc 
kubenswrapper[4786]: I1209 08:44:14.593368 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:14 crc kubenswrapper[4786]: I1209 08:44:14.593409 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:14 crc kubenswrapper[4786]: I1209 08:44:14.593434 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:14 crc kubenswrapper[4786]: I1209 08:44:14.593560 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:14 crc kubenswrapper[4786]: I1209 08:44:14.593596 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:14 crc kubenswrapper[4786]: I1209 08:44:14.593611 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:14 crc kubenswrapper[4786]: I1209 08:44:14.594276 4786 scope.go:117] "RemoveContainer" containerID="ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e" Dec 09 08:44:14 crc kubenswrapper[4786]: E1209 08:44:14.594619 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Dec 09 08:44:14 crc kubenswrapper[4786]: I1209 08:44:14.596578 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 08:44:15 crc kubenswrapper[4786]: E1209 08:44:15.373038 4786 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed 
to get node info: node \"crc\" not found" Dec 09 08:44:15 crc kubenswrapper[4786]: I1209 08:44:15.593845 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Dec 09 08:44:15 crc kubenswrapper[4786]: I1209 08:44:15.595679 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 08:44:15 crc kubenswrapper[4786]: I1209 08:44:15.596496 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:15 crc kubenswrapper[4786]: I1209 08:44:15.596528 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:15 crc kubenswrapper[4786]: I1209 08:44:15.596537 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:15 crc kubenswrapper[4786]: I1209 08:44:15.597177 4786 scope.go:117] "RemoveContainer" containerID="ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e" Dec 09 08:44:15 crc kubenswrapper[4786]: E1209 08:44:15.597374 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Dec 09 08:44:16 crc kubenswrapper[4786]: I1209 08:44:16.597817 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 08:44:16 crc kubenswrapper[4786]: I1209 08:44:16.598764 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:16 crc kubenswrapper[4786]: I1209 
08:44:16.598894 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:16 crc kubenswrapper[4786]: I1209 08:44:16.598975 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:16 crc kubenswrapper[4786]: I1209 08:44:16.599689 4786 scope.go:117] "RemoveContainer" containerID="ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e" Dec 09 08:44:16 crc kubenswrapper[4786]: E1209 08:44:16.599940 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Dec 09 08:44:16 crc kubenswrapper[4786]: I1209 08:44:16.789593 4786 trace.go:236] Trace[1985590657]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (09-Dec-2025 08:44:03.617) (total time: 13171ms): Dec 09 08:44:16 crc kubenswrapper[4786]: Trace[1985590657]: ---"Objects listed" error: 13171ms (08:44:16.789) Dec 09 08:44:16 crc kubenswrapper[4786]: Trace[1985590657]: [13.171534925s] [13.171534925s] END Dec 09 08:44:16 crc kubenswrapper[4786]: I1209 08:44:16.789666 4786 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 09 08:44:16 crc kubenswrapper[4786]: I1209 08:44:16.789593 4786 trace.go:236] Trace[1378048355]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (09-Dec-2025 08:44:02.899) (total time: 13890ms): Dec 09 08:44:16 crc kubenswrapper[4786]: Trace[1378048355]: ---"Objects listed" error: 13890ms (08:44:16.789) Dec 09 08:44:16 crc kubenswrapper[4786]: Trace[1378048355]: [13.890423047s] [13.890423047s] END Dec 09 08:44:16 crc 
kubenswrapper[4786]: I1209 08:44:16.789790 4786 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 09 08:44:16 crc kubenswrapper[4786]: I1209 08:44:16.789593 4786 trace.go:236] Trace[617702330]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (09-Dec-2025 08:44:02.926) (total time: 13863ms): Dec 09 08:44:16 crc kubenswrapper[4786]: Trace[617702330]: ---"Objects listed" error: 13863ms (08:44:16.789) Dec 09 08:44:16 crc kubenswrapper[4786]: Trace[617702330]: [13.863294594s] [13.863294594s] END Dec 09 08:44:16 crc kubenswrapper[4786]: I1209 08:44:16.789856 4786 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 09 08:44:16 crc kubenswrapper[4786]: I1209 08:44:16.793587 4786 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 09 08:44:16 crc kubenswrapper[4786]: I1209 08:44:16.794832 4786 trace.go:236] Trace[1778864499]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (09-Dec-2025 08:44:02.177) (total time: 14617ms): Dec 09 08:44:16 crc kubenswrapper[4786]: Trace[1778864499]: ---"Objects listed" error: 14617ms (08:44:16.794) Dec 09 08:44:16 crc kubenswrapper[4786]: Trace[1778864499]: [14.617736874s] [14.617736874s] END Dec 09 08:44:16 crc kubenswrapper[4786]: I1209 08:44:16.795126 4786 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 09 08:44:16 crc kubenswrapper[4786]: I1209 08:44:16.849510 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 08:44:16 crc kubenswrapper[4786]: I1209 08:44:16.854765 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.057670 4786 apiserver.go:52] "Watching apiserver" Dec 09 
08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.060674 4786 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.061066 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-dns/node-resolver-mq8pp","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h"] Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.061519 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.061596 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.061652 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 08:44:17 crc kubenswrapper[4786]: E1209 08:44:17.061777 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.061875 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 09 08:44:17 crc kubenswrapper[4786]: E1209 08:44:17.062062 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.062161 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.062512 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.062621 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-mq8pp" Dec 09 08:44:17 crc kubenswrapper[4786]: E1209 08:44:17.062662 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.065179 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.066884 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.067006 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.067649 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.067725 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.067993 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.068329 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.068670 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.070253 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.070500 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 09 08:44:17 crc kubenswrapper[4786]: 
I1209 08:44:17.070668 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.071000 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.084087 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.102580 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.113401 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.122844 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.135939 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.148530 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.153027 4786 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.157797 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.168806 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d3d4420-fb67-443a-8b97-195c4cf223e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7aff8b9b062294cc403e12f60e4e694e626aa9780ae652380afc5d5ebeb25e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ce8749c52af3d1b2b156b9adce27a80b026abdb2dc22308199f1e822ec9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77efb0cd063e6d566f75078ce7a1f2cfd31508d5024a101ebcb3aa3c609aaae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac36b7f7721d7c5cdcb9e2254974dab062cca2723d4e723582061ed2696e04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.181316 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.190501 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mq8pp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a680f02-0367-4814-9c73-aa5959bf952f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98fg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mq8pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.196102 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.196147 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: 
\"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.196170 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.196189 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.196212 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.196227 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.196243 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.196259 4786 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.196274 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.196333 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.196370 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.196385 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.196402 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod 
\"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.196417 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.196455 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.196469 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.196486 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.196503 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.196537 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.196555 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.196576 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.196593 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.196609 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.196624 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod 
\"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.196602 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.196643 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.196659 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.196689 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.196710 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 
08:44:17.196726 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.196744 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.196765 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.196792 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.196812 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.196828 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.196844 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.196860 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.196882 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.196904 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.196935 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod 
\"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.196975 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.196998 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.197017 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.197042 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.197079 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.197109 4786 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.197135 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.197157 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.197176 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.197198 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.197220 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: 
\"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.197240 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.197274 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.197297 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.197318 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.197339 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.197359 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.197384 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.197406 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.197447 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.197490 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.197518 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 09 
08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.197543 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.197565 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.197587 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.197611 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.197624 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.197636 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.197661 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.197705 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.197719 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.197733 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.197758 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.197783 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.197800 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.197809 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.197861 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.197860 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.197893 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.197915 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.197935 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.197957 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.197977 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.197994 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.198009 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.198026 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.198043 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.198059 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.198060 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: 
"49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.198079 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.198097 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.198115 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.198133 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.198149 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.198155 4786 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.198165 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.198183 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.198198 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.198216 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.198233 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.198251 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.198269 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.198286 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.198336 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.198358 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.198390 4786 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.198409 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.198448 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.198471 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.198490 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.198510 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod 
\"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.198529 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.198549 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.198568 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.198585 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.198600 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.198616 4786 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.198631 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.198648 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.198665 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.198756 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.198775 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.198793 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.198809 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.198824 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.198842 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.198858 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.198875 4786 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.198894 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.198911 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.198929 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.198946 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.198964 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod 
\"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.198980 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.198995 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.199012 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.199030 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.199046 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.199067 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" 
(UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.199090 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.199112 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.199135 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.199152 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.199169 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") 
" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.199189 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.199207 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.199231 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.199248 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.199265 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.199281 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.199297 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.199319 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.199336 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.199352 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.199368 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.200106 4786 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.200146 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.200172 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.200198 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.200218 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.200235 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod 
\"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.200255 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.200277 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.200298 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.200316 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.200333 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.200351 4786 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.200371 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.200388 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.200405 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.200439 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.200458 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" 
(UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.200475 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.200495 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.200511 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.200529 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.200545 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.200562 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: 
\"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.200578 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.200606 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.200622 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.200640 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.200660 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 09 08:44:17 crc kubenswrapper[4786]: 
I1209 08:44:17.200678 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.200695 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.200714 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.200731 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.200752 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.200770 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.200787 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.200805 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.200822 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.200838 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.200854 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 
08:44:17.200871 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.200888 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.200906 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.200922 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.200950 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.200968 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: 
\"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.200987 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.201005 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.201023 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.201098 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.201122 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.201154 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.201172 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/9a680f02-0367-4814-9c73-aa5959bf952f-hosts-file\") pod \"node-resolver-mq8pp\" (UID: \"9a680f02-0367-4814-9c73-aa5959bf952f\") " pod="openshift-dns/node-resolver-mq8pp" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.201197 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.201220 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.201242 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.201262 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.201280 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.201303 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98fg6\" (UniqueName: \"kubernetes.io/projected/9a680f02-0367-4814-9c73-aa5959bf952f-kube-api-access-98fg6\") pod \"node-resolver-mq8pp\" (UID: \"9a680f02-0367-4814-9c73-aa5959bf952f\") " pod="openshift-dns/node-resolver-mq8pp" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.201322 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 
08:44:17.201343 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.201368 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.201386 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.201408 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.201462 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " 
pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.201581 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.201600 4786 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.201616 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.201631 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.201646 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.201658 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.201671 4786 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") 
on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.202058 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.198266 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.222697 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.198231 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.198490 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.198489 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.198520 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.198761 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.198890 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.199052 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.199178 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.199297 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.199313 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.199740 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.199865 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.201130 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.201226 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.201678 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.201709 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.201732 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.201764 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.202328 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.202354 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.202393 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.202670 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.202762 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.203470 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.203912 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.204523 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.204851 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.205037 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.205207 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.205235 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.205290 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.205299 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.205388 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.205502 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.205692 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.205800 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.205968 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.206457 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.206678 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.206897 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.207363 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.207451 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.207981 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.208016 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.208140 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.208207 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.208419 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.208724 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.208774 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.208918 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.208938 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.209327 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.209455 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.209645 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.209950 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.210414 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.211679 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.211893 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.212082 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.212325 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.212533 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.213170 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.213258 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.214072 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.214447 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.214895 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.215124 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.215139 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.216272 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.217327 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.219231 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.219258 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.219595 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.219810 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.220008 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.220464 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.220584 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.220671 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.220779 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.220804 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.220823 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.220933 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.221362 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.221733 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.222067 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.222589 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.222615 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.222729 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.222775 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.223205 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.224055 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.223384 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.224106 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.223531 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.223539 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.223813 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.223828 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.223888 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.224035 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.224279 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.224294 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.224376 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.224484 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.224739 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.224458 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.224840 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.224938 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.224948 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.225015 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.225155 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.225224 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.225273 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.225384 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.225388 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.225407 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: E1209 08:44:17.225502 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:44:17.72546622 +0000 UTC m=+23.609087446 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.225477 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.225521 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.225487 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.225585 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.225702 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.225704 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.225723 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.225705 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.225776 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.225807 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.225852 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.225888 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.225942 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.225490 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.226237 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.226510 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.226520 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.226517 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.226577 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.226804 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.226881 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.227073 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.227335 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.227582 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.227790 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.228028 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.228344 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.228412 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.228442 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.228863 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.228874 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.228988 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: E1209 08:44:17.229030 4786 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.229269 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: E1209 08:44:17.229287 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 08:44:17.729267123 +0000 UTC m=+23.612888419 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.229578 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: E1209 08:44:17.229770 4786 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 08:44:17 crc kubenswrapper[4786]: E1209 08:44:17.229854 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 08:44:17.729831976 +0000 UTC m=+23.613453262 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.229856 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.229916 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.229979 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.230275 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.230326 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.230414 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.230523 4786 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.230562 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.241055 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.241395 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.243282 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.287874 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.289421 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.289641 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.289741 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.290212 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.297061 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.302987 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.303133 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/9a680f02-0367-4814-9c73-aa5959bf952f-hosts-file\") pod \"node-resolver-mq8pp\" (UID: \"9a680f02-0367-4814-9c73-aa5959bf952f\") " pod="openshift-dns/node-resolver-mq8pp" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.303216 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.303306 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98fg6\" (UniqueName: \"kubernetes.io/projected/9a680f02-0367-4814-9c73-aa5959bf952f-kube-api-access-98fg6\") pod \"node-resolver-mq8pp\" (UID: 
\"9a680f02-0367-4814-9c73-aa5959bf952f\") " pod="openshift-dns/node-resolver-mq8pp" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.303420 4786 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.303589 4786 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.303653 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.303308 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.303184 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/9a680f02-0367-4814-9c73-aa5959bf952f-hosts-file\") pod \"node-resolver-mq8pp\" (UID: \"9a680f02-0367-4814-9c73-aa5959bf952f\") " pod="openshift-dns/node-resolver-mq8pp" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.303708 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 
08:44:17.303851 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.303910 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.303966 4786 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.304017 4786 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.304068 4786 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.304122 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.304178 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.304236 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: 
\"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.304295 4786 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.304366 4786 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.304448 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.304514 4786 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.304569 4786 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.304621 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.304672 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc 
kubenswrapper[4786]: I1209 08:44:17.304730 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.304786 4786 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.304844 4786 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.304898 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.304952 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.305005 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.305056 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.305109 4786 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.305168 4786 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.305221 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.305277 4786 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.305334 4786 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.305390 4786 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.305464 4786 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.305522 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.305579 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.305640 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.305697 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.305752 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.305814 4786 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.305869 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.305932 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.305985 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.306036 4786 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.306095 4786 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.306153 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.306205 4786 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.306260 4786 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.306317 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 
08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.306379 4786 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.306452 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.306512 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.306565 4786 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.306620 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.306677 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.306733 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.306793 4786 
reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.306850 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.306906 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.306958 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.307009 4786 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.307066 4786 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.307125 4786 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.307177 4786 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: 
\"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.307229 4786 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.307284 4786 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.307336 4786 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.307395 4786 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.307498 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.307561 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.307622 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.307681 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.307747 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.307839 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.307921 4786 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.307988 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.303104 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308051 4786 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" 
(UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308103 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308116 4786 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308126 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308136 4786 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308145 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308155 4786 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308163 4786 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath 
\"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308171 4786 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308180 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308189 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308197 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308206 4786 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308214 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308222 4786 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308230 4786 
reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308238 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308246 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308254 4786 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308264 4786 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308272 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308281 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308312 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308323 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308333 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308342 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308350 4786 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308360 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308369 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308385 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308393 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308400 4786 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308408 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308416 4786 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308442 4786 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308454 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308462 4786 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath 
\"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308470 4786 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308477 4786 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308488 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308496 4786 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308505 4786 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308513 4786 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308521 4786 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308530 4786 reconciler_common.go:293] 
"Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308537 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308545 4786 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308553 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308561 4786 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308569 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308576 4786 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308647 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: 
\"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308661 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308669 4786 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308678 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308688 4786 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308696 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308703 4786 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308711 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: 
\"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308719 4786 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308727 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308735 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308742 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308751 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308759 4786 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308767 4786 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 
crc kubenswrapper[4786]: I1209 08:44:17.308775 4786 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308783 4786 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308791 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308799 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308807 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308815 4786 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308823 4786 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308830 4786 reconciler_common.go:293] "Volume detached for volume 
\"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308839 4786 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308846 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308854 4786 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308863 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308871 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308879 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308887 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: 
\"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308895 4786 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308902 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308911 4786 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308919 4786 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308927 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308934 4786 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.308942 4786 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 
crc kubenswrapper[4786]: I1209 08:44:17.308949 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.311436 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.344278 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.344342 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.344502 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.344631 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.344760 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.345133 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.345143 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.345273 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.346294 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.346554 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.346605 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.346656 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.346845 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.346853 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.346959 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.347191 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.347657 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.348032 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.348269 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.357823 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.358118 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.358810 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.381266 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.387537 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.388154 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98fg6\" (UniqueName: \"kubernetes.io/projected/9a680f02-0367-4814-9c73-aa5959bf952f-kube-api-access-98fg6\") pod \"node-resolver-mq8pp\" (UID: \"9a680f02-0367-4814-9c73-aa5959bf952f\") " pod="openshift-dns/node-resolver-mq8pp" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.391564 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-mq8pp" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.410959 4786 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.411216 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.411225 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.411233 4786 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.411243 4786 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc 
kubenswrapper[4786]: I1209 08:44:17.411253 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.411261 4786 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.411270 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.411278 4786 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.411286 4786 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.411294 4786 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.411303 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.411310 4786 reconciler_common.go:293] "Volume 
detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.411319 4786 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.411326 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.411335 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.411343 4786 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.411351 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.411359 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.411368 4786 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.411376 4786 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.411386 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.411396 4786 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 09 08:44:17 crc kubenswrapper[4786]: E1209 08:44:17.412349 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 08:44:17 crc kubenswrapper[4786]: E1209 08:44:17.412483 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 08:44:17 crc kubenswrapper[4786]: E1209 08:44:17.412550 4786 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 08:44:17 crc kubenswrapper[4786]: E1209 08:44:17.412679 4786 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-09 08:44:17.912654503 +0000 UTC m=+23.796275799 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 08:44:17 crc kubenswrapper[4786]: E1209 08:44:17.412789 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 08:44:17 crc kubenswrapper[4786]: E1209 08:44:17.412814 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 08:44:17 crc kubenswrapper[4786]: E1209 08:44:17.412826 4786 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 08:44:17 crc kubenswrapper[4786]: E1209 08:44:17.412876 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-09 08:44:17.912858938 +0000 UTC m=+23.796480164 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.470342 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-86k5n"] Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.470867 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-27hfj"] Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.509867 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.512483 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.513028 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-zbwb7"] Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.513378 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-27hfj" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.515513 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-zbwb7" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.518486 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.519043 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.519278 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.521676 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.531052 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.531364 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.531389 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.531868 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.532139 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 09 08:44:17 crc 
kubenswrapper[4786]: I1209 08:44:17.532236 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.532794 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.532924 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.532969 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.560790 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.574904 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.592435 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.606304 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-mq8pp" event={"ID":"9a680f02-0367-4814-9c73-aa5959bf952f","Type":"ContainerStarted","Data":"89f9f40c61fcdfd0822cb638c5694559a19e6865b8fba16e97b0052106ef11cb"} Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.606354 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"1210ba6c6b3d8bcd3bb9827ebc46c7256405a78684b65fecfb2daba5b469d9bf"} Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.608361 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d3d4420-fb67-443a-8b97-195c4cf223e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7aff8b9b062294cc403e12f60e4e694e626aa9780ae652380afc5d5ebeb25e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\
"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ce8749c52af3d1b2b156b9adce27a80b026abdb2dc22308199f1e822ec9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77efb0cd063e6d566f75078ce7a1f2cfd31508d5024a101ebcb3aa3c609aaae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac36b7f7721d7c5cdcb9e2254974dab062cca2723d4e723582061ed2696e04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92ed
af5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:17 crc kubenswrapper[4786]: E1209 08:44:17.612264 4786 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.615350 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mq8pp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a680f02-0367-4814-9c73-aa5959bf952f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98fg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mq8pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.619813 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/60c502d4-7f9e-4d39-a197-fa70dc4a56d1-mcd-auth-proxy-config\") pod \"machine-config-daemon-86k5n\" (UID: \"60c502d4-7f9e-4d39-a197-fa70dc4a56d1\") " pod="openshift-machine-config-operator/machine-config-daemon-86k5n" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.619864 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/a0a865e2-8504-473d-a23f-fc682d053a9f-multus-cni-dir\") pod \"multus-27hfj\" (UID: \"a0a865e2-8504-473d-a23f-fc682d053a9f\") " pod="openshift-multus/multus-27hfj" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.619887 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9e48cbd1-e567-45b7-902a-fb4a0daa2fd3-tuning-conf-dir\") pod \"multus-additional-cni-plugins-zbwb7\" (UID: \"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\") " pod="openshift-multus/multus-additional-cni-plugins-zbwb7" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.619925 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/60c502d4-7f9e-4d39-a197-fa70dc4a56d1-rootfs\") pod \"machine-config-daemon-86k5n\" (UID: \"60c502d4-7f9e-4d39-a197-fa70dc4a56d1\") " pod="openshift-machine-config-operator/machine-config-daemon-86k5n" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.619946 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a0a865e2-8504-473d-a23f-fc682d053a9f-multus-daemon-config\") pod \"multus-27hfj\" (UID: \"a0a865e2-8504-473d-a23f-fc682d053a9f\") " pod="openshift-multus/multus-27hfj" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.619968 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9e48cbd1-e567-45b7-902a-fb4a0daa2fd3-os-release\") pod \"multus-additional-cni-plugins-zbwb7\" (UID: \"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\") " pod="openshift-multus/multus-additional-cni-plugins-zbwb7" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.620014 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-pcv4g\" (UniqueName: \"kubernetes.io/projected/a0a865e2-8504-473d-a23f-fc682d053a9f-kube-api-access-pcv4g\") pod \"multus-27hfj\" (UID: \"a0a865e2-8504-473d-a23f-fc682d053a9f\") " pod="openshift-multus/multus-27hfj" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.620037 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a0a865e2-8504-473d-a23f-fc682d053a9f-multus-conf-dir\") pod \"multus-27hfj\" (UID: \"a0a865e2-8504-473d-a23f-fc682d053a9f\") " pod="openshift-multus/multus-27hfj" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.620058 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9e48cbd1-e567-45b7-902a-fb4a0daa2fd3-cnibin\") pod \"multus-additional-cni-plugins-zbwb7\" (UID: \"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\") " pod="openshift-multus/multus-additional-cni-plugins-zbwb7" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.620079 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a0a865e2-8504-473d-a23f-fc682d053a9f-host-run-multus-certs\") pod \"multus-27hfj\" (UID: \"a0a865e2-8504-473d-a23f-fc682d053a9f\") " pod="openshift-multus/multus-27hfj" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.620113 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a0a865e2-8504-473d-a23f-fc682d053a9f-etc-kubernetes\") pod \"multus-27hfj\" (UID: \"a0a865e2-8504-473d-a23f-fc682d053a9f\") " pod="openshift-multus/multus-27hfj" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.620137 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9e48cbd1-e567-45b7-902a-fb4a0daa2fd3-system-cni-dir\") pod \"multus-additional-cni-plugins-zbwb7\" (UID: \"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\") " pod="openshift-multus/multus-additional-cni-plugins-zbwb7" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.620171 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a0a865e2-8504-473d-a23f-fc682d053a9f-host-var-lib-kubelet\") pod \"multus-27hfj\" (UID: \"a0a865e2-8504-473d-a23f-fc682d053a9f\") " pod="openshift-multus/multus-27hfj" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.620209 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9e48cbd1-e567-45b7-902a-fb4a0daa2fd3-cni-binary-copy\") pod \"multus-additional-cni-plugins-zbwb7\" (UID: \"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\") " pod="openshift-multus/multus-additional-cni-plugins-zbwb7" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.622863 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a0a865e2-8504-473d-a23f-fc682d053a9f-cnibin\") pod \"multus-27hfj\" (UID: \"a0a865e2-8504-473d-a23f-fc682d053a9f\") " pod="openshift-multus/multus-27hfj" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.622915 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a0a865e2-8504-473d-a23f-fc682d053a9f-os-release\") pod \"multus-27hfj\" (UID: \"a0a865e2-8504-473d-a23f-fc682d053a9f\") " pod="openshift-multus/multus-27hfj" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.622946 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9e48cbd1-e567-45b7-902a-fb4a0daa2fd3-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-zbwb7\" (UID: \"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\") " pod="openshift-multus/multus-additional-cni-plugins-zbwb7" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.622978 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/60c502d4-7f9e-4d39-a197-fa70dc4a56d1-proxy-tls\") pod \"machine-config-daemon-86k5n\" (UID: \"60c502d4-7f9e-4d39-a197-fa70dc4a56d1\") " pod="openshift-machine-config-operator/machine-config-daemon-86k5n" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.623002 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a0a865e2-8504-473d-a23f-fc682d053a9f-system-cni-dir\") pod \"multus-27hfj\" (UID: \"a0a865e2-8504-473d-a23f-fc682d053a9f\") " pod="openshift-multus/multus-27hfj" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.623025 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2bms\" (UniqueName: \"kubernetes.io/projected/9e48cbd1-e567-45b7-902a-fb4a0daa2fd3-kube-api-access-p2bms\") pod \"multus-additional-cni-plugins-zbwb7\" (UID: \"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\") " pod="openshift-multus/multus-additional-cni-plugins-zbwb7" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.623123 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wcct\" (UniqueName: \"kubernetes.io/projected/60c502d4-7f9e-4d39-a197-fa70dc4a56d1-kube-api-access-9wcct\") pod \"machine-config-daemon-86k5n\" (UID: \"60c502d4-7f9e-4d39-a197-fa70dc4a56d1\") " pod="openshift-machine-config-operator/machine-config-daemon-86k5n" Dec 09 08:44:17 crc 
kubenswrapper[4786]: I1209 08:44:17.623171 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a0a865e2-8504-473d-a23f-fc682d053a9f-host-var-lib-cni-multus\") pod \"multus-27hfj\" (UID: \"a0a865e2-8504-473d-a23f-fc682d053a9f\") " pod="openshift-multus/multus-27hfj" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.623191 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a0a865e2-8504-473d-a23f-fc682d053a9f-hostroot\") pod \"multus-27hfj\" (UID: \"a0a865e2-8504-473d-a23f-fc682d053a9f\") " pod="openshift-multus/multus-27hfj" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.623241 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a0a865e2-8504-473d-a23f-fc682d053a9f-cni-binary-copy\") pod \"multus-27hfj\" (UID: \"a0a865e2-8504-473d-a23f-fc682d053a9f\") " pod="openshift-multus/multus-27hfj" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.623267 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a0a865e2-8504-473d-a23f-fc682d053a9f-host-run-k8s-cni-cncf-io\") pod \"multus-27hfj\" (UID: \"a0a865e2-8504-473d-a23f-fc682d053a9f\") " pod="openshift-multus/multus-27hfj" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.623282 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a0a865e2-8504-473d-a23f-fc682d053a9f-host-var-lib-cni-bin\") pod \"multus-27hfj\" (UID: \"a0a865e2-8504-473d-a23f-fc682d053a9f\") " pod="openshift-multus/multus-27hfj" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.623304 
4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a0a865e2-8504-473d-a23f-fc682d053a9f-multus-socket-dir-parent\") pod \"multus-27hfj\" (UID: \"a0a865e2-8504-473d-a23f-fc682d053a9f\") " pod="openshift-multus/multus-27hfj" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.623320 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a0a865e2-8504-473d-a23f-fc682d053a9f-host-run-netns\") pod \"multus-27hfj\" (UID: \"a0a865e2-8504-473d-a23f-fc682d053a9f\") " pod="openshift-multus/multus-27hfj" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.633526 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c502d4-7f9e-4d39-a197-fa70dc4a56d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready 
status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86k5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: 
Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.649627 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.674749 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.680071 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.692303 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.705845 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.719783 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.723933 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a0a865e2-8504-473d-a23f-fc682d053a9f-host-var-lib-kubelet\") pod \"multus-27hfj\" (UID: \"a0a865e2-8504-473d-a23f-fc682d053a9f\") " pod="openshift-multus/multus-27hfj" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.723972 4786 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9e48cbd1-e567-45b7-902a-fb4a0daa2fd3-system-cni-dir\") pod \"multus-additional-cni-plugins-zbwb7\" (UID: \"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\") " pod="openshift-multus/multus-additional-cni-plugins-zbwb7" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.724005 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a0a865e2-8504-473d-a23f-fc682d053a9f-os-release\") pod \"multus-27hfj\" (UID: \"a0a865e2-8504-473d-a23f-fc682d053a9f\") " pod="openshift-multus/multus-27hfj" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.724029 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9e48cbd1-e567-45b7-902a-fb4a0daa2fd3-cni-binary-copy\") pod \"multus-additional-cni-plugins-zbwb7\" (UID: \"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\") " pod="openshift-multus/multus-additional-cni-plugins-zbwb7" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.724049 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a0a865e2-8504-473d-a23f-fc682d053a9f-cnibin\") pod \"multus-27hfj\" (UID: \"a0a865e2-8504-473d-a23f-fc682d053a9f\") " pod="openshift-multus/multus-27hfj" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.724069 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9e48cbd1-e567-45b7-902a-fb4a0daa2fd3-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-zbwb7\" (UID: \"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\") " pod="openshift-multus/multus-additional-cni-plugins-zbwb7" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.724092 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"proxy-tls\" (UniqueName: \"kubernetes.io/secret/60c502d4-7f9e-4d39-a197-fa70dc4a56d1-proxy-tls\") pod \"machine-config-daemon-86k5n\" (UID: \"60c502d4-7f9e-4d39-a197-fa70dc4a56d1\") " pod="openshift-machine-config-operator/machine-config-daemon-86k5n" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.724099 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/a0a865e2-8504-473d-a23f-fc682d053a9f-host-var-lib-kubelet\") pod \"multus-27hfj\" (UID: \"a0a865e2-8504-473d-a23f-fc682d053a9f\") " pod="openshift-multus/multus-27hfj" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.724185 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9e48cbd1-e567-45b7-902a-fb4a0daa2fd3-system-cni-dir\") pod \"multus-additional-cni-plugins-zbwb7\" (UID: \"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\") " pod="openshift-multus/multus-additional-cni-plugins-zbwb7" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.724114 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a0a865e2-8504-473d-a23f-fc682d053a9f-system-cni-dir\") pod \"multus-27hfj\" (UID: \"a0a865e2-8504-473d-a23f-fc682d053a9f\") " pod="openshift-multus/multus-27hfj" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.724203 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/a0a865e2-8504-473d-a23f-fc682d053a9f-os-release\") pod \"multus-27hfj\" (UID: \"a0a865e2-8504-473d-a23f-fc682d053a9f\") " pod="openshift-multus/multus-27hfj" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.724223 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2bms\" (UniqueName: 
\"kubernetes.io/projected/9e48cbd1-e567-45b7-902a-fb4a0daa2fd3-kube-api-access-p2bms\") pod \"multus-additional-cni-plugins-zbwb7\" (UID: \"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\") " pod="openshift-multus/multus-additional-cni-plugins-zbwb7" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.724284 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wcct\" (UniqueName: \"kubernetes.io/projected/60c502d4-7f9e-4d39-a197-fa70dc4a56d1-kube-api-access-9wcct\") pod \"machine-config-daemon-86k5n\" (UID: \"60c502d4-7f9e-4d39-a197-fa70dc4a56d1\") " pod="openshift-machine-config-operator/machine-config-daemon-86k5n" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.724587 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/a0a865e2-8504-473d-a23f-fc682d053a9f-cnibin\") pod \"multus-27hfj\" (UID: \"a0a865e2-8504-473d-a23f-fc682d053a9f\") " pod="openshift-multus/multus-27hfj" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.724301 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a0a865e2-8504-473d-a23f-fc682d053a9f-host-var-lib-cni-multus\") pod \"multus-27hfj\" (UID: \"a0a865e2-8504-473d-a23f-fc682d053a9f\") " pod="openshift-multus/multus-27hfj" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.724666 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a0a865e2-8504-473d-a23f-fc682d053a9f-hostroot\") pod \"multus-27hfj\" (UID: \"a0a865e2-8504-473d-a23f-fc682d053a9f\") " pod="openshift-multus/multus-27hfj" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.724735 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a0a865e2-8504-473d-a23f-fc682d053a9f-system-cni-dir\") pod 
\"multus-27hfj\" (UID: \"a0a865e2-8504-473d-a23f-fc682d053a9f\") " pod="openshift-multus/multus-27hfj" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.724791 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/a0a865e2-8504-473d-a23f-fc682d053a9f-hostroot\") pod \"multus-27hfj\" (UID: \"a0a865e2-8504-473d-a23f-fc682d053a9f\") " pod="openshift-multus/multus-27hfj" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.724759 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a0a865e2-8504-473d-a23f-fc682d053a9f-cni-binary-copy\") pod \"multus-27hfj\" (UID: \"a0a865e2-8504-473d-a23f-fc682d053a9f\") " pod="openshift-multus/multus-27hfj" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.724958 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/a0a865e2-8504-473d-a23f-fc682d053a9f-host-var-lib-cni-multus\") pod \"multus-27hfj\" (UID: \"a0a865e2-8504-473d-a23f-fc682d053a9f\") " pod="openshift-multus/multus-27hfj" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.725003 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a0a865e2-8504-473d-a23f-fc682d053a9f-host-run-k8s-cni-cncf-io\") pod \"multus-27hfj\" (UID: \"a0a865e2-8504-473d-a23f-fc682d053a9f\") " pod="openshift-multus/multus-27hfj" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.725034 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a0a865e2-8504-473d-a23f-fc682d053a9f-host-var-lib-cni-bin\") pod \"multus-27hfj\" (UID: \"a0a865e2-8504-473d-a23f-fc682d053a9f\") " pod="openshift-multus/multus-27hfj" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 
08:44:17.725061 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a0a865e2-8504-473d-a23f-fc682d053a9f-multus-socket-dir-parent\") pod \"multus-27hfj\" (UID: \"a0a865e2-8504-473d-a23f-fc682d053a9f\") " pod="openshift-multus/multus-27hfj" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.725082 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a0a865e2-8504-473d-a23f-fc682d053a9f-host-run-netns\") pod \"multus-27hfj\" (UID: \"a0a865e2-8504-473d-a23f-fc682d053a9f\") " pod="openshift-multus/multus-27hfj" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.725079 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/a0a865e2-8504-473d-a23f-fc682d053a9f-host-run-k8s-cni-cncf-io\") pod \"multus-27hfj\" (UID: \"a0a865e2-8504-473d-a23f-fc682d053a9f\") " pod="openshift-multus/multus-27hfj" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.725108 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/60c502d4-7f9e-4d39-a197-fa70dc4a56d1-mcd-auth-proxy-config\") pod \"machine-config-daemon-86k5n\" (UID: \"60c502d4-7f9e-4d39-a197-fa70dc4a56d1\") " pod="openshift-machine-config-operator/machine-config-daemon-86k5n" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.725114 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/a0a865e2-8504-473d-a23f-fc682d053a9f-host-var-lib-cni-bin\") pod \"multus-27hfj\" (UID: \"a0a865e2-8504-473d-a23f-fc682d053a9f\") " pod="openshift-multus/multus-27hfj" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.725138 4786 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a0a865e2-8504-473d-a23f-fc682d053a9f-multus-cni-dir\") pod \"multus-27hfj\" (UID: \"a0a865e2-8504-473d-a23f-fc682d053a9f\") " pod="openshift-multus/multus-27hfj" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.725144 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/a0a865e2-8504-473d-a23f-fc682d053a9f-host-run-netns\") pod \"multus-27hfj\" (UID: \"a0a865e2-8504-473d-a23f-fc682d053a9f\") " pod="openshift-multus/multus-27hfj" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.725217 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9e48cbd1-e567-45b7-902a-fb4a0daa2fd3-tuning-conf-dir\") pod \"multus-additional-cni-plugins-zbwb7\" (UID: \"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\") " pod="openshift-multus/multus-additional-cni-plugins-zbwb7" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.725231 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/a0a865e2-8504-473d-a23f-fc682d053a9f-multus-socket-dir-parent\") pod \"multus-27hfj\" (UID: \"a0a865e2-8504-473d-a23f-fc682d053a9f\") " pod="openshift-multus/multus-27hfj" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.725267 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/60c502d4-7f9e-4d39-a197-fa70dc4a56d1-rootfs\") pod \"machine-config-daemon-86k5n\" (UID: \"60c502d4-7f9e-4d39-a197-fa70dc4a56d1\") " pod="openshift-machine-config-operator/machine-config-daemon-86k5n" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.725281 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/9e48cbd1-e567-45b7-902a-fb4a0daa2fd3-cni-binary-copy\") pod \"multus-additional-cni-plugins-zbwb7\" (UID: \"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\") " pod="openshift-multus/multus-additional-cni-plugins-zbwb7" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.725291 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a0a865e2-8504-473d-a23f-fc682d053a9f-multus-daemon-config\") pod \"multus-27hfj\" (UID: \"a0a865e2-8504-473d-a23f-fc682d053a9f\") " pod="openshift-multus/multus-27hfj" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.725341 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9e48cbd1-e567-45b7-902a-fb4a0daa2fd3-os-release\") pod \"multus-additional-cni-plugins-zbwb7\" (UID: \"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\") " pod="openshift-multus/multus-additional-cni-plugins-zbwb7" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.725372 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pcv4g\" (UniqueName: \"kubernetes.io/projected/a0a865e2-8504-473d-a23f-fc682d053a9f-kube-api-access-pcv4g\") pod \"multus-27hfj\" (UID: \"a0a865e2-8504-473d-a23f-fc682d053a9f\") " pod="openshift-multus/multus-27hfj" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.725397 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a0a865e2-8504-473d-a23f-fc682d053a9f-multus-conf-dir\") pod \"multus-27hfj\" (UID: \"a0a865e2-8504-473d-a23f-fc682d053a9f\") " pod="openshift-multus/multus-27hfj" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.725419 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9e48cbd1-e567-45b7-902a-fb4a0daa2fd3-cnibin\") 
pod \"multus-additional-cni-plugins-zbwb7\" (UID: \"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\") " pod="openshift-multus/multus-additional-cni-plugins-zbwb7" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.725482 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a0a865e2-8504-473d-a23f-fc682d053a9f-host-run-multus-certs\") pod \"multus-27hfj\" (UID: \"a0a865e2-8504-473d-a23f-fc682d053a9f\") " pod="openshift-multus/multus-27hfj" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.725505 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a0a865e2-8504-473d-a23f-fc682d053a9f-etc-kubernetes\") pod \"multus-27hfj\" (UID: \"a0a865e2-8504-473d-a23f-fc682d053a9f\") " pod="openshift-multus/multus-27hfj" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.725564 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/a0a865e2-8504-473d-a23f-fc682d053a9f-etc-kubernetes\") pod \"multus-27hfj\" (UID: \"a0a865e2-8504-473d-a23f-fc682d053a9f\") " pod="openshift-multus/multus-27hfj" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.725603 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/a0a865e2-8504-473d-a23f-fc682d053a9f-multus-conf-dir\") pod \"multus-27hfj\" (UID: \"a0a865e2-8504-473d-a23f-fc682d053a9f\") " pod="openshift-multus/multus-27hfj" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.725629 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9e48cbd1-e567-45b7-902a-fb4a0daa2fd3-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-zbwb7\" (UID: \"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\") " 
pod="openshift-multus/multus-additional-cni-plugins-zbwb7" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.725634 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9e48cbd1-e567-45b7-902a-fb4a0daa2fd3-cnibin\") pod \"multus-additional-cni-plugins-zbwb7\" (UID: \"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\") " pod="openshift-multus/multus-additional-cni-plugins-zbwb7" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.725409 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/a0a865e2-8504-473d-a23f-fc682d053a9f-multus-cni-dir\") pod \"multus-27hfj\" (UID: \"a0a865e2-8504-473d-a23f-fc682d053a9f\") " pod="openshift-multus/multus-27hfj" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.725658 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/a0a865e2-8504-473d-a23f-fc682d053a9f-host-run-multus-certs\") pod \"multus-27hfj\" (UID: \"a0a865e2-8504-473d-a23f-fc682d053a9f\") " pod="openshift-multus/multus-27hfj" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.725683 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/60c502d4-7f9e-4d39-a197-fa70dc4a56d1-rootfs\") pod \"machine-config-daemon-86k5n\" (UID: \"60c502d4-7f9e-4d39-a197-fa70dc4a56d1\") " pod="openshift-machine-config-operator/machine-config-daemon-86k5n" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.725730 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9e48cbd1-e567-45b7-902a-fb4a0daa2fd3-os-release\") pod \"multus-additional-cni-plugins-zbwb7\" (UID: \"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\") " pod="openshift-multus/multus-additional-cni-plugins-zbwb7" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 
08:44:17.725957 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/a0a865e2-8504-473d-a23f-fc682d053a9f-multus-daemon-config\") pod \"multus-27hfj\" (UID: \"a0a865e2-8504-473d-a23f-fc682d053a9f\") " pod="openshift-multus/multus-27hfj" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.725955 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/60c502d4-7f9e-4d39-a197-fa70dc4a56d1-mcd-auth-proxy-config\") pod \"machine-config-daemon-86k5n\" (UID: \"60c502d4-7f9e-4d39-a197-fa70dc4a56d1\") " pod="openshift-machine-config-operator/machine-config-daemon-86k5n" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.727001 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/a0a865e2-8504-473d-a23f-fc682d053a9f-cni-binary-copy\") pod \"multus-27hfj\" (UID: \"a0a865e2-8504-473d-a23f-fc682d053a9f\") " pod="openshift-multus/multus-27hfj" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.728858 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mq8pp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a680f02-0367-4814-9c73-aa5959bf952f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98fg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mq8pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.732735 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/60c502d4-7f9e-4d39-a197-fa70dc4a56d1-proxy-tls\") pod \"machine-config-daemon-86k5n\" (UID: \"60c502d4-7f9e-4d39-a197-fa70dc4a56d1\") " pod="openshift-machine-config-operator/machine-config-daemon-86k5n" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.740543 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.742072 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pcv4g\" (UniqueName: \"kubernetes.io/projected/a0a865e2-8504-473d-a23f-fc682d053a9f-kube-api-access-pcv4g\") pod \"multus-27hfj\" (UID: \"a0a865e2-8504-473d-a23f-fc682d053a9f\") " pod="openshift-multus/multus-27hfj" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.742911 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wcct\" (UniqueName: \"kubernetes.io/projected/60c502d4-7f9e-4d39-a197-fa70dc4a56d1-kube-api-access-9wcct\") pod \"machine-config-daemon-86k5n\" (UID: \"60c502d4-7f9e-4d39-a197-fa70dc4a56d1\") " pod="openshift-machine-config-operator/machine-config-daemon-86k5n" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.745072 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2bms\" (UniqueName: \"kubernetes.io/projected/9e48cbd1-e567-45b7-902a-fb4a0daa2fd3-kube-api-access-p2bms\") pod \"multus-additional-cni-plugins-zbwb7\" (UID: \"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\") " 
pod="openshift-multus/multus-additional-cni-plugins-zbwb7" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.745113 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9e48cbd1-e567-45b7-902a-fb4a0daa2fd3-tuning-conf-dir\") pod \"multus-additional-cni-plugins-zbwb7\" (UID: \"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\") " pod="openshift-multus/multus-additional-cni-plugins-zbwb7" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.753792 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.776090 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-27hfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0a865e2-8504-473d-a23f-fc682d053a9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-27hfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.787268 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbwb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbwb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.796296 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d3d4420-fb67-443a-8b97-195c4cf223e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7aff8b9b062294cc403e12f60e4e694e626aa9780ae652380afc5d5ebeb25e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ce8749c52af3d1b2b156b9adce27a80b026abdb2dc22308199f1e822ec9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77efb0cd063e6d566f75078ce7a1f2cfd31508d5024a101ebcb3aa3c609aaae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac36b7f7721d7c5cdcb9e2254974dab062cca2723d4e723582061ed2696e04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.805871 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.820208 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.825991 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.826077 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.826123 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:44:17 crc kubenswrapper[4786]: E1209 08:44:17.826161 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:44:18.826140883 +0000 UTC m=+24.709762109 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:44:17 crc kubenswrapper[4786]: E1209 08:44:17.826197 4786 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 08:44:17 crc kubenswrapper[4786]: E1209 08:44:17.826234 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 08:44:18.826226065 +0000 UTC m=+24.709847291 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 08:44:17 crc kubenswrapper[4786]: E1209 08:44:17.826238 4786 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 08:44:17 crc kubenswrapper[4786]: E1209 08:44:17.826263 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 08:44:18.826257246 +0000 UTC m=+24.709878472 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.833063 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.839987 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-7sr4q"] Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.841625 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.843983 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.844280 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.844355 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.844379 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.844448 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.844528 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.844523 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c502d4-7f9e-4d39-a197-fa70dc4a56d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86k5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.845514 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.854527 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.856025 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.864079 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.873984 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-27hfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0a865e2-8504-473d-a23f-fc682d053a9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready 
status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-27hfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.885481 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-27hfj" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.889125 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.893477 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-zbwb7" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.901907 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c502d4-7f9e-4d39-a197-fa70dc4a56d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86k5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.913713 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbwb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbwb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.922537 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d3d4420-fb67-443a-8b97-195c4cf223e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7aff8b9b062294cc403e12f60e4e694e626aa9780ae652380afc5d5ebeb25e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b
6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ce8749c52af3d1b2b156b9adce27a80b026abdb2dc22308199f1e822ec9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77efb0cd063e6d566f75078ce7a1f2cfd31508d5024a101ebcb3aa3c609aaae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kuberne
tes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac36b7f7721d7c5cdcb9e2254974dab062cca2723d4e723582061ed2696e04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.927442 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-host-cni-bin\") pod \"ovnkube-node-7sr4q\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 
09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.927485 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-ovnkube-config\") pod \"ovnkube-node-7sr4q\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.927502 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-env-overrides\") pod \"ovnkube-node-7sr4q\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.927526 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-host-run-netns\") pod \"ovnkube-node-7sr4q\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.927542 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-node-log\") pod \"ovnkube-node-7sr4q\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.927557 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-log-socket\") pod \"ovnkube-node-7sr4q\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 
08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.927577 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.927620 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-host-slash\") pod \"ovnkube-node-7sr4q\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.927652 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-run-openvswitch\") pod \"ovnkube-node-7sr4q\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:17 crc kubenswrapper[4786]: E1209 08:44:17.927671 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 08:44:17 crc kubenswrapper[4786]: E1209 08:44:17.927686 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 08:44:17 crc kubenswrapper[4786]: E1209 08:44:17.927696 4786 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not 
registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.927671 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-run-ovn\") pod \"ovnkube-node-7sr4q\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:17 crc kubenswrapper[4786]: E1209 08:44:17.927734 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-09 08:44:18.927720345 +0000 UTC m=+24.811341571 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.927749 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksqff\" (UniqueName: \"kubernetes.io/projected/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-kube-api-access-ksqff\") pod \"ovnkube-node-7sr4q\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.927768 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-ovnkube-script-lib\") pod 
\"ovnkube-node-7sr4q\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.927802 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-host-run-ovn-kubernetes\") pod \"ovnkube-node-7sr4q\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.927821 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.927838 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-host-cni-netd\") pod \"ovnkube-node-7sr4q\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.927854 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7sr4q\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.927869 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" 
(UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-host-kubelet\") pod \"ovnkube-node-7sr4q\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.927885 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-var-lib-openvswitch\") pod \"ovnkube-node-7sr4q\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.927898 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-etc-openvswitch\") pod \"ovnkube-node-7sr4q\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.927913 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-ovn-node-metrics-cert\") pod \"ovnkube-node-7sr4q\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.927931 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-systemd-units\") pod \"ovnkube-node-7sr4q\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.927946 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-run-systemd\") pod \"ovnkube-node-7sr4q\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:17 crc kubenswrapper[4786]: E1209 08:44:17.928017 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 08:44:17 crc kubenswrapper[4786]: E1209 08:44:17.928030 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 08:44:17 crc kubenswrapper[4786]: E1209 08:44:17.928044 4786 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 08:44:17 crc kubenswrapper[4786]: E1209 08:44:17.928089 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-09 08:44:18.928078514 +0000 UTC m=+24.811699740 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.932717 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.942667 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.952491 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.968091 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fals
e,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/v
ar/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvs
witch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7sr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.976397 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mq8pp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a680f02-0367-4814-9c73-aa5959bf952f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98fg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mq8pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.994531 4786 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.995986 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.996022 4786 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.996032 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:17 crc kubenswrapper[4786]: I1209 08:44:17.996136 4786 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.003944 4786 kubelet_node_status.go:115] "Node was previously registered" node="crc" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.004220 4786 kubelet_node_status.go:79] "Successfully registered node" node="crc" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.005281 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.005323 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.005333 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.005346 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.005367 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:18Z","lastTransitionTime":"2025-12-09T08:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:18 crc kubenswrapper[4786]: E1209 08:44:18.017412 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"20132b6b-eea8-47f7-95fa-f658d05fe362\\\",\\\"systemUUID\\\":\\\"d02a19a2-cd20-41fc-84e9-65362968df1f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.020378 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.020437 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.020452 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.020477 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.020489 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:18Z","lastTransitionTime":"2025-12-09T08:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.028813 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-systemd-units\") pod \"ovnkube-node-7sr4q\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.028851 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-run-systemd\") pod \"ovnkube-node-7sr4q\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.028869 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-host-cni-bin\") pod \"ovnkube-node-7sr4q\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.028902 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-ovnkube-config\") pod \"ovnkube-node-7sr4q\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.028919 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-env-overrides\") pod \"ovnkube-node-7sr4q\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 
08:44:18.028942 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-node-log\") pod \"ovnkube-node-7sr4q\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.028943 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-systemd-units\") pod \"ovnkube-node-7sr4q\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.028968 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-host-run-netns\") pod \"ovnkube-node-7sr4q\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.029016 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-host-run-netns\") pod \"ovnkube-node-7sr4q\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.029033 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-log-socket\") pod \"ovnkube-node-7sr4q\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.029016 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-log-socket\") pod \"ovnkube-node-7sr4q\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.029062 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-host-slash\") pod \"ovnkube-node-7sr4q\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.029080 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-run-openvswitch\") pod \"ovnkube-node-7sr4q\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.029073 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-run-systemd\") pod \"ovnkube-node-7sr4q\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.029127 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-host-cni-bin\") pod \"ovnkube-node-7sr4q\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.029167 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-run-ovn\") pod \"ovnkube-node-7sr4q\" (UID: 
\"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.029186 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksqff\" (UniqueName: \"kubernetes.io/projected/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-kube-api-access-ksqff\") pod \"ovnkube-node-7sr4q\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.029206 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-ovnkube-script-lib\") pod \"ovnkube-node-7sr4q\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.029230 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-host-run-ovn-kubernetes\") pod \"ovnkube-node-7sr4q\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.029267 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-host-cni-netd\") pod \"ovnkube-node-7sr4q\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.029285 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-host-var-lib-cni-networks-ovn-kubernetes\") pod 
\"ovnkube-node-7sr4q\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.029302 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-host-kubelet\") pod \"ovnkube-node-7sr4q\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.029319 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-var-lib-openvswitch\") pod \"ovnkube-node-7sr4q\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.029333 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-etc-openvswitch\") pod \"ovnkube-node-7sr4q\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.029348 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-ovn-node-metrics-cert\") pod \"ovnkube-node-7sr4q\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.029613 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-node-log\") pod \"ovnkube-node-7sr4q\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.029650 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-host-kubelet\") pod \"ovnkube-node-7sr4q\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.029675 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-host-cni-netd\") pod \"ovnkube-node-7sr4q\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.029697 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-7sr4q\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.029702 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-var-lib-openvswitch\") pod \"ovnkube-node-7sr4q\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.029777 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-host-run-ovn-kubernetes\") pod \"ovnkube-node-7sr4q\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.029832 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-run-openvswitch\") pod \"ovnkube-node-7sr4q\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.029855 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-run-ovn\") pod \"ovnkube-node-7sr4q\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.029882 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-etc-openvswitch\") pod \"ovnkube-node-7sr4q\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.029925 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-host-slash\") pod \"ovnkube-node-7sr4q\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.030101 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-env-overrides\") pod \"ovnkube-node-7sr4q\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.030244 4786 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-ovnkube-config\") pod \"ovnkube-node-7sr4q\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.030714 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-ovnkube-script-lib\") pod \"ovnkube-node-7sr4q\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:18 crc kubenswrapper[4786]: E1209 08:44:18.031938 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"20132b6b-eea8-47f7-95fa-f658d05fe362\\\",\\\"systemUUID\\\":\\\"d02a19a2-cd20-41fc-84e9-65362968df1f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.032109 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-ovn-node-metrics-cert\") pod \"ovnkube-node-7sr4q\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.035376 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.035400 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.035410 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.035444 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.035457 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:18Z","lastTransitionTime":"2025-12-09T08:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.046789 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksqff\" (UniqueName: \"kubernetes.io/projected/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-kube-api-access-ksqff\") pod \"ovnkube-node-7sr4q\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:18 crc kubenswrapper[4786]: E1209 08:44:18.048822 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"20132b6b-eea8-47f7-95fa-f658d05fe362\\\",\\\"systemUUID\\\":\\\"d02a19a2-cd20-41fc-84e9-65362968df1f\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.052543 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.052649 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.052790 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.052899 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.053007 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:18Z","lastTransitionTime":"2025-12-09T08:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:18 crc kubenswrapper[4786]: E1209 08:44:18.063296 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"20132b6b-eea8-47f7-95fa-f658d05fe362\\\",\\\"systemUUID\\\":\\\"d02a19a2-cd20-41fc-84e9-65362968df1f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.067324 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.067372 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.067384 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.067404 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.067417 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:18Z","lastTransitionTime":"2025-12-09T08:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:18 crc kubenswrapper[4786]: E1209 08:44:18.077843 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"20132b6b-eea8-47f7-95fa-f658d05fe362\\\",\\\"systemUUID\\\":\\\"d02a19a2-cd20-41fc-84e9-65362968df1f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:18 crc kubenswrapper[4786]: E1209 08:44:18.078045 4786 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.080104 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.080143 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.080152 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.080167 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.080176 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:18Z","lastTransitionTime":"2025-12-09T08:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:18 crc kubenswrapper[4786]: W1209 08:44:18.155075 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0a865e2_8504_473d_a23f_fc682d053a9f.slice/crio-d2021ff18e5b707abd636e18760e2958510f56b03b15f655e8bda919b9f56ffa WatchSource:0}: Error finding container d2021ff18e5b707abd636e18760e2958510f56b03b15f655e8bda919b9f56ffa: Status 404 returned error can't find the container with id d2021ff18e5b707abd636e18760e2958510f56b03b15f655e8bda919b9f56ffa Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.155151 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:18 crc kubenswrapper[4786]: W1209 08:44:18.172591 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e48cbd1_e567_45b7_902a_fb4a0daa2fd3.slice/crio-cd70b95454e916f0ee93c2ed91857922b6593b96a33da751a99a01349aadb154 WatchSource:0}: Error finding container cd70b95454e916f0ee93c2ed91857922b6593b96a33da751a99a01349aadb154: Status 404 returned error can't find the container with id cd70b95454e916f0ee93c2ed91857922b6593b96a33da751a99a01349aadb154 Dec 09 08:44:18 crc kubenswrapper[4786]: W1209 08:44:18.174385 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8ebe4be_af09_4f22_9dee_af5f7d34bccf.slice/crio-543eeac1d6d5cfb5a9b577a58da08d0eee8f989f998d9a38d44d67f1fdc0e74f WatchSource:0}: Error finding container 543eeac1d6d5cfb5a9b577a58da08d0eee8f989f998d9a38d44d67f1fdc0e74f: Status 404 returned error can't find the container with id 543eeac1d6d5cfb5a9b577a58da08d0eee8f989f998d9a38d44d67f1fdc0e74f Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.188975 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.189005 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.189013 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.189026 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.189039 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:18Z","lastTransitionTime":"2025-12-09T08:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.291925 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.291996 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.292007 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.292023 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.292034 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:18Z","lastTransitionTime":"2025-12-09T08:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.415217 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.415267 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.415282 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.415299 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.415310 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:18Z","lastTransitionTime":"2025-12-09T08:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.517698 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.517984 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.517995 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.518010 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.518021 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:18Z","lastTransitionTime":"2025-12-09T08:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.609399 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-27hfj" event={"ID":"a0a865e2-8504-473d-a23f-fc682d053a9f","Type":"ContainerStarted","Data":"2e7465beaa6a3df0b0ae7338d3035b58b92078b8d0b9646f5b59eccb66b4a505"} Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.609480 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-27hfj" event={"ID":"a0a865e2-8504-473d-a23f-fc682d053a9f","Type":"ContainerStarted","Data":"d2021ff18e5b707abd636e18760e2958510f56b03b15f655e8bda919b9f56ffa"} Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.611531 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" event={"ID":"60c502d4-7f9e-4d39-a197-fa70dc4a56d1","Type":"ContainerStarted","Data":"c2d2cb4aa779f69cd01c7af8dd377a93fa75bc95ded7acd32a891de283722a11"} Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.611575 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" event={"ID":"60c502d4-7f9e-4d39-a197-fa70dc4a56d1","Type":"ContainerStarted","Data":"de20e796c231cbc6d525978434351e46d932f7d99236a97c078b8147ce5dabd5"} Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.611590 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" event={"ID":"60c502d4-7f9e-4d39-a197-fa70dc4a56d1","Type":"ContainerStarted","Data":"b99523053db6d66389157667cadf04564682e095bd81edabdeeec5d655bf2e08"} Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.613567 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"4192277cec4bacdf5b03306055ddad9476d291a086f54f61e8e84690c81bb459"} 
Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.613604 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"a35f4e368a8e6d797074dcec93cc14482dcb8fa2c013058cc16d0d884f85e426"} Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.614826 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"a4f9f28fbcfbb43fb1b1a7435a20a3bf3de112d5bead6b556b71f2564ba2d609"} Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.616659 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"50593d0d53a0888b66905e18d3ef9ae44e67a9d290aad760710060fb3ed14393"} Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.616700 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3e4ad85cbd79a4967d59f069ebadb27a1cd79229d287c2f1c9955396c71a1506"} Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.620157 4786 generic.go:334] "Generic (PLEG): container finished" podID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" containerID="612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c" exitCode=0 Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.620230 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" event={"ID":"c8ebe4be-af09-4f22-9dee-af5f7d34bccf","Type":"ContainerDied","Data":"612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c"} Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.620263 4786 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" event={"ID":"c8ebe4be-af09-4f22-9dee-af5f7d34bccf","Type":"ContainerStarted","Data":"543eeac1d6d5cfb5a9b577a58da08d0eee8f989f998d9a38d44d67f1fdc0e74f"} Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.621285 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.621308 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.621317 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.621330 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.621340 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:18Z","lastTransitionTime":"2025-12-09T08:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.624887 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-27hfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0a865e2-8504-473d-a23f-fc682d053a9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e7465beaa6a3df0b0ae7338d3035b58b92078b8d0b9646f5b59eccb66b4a505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-27hfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.625258 
4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zbwb7" event={"ID":"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3","Type":"ContainerStarted","Data":"a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943"} Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.625337 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zbwb7" event={"ID":"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3","Type":"ContainerStarted","Data":"cd70b95454e916f0ee93c2ed91857922b6593b96a33da751a99a01349aadb154"} Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.628600 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-mq8pp" event={"ID":"9a680f02-0367-4814-9c73-aa5959bf952f","Type":"ContainerStarted","Data":"c30204fb4323c7780efec70dabc9074bcda3d7a20c81af161920f5c74075b4b6"} Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.640565 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.649865 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.661169 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.670847 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.679676 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.687413 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c502d4-7f9e-4d39-a197-fa70dc4a56d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],
\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86k5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.697925 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbwb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbwb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.707676 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d3d4420-fb67-443a-8b97-195c4cf223e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7aff8b9b062294cc403e12f60e4e694e626aa9780ae652380afc5d5ebeb25e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b
6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ce8749c52af3d1b2b156b9adce27a80b026abdb2dc22308199f1e822ec9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77efb0cd063e6d566f75078ce7a1f2cfd31508d5024a101ebcb3aa3c609aaae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kuberne
tes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac36b7f7721d7c5cdcb9e2254974dab062cca2723d4e723582061ed2696e04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.721798 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.724161 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.724203 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.724212 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.724226 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.724236 4786 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:18Z","lastTransitionTime":"2025-12-09T08:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.732922 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mq8pp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a680f02-0367-4814-9c73-aa5959bf952f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98fg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mq8pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.749081 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7sr4q\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.759524 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-27hfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0a865e2-8504-473d-a23f-fc682d053a9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e7465beaa6a3df0b0ae7338d3035b58b92078b8d0b9646f5b59eccb66b4a505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-co
py\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-27hfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.769592 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.779157 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50593d0d53a0888b66905e18d3ef9ae44e67a9d290aad760710060fb3ed14393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4ad85cbd79a4967d59f069ebadb27a1cd79229d287c2f1c9955396c71a1506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.788684 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.797848 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.806859 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.814872 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c502d4-7f9e-4d39-a197-fa70dc4a56d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2d2cb4aa779f69cd01c7af8dd377a93fa75bc95ded7acd32a891de283722a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de20e796c231cbc6d525978434351e46d932f7d9
9236a97c078b8147ce5dabd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86k5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.825742 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbwb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"ima
ge\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69
b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbwb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.827026 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.827188 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.827299 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.827395 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.827535 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:18Z","lastTransitionTime":"2025-12-09T08:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.835843 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d3d4420-fb67-443a-8b97-195c4cf223e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7aff8b9b062294cc403e12f60e4e694e626aa9780ae652380afc5d5ebeb25e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ce8749c52
af3d1b2b156b9adce27a80b026abdb2dc22308199f1e822ec9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77efb0cd063e6d566f75078ce7a1f2cfd31508d5024a101ebcb3aa3c609aaae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac36b7f7721d7c5cdcb9e2254974dab062cca2723d4e723582061ed2696e04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.837817 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.837909 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.837938 4786 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:44:18 crc kubenswrapper[4786]: E1209 08:44:18.838012 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:44:20.837982561 +0000 UTC m=+26.721603797 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:44:18 crc kubenswrapper[4786]: E1209 08:44:18.838055 4786 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 08:44:18 crc kubenswrapper[4786]: E1209 08:44:18.838153 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 08:44:20.838142035 +0000 UTC m=+26.721763371 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 08:44:18 crc kubenswrapper[4786]: E1209 08:44:18.838217 4786 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 08:44:18 crc kubenswrapper[4786]: E1209 08:44:18.838261 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 08:44:20.838252747 +0000 UTC m=+26.721874073 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.852180 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4192277cec4bacdf5b03306055ddad9476d291a086f54f61e8e84690c81bb459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.859502 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mq8pp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a680f02-0367-4814-9c73-aa5959bf952f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c30204fb4323c7780efec70dabc9074bcda3d7a20c81af161920f5c74075b4b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98fg6\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mq8pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.872733 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wai
ting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\
\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7sr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.930829 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.930873 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.930883 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.930903 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.930917 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:18Z","lastTransitionTime":"2025-12-09T08:44:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.938820 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 08:44:18 crc kubenswrapper[4786]: I1209 08:44:18.938880 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 08:44:18 crc kubenswrapper[4786]: E1209 08:44:18.939009 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 08:44:18 crc kubenswrapper[4786]: E1209 08:44:18.939024 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 08:44:18 crc kubenswrapper[4786]: E1209 08:44:18.939036 4786 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 08:44:18 crc kubenswrapper[4786]: E1209 08:44:18.939041 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 
08:44:18 crc kubenswrapper[4786]: E1209 08:44:18.939070 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 08:44:18 crc kubenswrapper[4786]: E1209 08:44:18.939081 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-09 08:44:20.93906827 +0000 UTC m=+26.822689496 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 08:44:18 crc kubenswrapper[4786]: E1209 08:44:18.939083 4786 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 08:44:18 crc kubenswrapper[4786]: E1209 08:44:18.939137 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-09 08:44:20.939121531 +0000 UTC m=+26.822742757 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.033135 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.033170 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.033179 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.033194 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.033203 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:19Z","lastTransitionTime":"2025-12-09T08:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.136169 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.136216 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.136227 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.136242 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.136253 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:19Z","lastTransitionTime":"2025-12-09T08:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.189555 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:44:19 crc kubenswrapper[4786]: E1209 08:44:19.189698 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.190052 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 08:44:19 crc kubenswrapper[4786]: E1209 08:44:19.190122 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.190170 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 08:44:19 crc kubenswrapper[4786]: E1209 08:44:19.190225 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.192711 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.193887 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.195380 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.196081 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.197304 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.197975 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.198727 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.199739 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.200493 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.201672 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.202269 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.206964 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.207930 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.208698 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.209925 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.210594 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.211988 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.216251 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.217291 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.218054 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.219206 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.220076 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.220704 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.222003 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.222513 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.223787 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.224789 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.225904 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.226696 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.227630 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.228112 4786 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.228246 4786 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.230780 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.231401 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.232174 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.237613 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.239196 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.239619 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.239660 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.239673 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 
08:44:19.239691 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.239701 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:19Z","lastTransitionTime":"2025-12-09T08:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.239902 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.241385 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.242176 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.244991 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.246319 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.247239 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.248625 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.249355 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.250532 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.251230 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.252633 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.253264 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.254364 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.254957 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.256088 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.256914 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.257485 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.342690 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.342732 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.342744 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.342763 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.342776 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:19Z","lastTransitionTime":"2025-12-09T08:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.445451 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.445507 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.445517 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.445566 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.445583 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:19Z","lastTransitionTime":"2025-12-09T08:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.547764 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.547807 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.547820 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.547837 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.547850 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:19Z","lastTransitionTime":"2025-12-09T08:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.634953 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" event={"ID":"c8ebe4be-af09-4f22-9dee-af5f7d34bccf","Type":"ContainerStarted","Data":"254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2"} Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.635024 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" event={"ID":"c8ebe4be-af09-4f22-9dee-af5f7d34bccf","Type":"ContainerStarted","Data":"e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230"} Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.635037 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" event={"ID":"c8ebe4be-af09-4f22-9dee-af5f7d34bccf","Type":"ContainerStarted","Data":"3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e"} Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.649992 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.650053 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.650064 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.650087 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.650104 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:19Z","lastTransitionTime":"2025-12-09T08:44:19Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.726352 4786 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.737166 4786 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.751244 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.753247 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.753296 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.753310 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.753329 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.753342 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:19Z","lastTransitionTime":"2025-12-09T08:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.762588 4786 scope.go:117] "RemoveContainer" containerID="ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e" Dec 09 08:44:19 crc kubenswrapper[4786]: E1209 08:44:19.762818 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.764876 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.855450 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.855489 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.855497 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.855512 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.855521 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:19Z","lastTransitionTime":"2025-12-09T08:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.957591 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.957636 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.957649 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.957666 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:19 crc kubenswrapper[4786]: I1209 08:44:19.957680 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:19Z","lastTransitionTime":"2025-12-09T08:44:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.059909 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.059949 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.059960 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.059977 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.059989 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:20Z","lastTransitionTime":"2025-12-09T08:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.163290 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.163323 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.163332 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.163394 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.163408 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:20Z","lastTransitionTime":"2025-12-09T08:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.189068 4786 csr.go:261] certificate signing request csr-pfwqr is approved, waiting to be issued Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.200370 4786 csr.go:257] certificate signing request csr-pfwqr is issued Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.266606 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.266913 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.266926 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.266944 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.266955 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:20Z","lastTransitionTime":"2025-12-09T08:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.370258 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.370291 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.370300 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.370314 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.370364 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:20Z","lastTransitionTime":"2025-12-09T08:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.472770 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.472803 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.472811 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.472825 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.472833 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:20Z","lastTransitionTime":"2025-12-09T08:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.575949 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.575976 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.575985 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.575998 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.576008 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:20Z","lastTransitionTime":"2025-12-09T08:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.650719 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" event={"ID":"c8ebe4be-af09-4f22-9dee-af5f7d34bccf","Type":"ContainerStarted","Data":"093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659"} Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.650779 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" event={"ID":"c8ebe4be-af09-4f22-9dee-af5f7d34bccf","Type":"ContainerStarted","Data":"49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30"} Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.650808 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" event={"ID":"c8ebe4be-af09-4f22-9dee-af5f7d34bccf","Type":"ContainerStarted","Data":"597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf"} Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.652283 4786 generic.go:334] "Generic (PLEG): container finished" podID="9e48cbd1-e567-45b7-902a-fb4a0daa2fd3" containerID="a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943" exitCode=0 Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.652329 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zbwb7" event={"ID":"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3","Type":"ContainerDied","Data":"a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943"} Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.657103 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"da17081a769df0b66be89013cce4c70d442cfd51b32fdb623bf94052cbcad5d6"} Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.657775 4786 scope.go:117] 
"RemoveContainer" containerID="ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e" Dec 09 08:44:20 crc kubenswrapper[4786]: E1209 08:44:20.657968 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.677917 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:20Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.679452 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.679485 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.679496 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.679512 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.679524 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:20Z","lastTransitionTime":"2025-12-09T08:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.700757 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50593d0d53a0888b66905e18d3ef9ae44e67a9d290aad760710060fb3ed14393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\
\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4ad85cbd79a4967d59f069ebadb27a1cd79229d287c2f1c9955396c71a1506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:20Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.717035 4786 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.717619 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-27hfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0a865e2-8504-473d-a23f-fc682d053a9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e7465beaa6a3df0b0ae7338d3035b58b92078b8d0b9646f5b59eccb66b4a505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-27hfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:20Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.729516 4786 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:20Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.743644 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c502d4-7f9e-4d39-a197-fa70dc4a56d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2d2cb4aa779f69cd01c7af8dd377a93fa75bc95ded7acd32a891de283722a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de20e796c231cbc6d525978434351e46d932f7d9
9236a97c078b8147ce5dabd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86k5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:20Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.759724 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbwb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbwb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:20Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.776231 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d3d4420-fb67-443a-8b97-195c4cf223e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7aff8b9b062294cc403e12f60e4e694e626aa9780ae652380afc5d5ebeb25e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ce8749c52af3d1b2b156b9adce27a80b026abdb2dc22308199f1e822ec9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77efb0cd063e6d566f75078ce7a1f2cfd31508d5024a101ebcb3aa3c609aaae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac36b7f7721d7c5cdcb9e2254974dab062cca2723d4e723582061ed2696e04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:20Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.783212 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.783266 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.783276 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.783289 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.783297 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:20Z","lastTransitionTime":"2025-12-09T08:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.791401 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:20Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.802695 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:20Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.814392 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4192277cec4bacdf5b03306055ddad9476d291a086f54f61e8e84690c81bb459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:20Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.845179 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7sr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:20Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.861451 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd790c5e-461b-4682-88a8-e915bb3558fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56160ed128040dde583c3142616d9af9180427b9e32520d6330fd166e9c4207b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57fe092832b333e8fa6389429ad3fd571c3e0162f619fdd69efcb0b84e88bac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d7b3741b79ca6fb3fc12acaa290b9d7accb765f36a88f0b8fb6e6362c6596a58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T08:44:14Z\\\",\\\"message\\\":\\\"file observer\\\\nW1209 08:44:13.973493 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 08:44:13.973680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 08:44:13.974644 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-881789052/tls.crt::/tmp/serving-cert-881789052/tls.key\\\\\\\"\\\\nI1209 08:44:14.287293 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 08:44:14.289941 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 08:44:14.289961 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 08:44:14.289980 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 08:44:14.289985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 08:44:14.295151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 08:44:14.295345 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1209 08:44:14.295166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1209 08:44:14.295383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 08:44:14.295466 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 08:44:14.295492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 08:44:14.295497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 08:44:14.295501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1209 08:44:14.297957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a164888526a3ab84fbd409540bc54403fadd586a0583e323d5ca1b10c81cbfc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:20Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.862270 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.862399 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:44:20 crc kubenswrapper[4786]: E1209 08:44:20.862510 4786 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 08:44:20 crc kubenswrapper[4786]: E1209 08:44:20.862504 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-09 08:44:24.862480136 +0000 UTC m=+30.746101532 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:44:20 crc kubenswrapper[4786]: E1209 08:44:20.862682 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 08:44:24.862633579 +0000 UTC m=+30.746254805 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.862730 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:44:20 crc kubenswrapper[4786]: E1209 08:44:20.862919 4786 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 08:44:20 crc kubenswrapper[4786]: 
E1209 08:44:20.862992 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 08:44:24.862983178 +0000 UTC m=+30.746604404 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.890105 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.890172 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.890183 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.890210 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.890222 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:20Z","lastTransitionTime":"2025-12-09T08:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.914025 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mq8pp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a680f02-0367-4814-9c73-aa5959bf952f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c30204fb4323c7780efec70dabc9074bcda3d7a20c81af161920f5c74075b4b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98fg6\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mq8pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:20Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.936090 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:20Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.964206 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.964262 4786 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 08:44:20 crc kubenswrapper[4786]: E1209 08:44:20.964446 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 08:44:20 crc kubenswrapper[4786]: E1209 08:44:20.964464 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 08:44:20 crc kubenswrapper[4786]: E1209 08:44:20.964474 4786 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 08:44:20 crc kubenswrapper[4786]: E1209 08:44:20.964501 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 08:44:20 crc kubenswrapper[4786]: E1209 08:44:20.964549 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 08:44:20 crc kubenswrapper[4786]: E1209 08:44:20.964562 4786 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 08:44:20 crc kubenswrapper[4786]: E1209 08:44:20.964528 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-09 08:44:24.964514128 +0000 UTC m=+30.848135344 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 08:44:20 crc kubenswrapper[4786]: E1209 08:44:20.964745 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-09 08:44:24.964675542 +0000 UTC m=+30.848296768 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.965922 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50593d0d53a0888b66905e18d3ef9ae44e67a9d290aad760710060fb3ed14393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPat
h\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4ad85cbd79a4967d59f069ebadb27a1cd79229d287c2f1c9955396c71a1506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:20Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.986757 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-27hfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0a865e2-8504-473d-a23f-fc682d053a9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e7465beaa6a3df0b0ae7338d3035b58b92078b8d0b9646f5b59eccb66b4a505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-27hfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:20Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.992320 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:20 crc 
kubenswrapper[4786]: I1209 08:44:20.992354 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.992365 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.992379 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:20 crc kubenswrapper[4786]: I1209 08:44:20.992388 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:20Z","lastTransitionTime":"2025-12-09T08:44:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.004474 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d3d4420-fb67-443a-8b97-195c4cf223e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7aff8b9b062294cc403e12f60e4e694e626aa9780ae652380afc5d5ebeb25e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ce8749c52af3d1b2b156b9adce27a80b026abdb2dc22308199f1e822ec9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77efb0cd063e6d566f75078ce7a1f2cfd31508d5024a101ebcb3aa3c609aaae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac36b7f7721d7c5cdcb9e2254974dab062cca2723d4e723582061ed2696e04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:21Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.030574 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:21Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.048630 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da17081a769df0b66be89013cce4c70d442cfd51b32fdb623bf94052cbcad5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-09T08:44:21Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.062931 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:21Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.080905 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c502d4-7f9e-4d39-a197-fa70dc4a56d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2d2cb4aa779f69cd01c7af8dd377a93fa75bc95ded7acd32a891de283722a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de20e796c231cbc6d525978434351e46d932f7d9
9236a97c078b8147ce5dabd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86k5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:21Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.096534 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.096569 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.096581 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:21 crc 
kubenswrapper[4786]: I1209 08:44:21.096596 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.096606 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:21Z","lastTransitionTime":"2025-12-09T08:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.102363 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbwb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/b
in\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbwb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:21Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.115140 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4192277cec4bacdf5b03306055ddad9476d291a086f54f61e8e84690c81bb459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:21Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.340084 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 08:44:21 crc kubenswrapper[4786]: E1209 08:44:21.340287 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.341072 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:44:21 crc kubenswrapper[4786]: E1209 08:44:21.341158 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.341250 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 08:44:21 crc kubenswrapper[4786]: E1209 08:44:21.341313 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.343470 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd790c5e-461b-4682-88a8-e915bb3558fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56160ed128040dde583c3142616d9af9180427b9e32520d6330fd166e9c4207b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57fe092832b333e8fa6389429ad3fd571c3e0162f619fdd69efcb0b84e88bac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d7b3741b79ca6fb3fc12acaa290b9d7accb765f36a88f0b8fb6e6362c6596a58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T08:44:14Z\\\",\\\"message\\\":\\\"file observer\\\\nW1209 08:44:13.973493 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 08:44:13.973680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 08:44:13.974644 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-881789052/tls.crt::/tmp/serving-cert-881789052/tls.key\\\\\\\"\\\\nI1209 08:44:14.287293 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 08:44:14.289941 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 08:44:14.289961 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 08:44:14.289980 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 08:44:14.289985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 08:44:14.295151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 08:44:14.295345 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1209 08:44:14.295166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1209 08:44:14.295383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 08:44:14.295466 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 08:44:14.295492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 08:44:14.295497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 08:44:14.295501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1209 08:44:14.297957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a164888526a3ab84fbd409540bc54403fadd586a0583e323d5ca1b10c81cbfc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:21Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.343981 4786 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-12-09 08:39:20 +0000 UTC, rotation deadline is 2026-10-17 09:20:29.286463778 +0000 UTC Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.344106 4786 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7488h36m7.942364067s for next certificate rotation Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.405362 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.405406 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.405418 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.405566 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.405588 4786 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:21Z","lastTransitionTime":"2025-12-09T08:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.418400 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mq8pp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a680f02-0367-4814-9c73-aa5959bf952f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c30204fb4323c7780efec70dabc9074bcda3d7a20c81af161920f5c74075b4b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98fg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mq8pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:21Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.443265 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7sr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:21Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.509141 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.509192 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.509205 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.509225 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.509239 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:21Z","lastTransitionTime":"2025-12-09T08:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.612970 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.613054 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.613065 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.613106 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.613124 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:21Z","lastTransitionTime":"2025-12-09T08:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.678753 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zbwb7" event={"ID":"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3","Type":"ContainerStarted","Data":"9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6"} Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.680324 4786 scope.go:117] "RemoveContainer" containerID="ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e" Dec 09 08:44:21 crc kubenswrapper[4786]: E1209 08:44:21.680518 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.696900 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:21Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.714489 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50593d0d53a0888b66905e18d3ef9ae44e67a9d290aad760710060fb3ed14393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4ad85cbd79a4967d59f069ebadb27a1cd79229d287c2f1c9955396c71a1506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:21Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.717434 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.717480 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.717490 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.717508 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.717519 4786 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:21Z","lastTransitionTime":"2025-12-09T08:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.731944 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-27hfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0a865e2-8504-473d-a23f-fc682d053a9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e7465beaa6a3df0b0ae7338d3035b58b92078b8d0b9646f5b59eccb66b4a505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:
44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-27hfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:21Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.749337 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da17081a769df0b66be89013cce4c70d442cfd51b32fdb623bf94052cbcad5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\
\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:21Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.763980 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:21Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.779307 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c502d4-7f9e-4d39-a197-fa70dc4a56d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2d2cb4aa779f69cd01c7af8dd377a93fa75bc95ded7acd32a891de283722a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de20e796c231cbc6d525978434351e46d932f7d9
9236a97c078b8147ce5dabd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86k5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:21Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.807113 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbwb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbwb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:21Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.822625 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.822710 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.822722 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.822743 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 
08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.822758 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:21Z","lastTransitionTime":"2025-12-09T08:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.823806 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d3d4420-fb67-443a-8b97-195c4cf223e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7aff8b9b062294cc403e12f60e4e694e626aa9780ae652380afc5d5ebeb25e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ce8749c52af3d1b2b156b9adce27a80b026abdb2dc22308199f1e822ec9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77efb0cd063e6d566f75078ce7a1f2cfd31508d5024a101ebcb3aa3c609aaae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac36b7f7721d7c5cdcb9e2254974dab062cca2723d4e723582061ed2696e04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:21Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.847655 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:21Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.861634 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4192277cec4bacdf5b03306055ddad9476d291a086f54f61e8e84690c81bb459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:21Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.878141 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mq8pp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a680f02-0367-4814-9c73-aa5959bf952f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c30204fb4323c7780efec70dabc9074bcda3d7a20c81af161920f5c74075b4b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98fg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mq8pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:21Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.901597 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7sr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:21Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.917061 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd790c5e-461b-4682-88a8-e915bb3558fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56160ed128040dde583c3142616d9af9180427b9e32520d6330fd166e9c4207b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57fe092832b333e8fa6389429ad3fd571c3e0162f619fdd69efcb0b84e88bac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b3741b79ca6fb3fc12acaa290b9d7accb765f36a88f0b8fb6e6362c6596a58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T08:44:14Z\\\",\\\"message\\\":\\\"file observer\\\\nW1209 08:44:13.973493 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 08:44:13.973680 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 08:44:13.974644 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-881789052/tls.crt::/tmp/serving-cert-881789052/tls.key\\\\\\\"\\\\nI1209 08:44:14.287293 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 08:44:14.289941 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 08:44:14.289961 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 08:44:14.289980 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 08:44:14.289985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 08:44:14.295151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 08:44:14.295345 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1209 08:44:14.295166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1209 08:44:14.295383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 08:44:14.295466 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 08:44:14.295492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 08:44:14.295497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 08:44:14.295501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1209 08:44:14.297957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a164888526a3ab84fbd409540bc54403fadd586a0583e323d5ca1b10c81cbfc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498
159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:21Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.931345 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.931417 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.931445 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.931464 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:21 crc kubenswrapper[4786]: I1209 08:44:21.931476 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:21Z","lastTransitionTime":"2025-12-09T08:44:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.036345 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.036393 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.036402 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.036434 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.036444 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:22Z","lastTransitionTime":"2025-12-09T08:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.140352 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.140414 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.140447 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.140467 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.140481 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:22Z","lastTransitionTime":"2025-12-09T08:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.243712 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.243785 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.243797 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.243815 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.243826 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:22Z","lastTransitionTime":"2025-12-09T08:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.346839 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.346886 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.346917 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.346943 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.346957 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:22Z","lastTransitionTime":"2025-12-09T08:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.449473 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.449562 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.449580 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.449616 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.449640 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:22Z","lastTransitionTime":"2025-12-09T08:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.552565 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.552611 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.552622 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.552643 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.552654 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:22Z","lastTransitionTime":"2025-12-09T08:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.656764 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.656839 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.656851 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.656871 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.656884 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:22Z","lastTransitionTime":"2025-12-09T08:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.661881 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-prw2c"] Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.662542 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-prw2c" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.667858 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.667992 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.668107 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.670450 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.684636 4786 generic.go:334] "Generic (PLEG): container finished" podID="9e48cbd1-e567-45b7-902a-fb4a0daa2fd3" containerID="9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6" exitCode=0 Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.684704 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zbwb7" event={"ID":"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3","Type":"ContainerDied","Data":"9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6"} Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.685937 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da17081a769df0b66be89013cce4c70d442cfd51b32fdb623bf94052cbcad5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-09T08:44:22Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.687847 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f37b1b5c-1cdc-4a08-9ea3-03dad00b5797-serviceca\") pod \"node-ca-prw2c\" (UID: \"f37b1b5c-1cdc-4a08-9ea3-03dad00b5797\") " pod="openshift-image-registry/node-ca-prw2c" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.687878 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt6g4\" (UniqueName: \"kubernetes.io/projected/f37b1b5c-1cdc-4a08-9ea3-03dad00b5797-kube-api-access-pt6g4\") pod \"node-ca-prw2c\" (UID: \"f37b1b5c-1cdc-4a08-9ea3-03dad00b5797\") " pod="openshift-image-registry/node-ca-prw2c" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.687919 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f37b1b5c-1cdc-4a08-9ea3-03dad00b5797-host\") pod \"node-ca-prw2c\" (UID: \"f37b1b5c-1cdc-4a08-9ea3-03dad00b5797\") " pod="openshift-image-registry/node-ca-prw2c" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.701939 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:22Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.719459 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c502d4-7f9e-4d39-a197-fa70dc4a56d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2d2cb4aa779f69cd01c7af8dd377a93fa75bc95ded7acd32a891de283722a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de20e796c231cbc6d5259784
34351e46d932f7d99236a97c078b8147ce5dabd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86k5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:22Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.738289 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbwb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbwb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:22Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.753984 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d3d4420-fb67-443a-8b97-195c4cf223e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7aff8b9b062294cc403e12f60e4e694e626aa9780ae652380afc5d5ebeb25e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ce8749c52af3d1b2b156b9adce27a80b026abdb2dc22308199f1e822ec9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77efb0cd063e6d566f75078ce7a1f2cfd31508d5024a101ebcb3aa3c609aaae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac36b7f7721d7c5cdcb9e2254974dab062cca2723d4e723582061ed2696e04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:22Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.761699 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.761746 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.761756 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.761775 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.761786 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:22Z","lastTransitionTime":"2025-12-09T08:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.766349 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:22Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.783282 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4192277cec4bacdf5b03306055ddad9476d291a086f54f61e8e84690c81bb459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:22Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.789034 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f37b1b5c-1cdc-4a08-9ea3-03dad00b5797-serviceca\") pod \"node-ca-prw2c\" (UID: \"f37b1b5c-1cdc-4a08-9ea3-03dad00b5797\") " pod="openshift-image-registry/node-ca-prw2c" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.789076 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pt6g4\" (UniqueName: \"kubernetes.io/projected/f37b1b5c-1cdc-4a08-9ea3-03dad00b5797-kube-api-access-pt6g4\") pod \"node-ca-prw2c\" (UID: \"f37b1b5c-1cdc-4a08-9ea3-03dad00b5797\") " pod="openshift-image-registry/node-ca-prw2c" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.789128 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f37b1b5c-1cdc-4a08-9ea3-03dad00b5797-host\") pod \"node-ca-prw2c\" (UID: \"f37b1b5c-1cdc-4a08-9ea3-03dad00b5797\") " pod="openshift-image-registry/node-ca-prw2c" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.789196 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f37b1b5c-1cdc-4a08-9ea3-03dad00b5797-host\") pod \"node-ca-prw2c\" (UID: \"f37b1b5c-1cdc-4a08-9ea3-03dad00b5797\") " pod="openshift-image-registry/node-ca-prw2c" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.790146 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f37b1b5c-1cdc-4a08-9ea3-03dad00b5797-serviceca\") pod \"node-ca-prw2c\" (UID: \"f37b1b5c-1cdc-4a08-9ea3-03dad00b5797\") " pod="openshift-image-registry/node-ca-prw2c" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.794778 4786 
status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-prw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f37b1b5c-1cdc-4a08-9ea3-03dad00b5797\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt6g4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-prw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:22Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.806955 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mq8pp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a680f02-0367-4814-9c73-aa5959bf952f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c30204fb4323c7780efec70dabc9074bcda3d7a20c81af161920f5c74075b4b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98fg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mq8pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:22Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.822582 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pt6g4\" (UniqueName: \"kubernetes.io/projected/f37b1b5c-1cdc-4a08-9ea3-03dad00b5797-kube-api-access-pt6g4\") pod \"node-ca-prw2c\" (UID: \"f37b1b5c-1cdc-4a08-9ea3-03dad00b5797\") " pod="openshift-image-registry/node-ca-prw2c" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.827908 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7sr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:22Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.842489 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd790c5e-461b-4682-88a8-e915bb3558fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56160ed128040dde583c3142616d9af9180427b9e32520d6330fd166e9c4207b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57fe092832b333e8fa6389429ad3fd571c3e0162f619fdd69efcb0b84e88bac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b3741b79ca6fb3fc12acaa290b9d7accb765f36a88f0b8fb6e6362c6596a58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T08:44:14Z\\\",\\\"message\\\":\\\"file observer\\\\nW1209 08:44:13.973493 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 08:44:13.973680 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 08:44:13.974644 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-881789052/tls.crt::/tmp/serving-cert-881789052/tls.key\\\\\\\"\\\\nI1209 08:44:14.287293 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 08:44:14.289941 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 08:44:14.289961 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 08:44:14.289980 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 08:44:14.289985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 08:44:14.295151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 08:44:14.295345 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1209 08:44:14.295166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1209 08:44:14.295383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 08:44:14.295466 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 08:44:14.295492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 08:44:14.295497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 08:44:14.295501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1209 08:44:14.297957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a164888526a3ab84fbd409540bc54403fadd586a0583e323d5ca1b10c81cbfc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498
159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:22Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.860140 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:22Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.864990 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.865040 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.865052 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:22 crc 
kubenswrapper[4786]: I1209 08:44:22.865074 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.865093 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:22Z","lastTransitionTime":"2025-12-09T08:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.874605 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50593d0d53a0888b66905e18d3ef9ae44e67a9d290aad760710060fb3ed14393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4ad85cbd79a4967d59f069ebadb27a1cd79229d287c2f1c9955396c71a1506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:22Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 
08:44:22.893640 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-27hfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0a865e2-8504-473d-a23f-fc682d053a9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e7465beaa6a3df0b0ae7338d3035b58b92078b8d0b9646f5b59eccb66b4a505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/n
et.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-27hfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:22Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 
08:44:22.909510 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4192277cec4bacdf5b03306055ddad9476d291a086f54f61e8e84690c81bb459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:22Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.922510 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-prw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f37b1b5c-1cdc-4a08-9ea3-03dad00b5797\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt6g4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-prw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:22Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.931676 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mq8pp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a680f02-0367-4814-9c73-aa5959bf952f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c30204fb4323c7780efec70dabc9074bcda3d7a20c81af161920f5c74075b4b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98fg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mq8pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:22Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.951631 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7sr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:22Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.968171 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.968219 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.968229 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.968258 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.968269 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:22Z","lastTransitionTime":"2025-12-09T08:44:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.969283 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd790c5e-461b-4682-88a8-e915bb3558fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56160ed128040dde583c3142616d9af9180427b9e32520d6330fd166e9c4207b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57fe092832b333e8fa6389429ad3fd571c3e0162f619fdd69efcb0b84e88bac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d7b3741b79ca6fb3fc12acaa290b9d7accb765f36a88f0b8fb6e6362c6596a58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T08:44:14Z\\\",\\\"message\\\":\\\"file observer\\\\nW1209 08:44:13.973493 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 08:44:13.973680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 08:44:13.974644 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-881789052/tls.crt::/tmp/serving-cert-881789052/tls.key\\\\\\\"\\\\nI1209 08:44:14.287293 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 08:44:14.289941 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 08:44:14.289961 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 08:44:14.289980 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 08:44:14.289985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 08:44:14.295151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 08:44:14.295345 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1209 08:44:14.295166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1209 08:44:14.295383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 08:44:14.295466 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 08:44:14.295492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 08:44:14.295497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 08:44:14.295501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1209 08:44:14.297957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a164888526a3ab84fbd409540bc54403fadd586a0583e323d5ca1b10c81cbfc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:22Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.977619 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-prw2c" Dec 09 08:44:22 crc kubenswrapper[4786]: I1209 08:44:22.996221 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:22Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:23 crc kubenswrapper[4786]: W1209 08:44:23.017535 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf37b1b5c_1cdc_4a08_9ea3_03dad00b5797.slice/crio-3746a71ad8a5553747c12fbea25ff760157a012f69138c7e530bbb87e88a74ab WatchSource:0}: Error finding container 3746a71ad8a5553747c12fbea25ff760157a012f69138c7e530bbb87e88a74ab: Status 404 returned error can't find the container with id 3746a71ad8a5553747c12fbea25ff760157a012f69138c7e530bbb87e88a74ab Dec 09 
08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.025231 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50593d0d53a0888b66905e18d3ef9ae44e67a9d290aad760710060fb3ed14393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4ad85cbd79a4967d59f069ebadb27a1cd79229d287c2f1c9955396c71a1506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:23Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.045145 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-27hfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0a865e2-8504-473d-a23f-fc682d053a9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e7465beaa6a3df0b0ae7338d3035b58b92078b8d0b9646f5b59eccb66b4a505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-27hfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:23Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.064309 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da17081a769df0b66be89013cce4c70d442cfd51b32fdb623bf94052cbcad5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-12-09T08:44:23Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.072720 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.072764 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.072775 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.072794 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.072804 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:23Z","lastTransitionTime":"2025-12-09T08:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.084531 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:23Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.116553 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c502d4-7f9e-4d39-a197-fa70dc4a56d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2d2cb4aa779f69cd01c7af8dd377a93fa75bc95ded7acd32a891de283722a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de20e796c231cbc6d525978434351e46d932f7d9
9236a97c078b8147ce5dabd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86k5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:23Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.137036 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbwb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbwb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:23Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.154754 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d3d4420-fb67-443a-8b97-195c4cf223e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7aff8b9b062294cc403e12f60e4e694e626aa9780ae652380afc5d5ebeb25e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ce8749c52af3d1b2b156b9adce27a80b026abdb2dc22308199f1e822ec9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77efb0cd063e6d566f75078ce7a1f2cfd31508d5024a101ebcb3aa3c609aaae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac36b7f7721d7c5cdcb9e2254974dab062cca2723d4e723582061ed2696e04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:23Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.174274 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:23Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.176474 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.176513 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.176526 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 
08:44:23.176548 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.176560 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:23Z","lastTransitionTime":"2025-12-09T08:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.188103 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.188167 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.188223 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 08:44:23 crc kubenswrapper[4786]: E1209 08:44:23.188337 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 08:44:23 crc kubenswrapper[4786]: E1209 08:44:23.188251 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 08:44:23 crc kubenswrapper[4786]: E1209 08:44:23.188529 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.279330 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.279382 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.279393 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.279413 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.279449 4786 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:23Z","lastTransitionTime":"2025-12-09T08:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.382978 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.383034 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.383050 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.383074 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.383089 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:23Z","lastTransitionTime":"2025-12-09T08:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.486106 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.486167 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.486182 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.486215 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.486233 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:23Z","lastTransitionTime":"2025-12-09T08:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.589062 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.589125 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.589137 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.589157 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.589172 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:23Z","lastTransitionTime":"2025-12-09T08:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.690745 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.690802 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.690813 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.690830 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.690841 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:23Z","lastTransitionTime":"2025-12-09T08:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.694350 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" event={"ID":"c8ebe4be-af09-4f22-9dee-af5f7d34bccf","Type":"ContainerStarted","Data":"9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c"} Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.696922 4786 generic.go:334] "Generic (PLEG): container finished" podID="9e48cbd1-e567-45b7-902a-fb4a0daa2fd3" containerID="eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc" exitCode=0 Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.696995 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zbwb7" event={"ID":"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3","Type":"ContainerDied","Data":"eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc"} Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.698270 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-prw2c" event={"ID":"f37b1b5c-1cdc-4a08-9ea3-03dad00b5797","Type":"ContainerStarted","Data":"a4d772471b770813d3613512f5748b5bbd05c06733053b007756c12b0bc41b06"} Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.698305 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-prw2c" event={"ID":"f37b1b5c-1cdc-4a08-9ea3-03dad00b5797","Type":"ContainerStarted","Data":"3746a71ad8a5553747c12fbea25ff760157a012f69138c7e530bbb87e88a74ab"} Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.710947 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mq8pp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a680f02-0367-4814-9c73-aa5959bf952f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c30204fb4323c7780efec70dabc9074bcda3d7a20c81af161920f5c74075b4b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98fg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mq8pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:23Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.731348 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7sr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:23Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.751133 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd790c5e-461b-4682-88a8-e915bb3558fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56160ed128040dde583c3142616d9af9180427b9e32520d6330fd166e9c4207b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57fe092832b333e8fa6389429ad3fd571c3e0162f619fdd69efcb0b84e88bac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d7b3741b79ca6fb3fc12acaa290b9d7accb765f36a88f0b8fb6e6362c6596a58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T08:44:14Z\\\",\\\"message\\\":\\\"file observer\\\\nW1209 08:44:13.973493 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 08:44:13.973680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 08:44:13.974644 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-881789052/tls.crt::/tmp/serving-cert-881789052/tls.key\\\\\\\"\\\\nI1209 08:44:14.287293 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 08:44:14.289941 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 08:44:14.289961 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 08:44:14.289980 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 08:44:14.289985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 08:44:14.295151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 08:44:14.295345 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1209 08:44:14.295166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1209 08:44:14.295383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 08:44:14.295466 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 08:44:14.295492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 08:44:14.295497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 08:44:14.295501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1209 08:44:14.297957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a164888526a3ab84fbd409540bc54403fadd586a0583e323d5ca1b10c81cbfc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:23Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.768915 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-27hfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0a865e2-8504-473d-a23f-fc682d053a9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e7465beaa6a3df0b0ae7338d3035b58b92078b8d0b9646f5b59eccb66b4a505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[
{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-27hfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:23Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.782382 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not 
be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:23Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.796566 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.796629 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.796641 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.796662 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.796672 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:23Z","lastTransitionTime":"2025-12-09T08:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.799646 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50593d0d53a0888b66905e18d3ef9ae44e67a9d290aad760710060fb3ed14393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\
\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4ad85cbd79a4967d59f069ebadb27a1cd79229d287c2f1c9955396c71a1506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:23Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.815537 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:23Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.828993 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da17081a769df0b66be89013cce4c70d442cfd51b32fdb623bf94052cbcad5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-09T08:44:23Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.841855 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:23Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.857464 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c502d4-7f9e-4d39-a197-fa70dc4a56d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2d2cb4aa779f69cd01c7af8dd377a93fa75bc95ded7acd32a891de283722a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de20e796c231cbc6d525978434351e46d932f7d9
9236a97c078b8147ce5dabd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86k5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:23Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.874008 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbwb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbwb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:23Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 
08:44:23.887700 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d3d4420-fb67-443a-8b97-195c4cf223e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7aff8b9b062294cc403e12f60e4e694e626aa9780ae652380afc5d5ebeb25e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ce8749c52af3d1b2b156b9adce27a80b026abdb2dc22308199f1e822ec9867\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77efb0cd063e6d566f75078ce7a1f2cfd31508d5024a101ebcb3aa3c609aaae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac36b7f7721d7c5cdcb9e2254974dab062cca2723d4e723582061ed2696e04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:23Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.900851 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.900887 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.900896 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.900910 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.900918 4786 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:23Z","lastTransitionTime":"2025-12-09T08:44:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.902385 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4192277cec4bacdf5b03306055ddad9476d291a086f54f61e8e84690c81bb459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:23Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.914173 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-prw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f37b1b5c-1cdc-4a08-9ea3-03dad00b5797\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt6g4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-prw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:23Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.925238 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mq8pp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a680f02-0367-4814-9c73-aa5959bf952f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c30204fb4323c7780efec70dabc9074bcda3d7a20c81af161920f5c74075b4b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98fg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mq8pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:23Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.947245 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7sr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:23Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.966876 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd790c5e-461b-4682-88a8-e915bb3558fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56160ed128040dde583c3142616d9af9180427b9e32520d6330fd166e9c4207b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57fe092832b333e8fa6389429ad3fd571c3e0162f619fdd69efcb0b84e88bac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d7b3741b79ca6fb3fc12acaa290b9d7accb765f36a88f0b8fb6e6362c6596a58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T08:44:14Z\\\",\\\"message\\\":\\\"file observer\\\\nW1209 08:44:13.973493 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 08:44:13.973680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 08:44:13.974644 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-881789052/tls.crt::/tmp/serving-cert-881789052/tls.key\\\\\\\"\\\\nI1209 08:44:14.287293 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 08:44:14.289941 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 08:44:14.289961 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 08:44:14.289980 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 08:44:14.289985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 08:44:14.295151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 08:44:14.295345 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1209 08:44:14.295166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1209 08:44:14.295383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 08:44:14.295466 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 08:44:14.295492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 08:44:14.295497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 08:44:14.295501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1209 08:44:14.297957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a164888526a3ab84fbd409540bc54403fadd586a0583e323d5ca1b10c81cbfc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:23Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.982192 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container 
could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:23Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:23 crc kubenswrapper[4786]: I1209 08:44:23.996443 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50593d0d53a0888b66905e18d3ef9ae44e67a9d290aad760710060fb3ed14393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4ad85cbd79a4967d59f069ebadb27a1cd79229d287c2f1c9955396c71a1506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:23Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.004399 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.004466 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.004482 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.004505 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.004518 4786 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:24Z","lastTransitionTime":"2025-12-09T08:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.010449 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-27hfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0a865e2-8504-473d-a23f-fc682d053a9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e7465beaa6a3df0b0ae7338d3035b58b92078b8d0b9646f5b59eccb66b4a505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:
44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-27hfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:24Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.021618 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da17081a769df0b66be89013cce4c70d442cfd51b32fdb623bf94052cbcad5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\
\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:24Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.041107 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:24Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.051943 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c502d4-7f9e-4d39-a197-fa70dc4a56d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2d2cb4aa779f69cd01c7af8dd377a93fa75bc95ded7acd32a891de283722a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de20e796c231cbc6d525978434351e46d932f7d9
9236a97c078b8147ce5dabd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86k5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:24Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.068130 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbwb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbwb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:24Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 
08:44:24.082412 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d3d4420-fb67-443a-8b97-195c4cf223e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7aff8b9b062294cc403e12f60e4e694e626aa9780ae652380afc5d5ebeb25e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ce8749c52af3d1b2b156b9adce27a80b026abdb2dc22308199f1e822ec9867\\\",\\\"image\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77efb0cd063e6d566f75078ce7a1f2cfd31508d5024a101ebcb3aa3c609aaae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac36b7f7721d7c5cdcb9e2254974dab062cca2723d4e723582061ed2696e04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:24Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.100368 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:24Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.107442 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.107508 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:24 crc 
kubenswrapper[4786]: I1209 08:44:24.107520 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.107541 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.107555 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:24Z","lastTransitionTime":"2025-12-09T08:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.118662 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4192277cec4bacdf5b03306055ddad9476d291a086f54f61e8e84690c81bb459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:24Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.132331 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-prw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f37b1b5c-1cdc-4a08-9ea3-03dad00b5797\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d772471b770813d3613512f5748b5bbd05c06733053b007756c12b0bc41b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt6g4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-prw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:24Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.210510 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.210569 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.210580 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.210605 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.210619 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:24Z","lastTransitionTime":"2025-12-09T08:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.313542 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.313582 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.313591 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.313608 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.313617 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:24Z","lastTransitionTime":"2025-12-09T08:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.416996 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.417079 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.417097 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.417120 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.417131 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:24Z","lastTransitionTime":"2025-12-09T08:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.520315 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.520378 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.520395 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.520417 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.520459 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:24Z","lastTransitionTime":"2025-12-09T08:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.622410 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.622489 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.622500 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.622517 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.622528 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:24Z","lastTransitionTime":"2025-12-09T08:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.708750 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zbwb7" event={"ID":"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3","Type":"ContainerStarted","Data":"0fbb1ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959"} Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.724812 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.724856 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.724866 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.724886 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.724901 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:24Z","lastTransitionTime":"2025-12-09T08:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.725676 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mq8pp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a680f02-0367-4814-9c73-aa5959bf952f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c30204fb4323c7780efec70dabc9074bcda3d7a20c81af161920f5c74075b4b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98fg6\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mq8pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:24Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.756643 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7sr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:24Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.779983 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd790c5e-461b-4682-88a8-e915bb3558fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56160ed128040dde583c3142616d9af9180427b9e32520d6330fd166e9c4207b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57fe092832b333e8fa6389429ad3fd571c3e0162f619fdd69efcb0b84e88bac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b3741b79ca6fb3fc12acaa290b9d7accb765f36a88f0b8fb6e6362c6596a58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T08:44:14Z\\\",\\\"message\\\":\\\"file observer\\\\nW1209 08:44:13.973493 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 08:44:13.973680 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 08:44:13.974644 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-881789052/tls.crt::/tmp/serving-cert-881789052/tls.key\\\\\\\"\\\\nI1209 08:44:14.287293 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 08:44:14.289941 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 08:44:14.289961 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 08:44:14.289980 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 08:44:14.289985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 08:44:14.295151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 08:44:14.295345 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1209 08:44:14.295166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1209 08:44:14.295383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 08:44:14.295466 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 08:44:14.295492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 08:44:14.295497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 08:44:14.295501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1209 08:44:14.297957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a164888526a3ab84fbd409540bc54403fadd586a0583e323d5ca1b10c81cbfc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498
159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:24Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.801691 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50593d0d53a0888b66905e18d3ef9ae44e67a9d290aad760710060fb3ed14393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4ad85cbd79a4967d59f069ebadb27a1cd79229d287c2f1c9955396c71a1506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:24Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.821486 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-27hfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0a865e2-8504-473d-a23f-fc682d053a9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e7465beaa6a3df0b0ae7338d3035b58b92078b8d0b9646f5b59eccb66b4a505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-27hfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:24Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.827267 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:24 crc 
kubenswrapper[4786]: I1209 08:44:24.827369 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.827397 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.827473 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.827502 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:24Z","lastTransitionTime":"2025-12-09T08:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.840513 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:24Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.852822 4786 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.921943 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.922112 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.922149 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:44:24 crc kubenswrapper[4786]: E1209 08:44:24.922292 4786 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 08:44:24 crc kubenswrapper[4786]: E1209 08:44:24.922357 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 08:44:32.922340091 +0000 UTC m=+38.805961317 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 08:44:24 crc kubenswrapper[4786]: E1209 08:44:24.922496 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:44:32.922486555 +0000 UTC m=+38.806107781 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:44:24 crc kubenswrapper[4786]: E1209 08:44:24.922544 4786 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 08:44:24 crc kubenswrapper[4786]: E1209 08:44:24.922572 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 08:44:32.922565257 +0000 UTC m=+38.806186483 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.930517 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.930569 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.930584 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.930605 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:24 crc kubenswrapper[4786]: I1209 08:44:24.930619 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:24Z","lastTransitionTime":"2025-12-09T08:44:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.022838 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.022974 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 08:44:25 crc kubenswrapper[4786]: E1209 08:44:25.023053 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 08:44:25 crc kubenswrapper[4786]: E1209 08:44:25.023081 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 08:44:25 crc kubenswrapper[4786]: E1209 08:44:25.023093 4786 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 08:44:25 crc kubenswrapper[4786]: E1209 08:44:25.023157 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr 
podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-09 08:44:33.023139724 +0000 UTC m=+38.906760950 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 08:44:25 crc kubenswrapper[4786]: E1209 08:44:25.023161 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 08:44:25 crc kubenswrapper[4786]: E1209 08:44:25.023205 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 08:44:25 crc kubenswrapper[4786]: E1209 08:44:25.023219 4786 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 08:44:25 crc kubenswrapper[4786]: E1209 08:44:25.023330 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-09 08:44:33.023312058 +0000 UTC m=+38.906933304 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.033311 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.033357 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.033370 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.033389 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.033400 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:25Z","lastTransitionTime":"2025-12-09T08:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.135487 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.135545 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.135603 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.135631 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.135643 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:25Z","lastTransitionTime":"2025-12-09T08:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.187635 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.187664 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 08:44:25 crc kubenswrapper[4786]: E1209 08:44:25.187887 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.187914 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 08:44:25 crc kubenswrapper[4786]: E1209 08:44:25.188010 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 08:44:25 crc kubenswrapper[4786]: E1209 08:44:25.188094 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.238793 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.238861 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.238871 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.238890 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.238903 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:25Z","lastTransitionTime":"2025-12-09T08:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.342887 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.342959 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.342971 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.342990 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.343001 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:25Z","lastTransitionTime":"2025-12-09T08:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.445539 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.445593 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.445614 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.445640 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.445659 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:25Z","lastTransitionTime":"2025-12-09T08:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.548942 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.549037 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.549066 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.549132 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.549160 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:25Z","lastTransitionTime":"2025-12-09T08:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.652701 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.652782 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.652805 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.652832 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.652851 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:25Z","lastTransitionTime":"2025-12-09T08:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.755227 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.755289 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.755301 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.755319 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.755330 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:25Z","lastTransitionTime":"2025-12-09T08:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.860106 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.860152 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.860163 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.860181 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.860192 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:25Z","lastTransitionTime":"2025-12-09T08:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.869181 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:25Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.885692 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da17081a769df0b66be89013cce4c70d442cfd51b32fdb623bf94052cbcad5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-09T08:44:25Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.901407 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:25Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.915896 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c502d4-7f9e-4d39-a197-fa70dc4a56d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2d2cb4aa779f69cd01c7af8dd377a93fa75bc95ded7acd32a891de283722a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de20e796c231cbc6d525978434351e46d932f7d9
9236a97c078b8147ce5dabd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86k5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:25Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.931551 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbwb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbb1ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbwb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:25Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.946714 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d3d4420-fb67-443a-8b97-195c4cf223e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7aff8b9b062294cc403e12f60e4e694e626aa9780ae652380afc5d5ebeb25e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ce8749c52af3d1b2b156b9adce27a80b026abdb2dc22308199f1e822ec9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77efb0cd063e6d566f75078ce7a1f2cfd31508d5024a101ebcb3aa3c609aaae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac36b7f7721d7c5cdcb9e2254974dab062cca2723d4e723582061ed2696e04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:25Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.962346 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-prw2c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f37b1b5c-1cdc-4a08-9ea3-03dad00b5797\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d772471b770813d3613512f5748b5bbd05c06733053b007756c12b0bc41b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt6g4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-prw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:25Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.964076 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.964142 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.964156 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.964193 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.964207 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:25Z","lastTransitionTime":"2025-12-09T08:44:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.978569 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4192277cec4bacdf5b03306055ddad9476d291a086f54f61e8e84690c81bb459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:25Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:25 crc kubenswrapper[4786]: I1209 08:44:25.993803 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4192277cec4bacdf5b03306055ddad9476d291a086f54f61e8e84690c81bb459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:25Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.002965 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-prw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f37b1b5c-1cdc-4a08-9ea3-03dad00b5797\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d772471b770813d3613512f5748b5bbd05c06733053b007756c12b0bc41b06\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt6g4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-prw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:26Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.028724 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7sr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:26Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.057954 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd790c5e-461b-4682-88a8-e915bb3558fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56160ed128040dde583c3142616d9af9180427b9e32520d6330fd166e9c4207b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57fe092832b333e8fa6389429ad3fd571c3e0162f619fdd69efcb0b84e88bac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d7b3741b79ca6fb3fc12acaa290b9d7accb765f36a88f0b8fb6e6362c6596a58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T08:44:14Z\\\",\\\"message\\\":\\\"file observer\\\\nW1209 08:44:13.973493 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 08:44:13.973680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 08:44:13.974644 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-881789052/tls.crt::/tmp/serving-cert-881789052/tls.key\\\\\\\"\\\\nI1209 08:44:14.287293 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 08:44:14.289941 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 08:44:14.289961 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 08:44:14.289980 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 08:44:14.289985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 08:44:14.295151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 08:44:14.295345 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1209 08:44:14.295166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1209 08:44:14.295383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 08:44:14.295466 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 08:44:14.295492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 08:44:14.295497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 08:44:14.295501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1209 08:44:14.297957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a164888526a3ab84fbd409540bc54403fadd586a0583e323d5ca1b10c81cbfc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:26Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.071659 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mq8pp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a680f02-0367-4814-9c73-aa5959bf952f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c30204fb4323c7780efec70dabc9074bcda3d7a20c81af161920f5c74075b4b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98fg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mq8pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:26Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.072917 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.072953 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.072962 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.072982 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.072994 4786 setters.go:603] 
"Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:26Z","lastTransitionTime":"2025-12-09T08:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.084934 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:26Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.098027 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50593d0d53a0888b66905e18d3ef9ae44e67a9d290aad760710060fb3ed14393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4ad85cbd79a4967d59f069ebadb27a1cd79229d287c2f1c9955396c71a1506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:26Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.124825 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-27hfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0a865e2-8504-473d-a23f-fc682d053a9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e7465beaa6a3df0b0ae7338d3035b58b92078b8d0b9646f5b59eccb66b4a505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-27hfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:26Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.140703 4786 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:26Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.154707 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c502d4-7f9e-4d39-a197-fa70dc4a56d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2d2cb4aa779f69cd01c7af8dd377a93fa75bc95ded7acd32a891de283722a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de20e796c231cbc6d525978434351e46d932f7d9
9236a97c078b8147ce5dabd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86k5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:26Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.176767 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.176808 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.176828 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:26 crc 
kubenswrapper[4786]: I1209 08:44:26.176846 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.176857 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:26Z","lastTransitionTime":"2025-12-09T08:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.178107 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbwb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a
0dbdcc3dd7267d007bcd12279fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbb1ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mount
Path\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbwb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:26Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.193820 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d3d4420-fb67-443a-8b97-195c4cf223e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7aff8b9b062294cc403e12f60e4e694e626aa9780ae652380afc5d5ebeb25e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-contr
oller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ce8749c52af3d1b2b156b9adce27a80b026abdb2dc22308199f1e822ec9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77efb0cd063e6d566f75078ce7a1f2cfd31508d5024a101ebcb3aa3c609aaae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-p
od-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac36b7f7721d7c5cdcb9e2254974dab062cca2723d4e723582061ed2696e04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:26Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.214813 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:26Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.233633 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da17081a769df0b66be89013cce4c70d442cfd51b32fdb623bf94052cbcad5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-09T08:44:26Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.280355 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.280414 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.280441 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.280466 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.280481 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:26Z","lastTransitionTime":"2025-12-09T08:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.383808 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.384166 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.384178 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.384202 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.384218 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:26Z","lastTransitionTime":"2025-12-09T08:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.486378 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.486436 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.486450 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.486467 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.486476 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:26Z","lastTransitionTime":"2025-12-09T08:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.589396 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.589455 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.589467 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.589482 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.589492 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:26Z","lastTransitionTime":"2025-12-09T08:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.692485 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.692515 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.692524 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.692538 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.692546 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:26Z","lastTransitionTime":"2025-12-09T08:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.719862 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" event={"ID":"c8ebe4be-af09-4f22-9dee-af5f7d34bccf","Type":"ContainerStarted","Data":"8671a7441f6025ed84963005731973ef4ff4215eadfddd1d856aa80a5a059dd6"} Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.721346 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.721454 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.728233 4786 generic.go:334] "Generic (PLEG): container finished" podID="9e48cbd1-e567-45b7-902a-fb4a0daa2fd3" containerID="0fbb1ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959" exitCode=0 Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.728275 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zbwb7" event={"ID":"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3","Type":"ContainerDied","Data":"0fbb1ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959"} Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.736893 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:26Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.754826 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50593d0d53a0888b66905e18d3ef9ae44e67a9d290aad760710060fb3ed14393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4ad85cbd79a4967d59f069ebadb27a1cd79229d287c2f1c9955396c71a1506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:26Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.767886 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-27hfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0a865e2-8504-473d-a23f-fc682d053a9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e7465beaa6a3df0b0ae7338d3035b58b92078b8d0b9646f5b59eccb66b4a505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-27hfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:26Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.768865 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 
08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.772690 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.788542 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c502d4-7f9e-4d39-a197-fa70dc4a56d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2d2cb4aa779f69cd01c7af8dd377a93fa75bc95ded7acd32a891de283722a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de20e796c231cbc6d525978434351e46d932f7d99236a97c078b8147ce5dabd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86k5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:26Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.796293 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.796356 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.796375 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.796399 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.796418 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:26Z","lastTransitionTime":"2025-12-09T08:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.804939 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbwb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbb1ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbwb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:26Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.818577 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d3d4420-fb67-443a-8b97-195c4cf223e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7aff8b9b062294cc403e12f60e4e694e626aa9780ae652380afc5d5ebeb25e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ce8749c52af3d1b2b156b9adce27a80b026abdb2dc22308199f1e822ec9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77efb0cd063e6d566f75078ce7a1f2cfd31508d5024a101ebcb3aa3c609aaae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac36b7f7721d7c5cdcb9e2254974dab062cca2723d4e723582061ed2696e04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:26Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.831166 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:26Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.844751 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da17081a769df0b66be89013cce4c70d442cfd51b32fdb623bf94052cbcad5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-09T08:44:26Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.858699 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:26Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.871758 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4192277cec4bacdf5b03306055ddad9476d291a086f54f61e8e84690c81bb459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:26Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.883493 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-prw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f37b1b5c-1cdc-4a08-9ea3-03dad00b5797\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d772471b770813d3613512f5748b5bbd05c06733053b007756c12b0bc41b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt6g4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-prw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:26Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.898081 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd790c5e-461b-4682-88a8-e915bb3558fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56160ed128040dde583c3142616d9af9180427b9e32520d6330fd166e9c4207b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57fe092832b333e8fa6389429ad3fd571c3e0162f619fdd69efcb0b84e88bac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d7b3741b79ca6fb3fc12acaa290b9d7accb765f36a88f0b8fb6e6362c6596a58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T08:44:14Z\\\",\\\"message\\\":\\\"file observer\\\\nW1209 08:44:13.973493 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 08:44:13.973680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 08:44:13.974644 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-881789052/tls.crt::/tmp/serving-cert-881789052/tls.key\\\\\\\"\\\\nI1209 08:44:14.287293 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 08:44:14.289941 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 08:44:14.289961 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 08:44:14.289980 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 08:44:14.289985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 08:44:14.295151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 08:44:14.295345 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1209 08:44:14.295166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1209 08:44:14.295383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 08:44:14.295466 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 08:44:14.295492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 08:44:14.295497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 08:44:14.295501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1209 08:44:14.297957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a164888526a3ab84fbd409540bc54403fadd586a0583e323d5ca1b10c81cbfc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:26Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.899773 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.899828 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.899838 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.899856 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.899868 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:26Z","lastTransitionTime":"2025-12-09T08:44:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.910173 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mq8pp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a680f02-0367-4814-9c73-aa5959bf952f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c30204fb4323c7780efec70dabc9074bcda3d7a20c81af161920f5c74075b4b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98fg6\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mq8pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:26Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.928151 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8671a7441f6025ed84963005731973ef4ff4215eadfddd1d856aa80a5a059dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7sr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:26Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.946518 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8671a7441f6025ed84963005731973ef4ff4215eadfddd1d856aa80a5a059dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7sr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:26Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.971997 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd790c5e-461b-4682-88a8-e915bb3558fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56160ed128040dde583c3142616d9af9180427b9e32520d6330fd166e9c4207b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57fe092832b333e8fa6389429ad3fd571c3e0162f619fdd69efcb0b84e88bac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d7b3741b79ca6fb3fc12acaa290b9d7accb765f36a88f0b8fb6e6362c6596a58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T08:44:14Z\\\",\\\"message\\\":\\\"file observer\\\\nW1209 08:44:13.973493 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 08:44:13.973680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 08:44:13.974644 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-881789052/tls.crt::/tmp/serving-cert-881789052/tls.key\\\\\\\"\\\\nI1209 08:44:14.287293 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 08:44:14.289941 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 08:44:14.289961 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 08:44:14.289980 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 08:44:14.289985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 08:44:14.295151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 08:44:14.295345 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1209 08:44:14.295166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1209 08:44:14.295383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 08:44:14.295466 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 08:44:14.295492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 08:44:14.295497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 08:44:14.295501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1209 08:44:14.297957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a164888526a3ab84fbd409540bc54403fadd586a0583e323d5ca1b10c81cbfc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:26Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.983469 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mq8pp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a680f02-0367-4814-9c73-aa5959bf952f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c30204fb4323c7780efec70dabc9074bcda3d7a20c81af161920f5c74075b4b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98fg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mq8pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:26Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:26 crc kubenswrapper[4786]: I1209 08:44:26.995671 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:26Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.003387 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.003490 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.003505 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.003521 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.003532 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:27Z","lastTransitionTime":"2025-12-09T08:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.008594 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50593d0d53a0888b66905e18d3ef9ae44e67a9d290aad760710060fb3ed14393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4ad85cbd79a4967d59f069ebadb27a1cd79229d287c2f1c9955396c71a1506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:27Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.021471 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-27hfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0a865e2-8504-473d-a23f-fc682d053a9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e7465beaa6a3df0b0ae7338d3035b58b92078b8d0b9646f5b59eccb66b4a505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-27hfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:27Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.034877 4786 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:27Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.047270 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c502d4-7f9e-4d39-a197-fa70dc4a56d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2d2cb4aa779f69cd01c7af8dd377a93fa75bc95ded7acd32a891de283722a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de20e796c231cbc6d525978434351e46d932f7d9
9236a97c078b8147ce5dabd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86k5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:27Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.062619 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbwb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbb1ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbb1ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-zbwb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:27Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.075565 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d3d4420-fb67-443a-8b97-195c4cf223e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7aff8b9b062294cc403e12f60e4e694e626aa9780ae652380afc5d5ebeb25e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ce8749c52af3d1b2b156b9adce27a80b026abdb2dc22308199f1e822ec9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77efb0cd063e6d566f75078ce7a1f2cfd31508d5024a101ebcb3aa3c609aaae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"
cri-o://4ac36b7f7721d7c5cdcb9e2254974dab062cca2723d4e723582061ed2696e04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:27Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.087909 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:27Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.101902 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da17081a769df0b66be89013cce4c70d442cfd51b32fdb623bf94052cbcad5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-09T08:44:27Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.105837 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.105889 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.105906 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.105929 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.106007 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:27Z","lastTransitionTime":"2025-12-09T08:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.120720 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4192277cec4bacdf5b03306055ddad9476d291a086f54f61e8e84690c81bb459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:27Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.133836 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-prw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f37b1b5c-1cdc-4a08-9ea3-03dad00b5797\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d772471b770813d3613512f5748b5bbd05c06733053b007756c12b0bc41b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt6g4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-prw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:27Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.187804 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.187822 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 08:44:27 crc kubenswrapper[4786]: E1209 08:44:27.187972 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 08:44:27 crc kubenswrapper[4786]: E1209 08:44:27.188015 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.187829 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 08:44:27 crc kubenswrapper[4786]: E1209 08:44:27.188083 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.209234 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.209285 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.209303 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.209326 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.209343 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:27Z","lastTransitionTime":"2025-12-09T08:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.313801 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.313871 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.313895 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.313928 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.313951 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:27Z","lastTransitionTime":"2025-12-09T08:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.416349 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.416378 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.416386 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.416399 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.416409 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:27Z","lastTransitionTime":"2025-12-09T08:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.519460 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.519506 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.519518 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.519534 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.519544 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:27Z","lastTransitionTime":"2025-12-09T08:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.622167 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.622218 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.622230 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.622249 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.622260 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:27Z","lastTransitionTime":"2025-12-09T08:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.724561 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.724630 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.724645 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.724663 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.724675 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:27Z","lastTransitionTime":"2025-12-09T08:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.750275 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zbwb7" event={"ID":"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3","Type":"ContainerStarted","Data":"2ec37a618991f1e7e06c04c95db8c40ac18b451bcb08ba568bad540481a740c6"} Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.750398 4786 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.763555 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:27Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.778451 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50593d0d53a0888b66905e18d3ef9ae44e67a9d290aad760710060fb3ed14393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4ad85cbd79a4967d59f069ebadb27a1cd79229d287c2f1c9955396c71a1506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:27Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.793883 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-27hfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0a865e2-8504-473d-a23f-fc682d053a9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e7465beaa6a3df0b0ae7338d3035b58b92078b8d0b9646f5b59eccb66b4a505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-27hfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:27Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.805890 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d3d4420-fb67-443a-8b97-195c4cf223e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7aff8b9b062294cc403e12f60e4e694e626aa9780ae652380afc5d5ebeb25e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ce8749c52af3d1b2b156b9adce27a80b026abdb2dc22308199f1e822ec9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77efb0cd063e6d566f75078ce7a1f2cfd31508d5024a101ebcb3aa3c609aaae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac36b7f7721d7c5cdcb9e2254974dab062cca2723d4e723582061ed2696e04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:27Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.817409 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:27Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.826663 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.826701 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.826713 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 
08:44:27.826729 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.826740 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:27Z","lastTransitionTime":"2025-12-09T08:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.830973 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da17081a769df0b66be89013cce4c70d442cfd51b32fdb623bf94052cbcad5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:27Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.842318 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:27Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.854134 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c502d4-7f9e-4d39-a197-fa70dc4a56d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2d2cb4aa779f69cd01c7af8dd377a93fa75bc95ded7acd32a891de283722a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de20e796c231cbc6d525978434351e46d932f7d9
9236a97c078b8147ce5dabd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86k5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:27Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.870375 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbwb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbb1ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbb1ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ec37a618991f1e7e06c04c95db8c40ac18b451bcb08ba568bad540481a740c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2b
ms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbwb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:27Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.887792 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4192277cec4bacdf5b03306055ddad9476d291a086f54f61e8e84690c81bb459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"star
ted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:27Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.900139 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-prw2c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f37b1b5c-1cdc-4a08-9ea3-03dad00b5797\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d772471b770813d3613512f5748b5bbd05c06733053b007756c12b0bc41b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt6g4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-prw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:27Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.913814 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd790c5e-461b-4682-88a8-e915bb3558fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56160ed128040dde583c3142616d9af9180427b9e32520d6330fd166e9c4207b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57fe092832b333e8fa6389429ad3fd571c3e0162f619fdd69efcb0b84e88bac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d7b3741b79ca6fb3fc12acaa290b9d7accb765f36a88f0b8fb6e6362c6596a58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T08:44:14Z\\\",\\\"message\\\":\\\"file observer\\\\nW1209 08:44:13.973493 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 08:44:13.973680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 08:44:13.974644 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-881789052/tls.crt::/tmp/serving-cert-881789052/tls.key\\\\\\\"\\\\nI1209 08:44:14.287293 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 08:44:14.289941 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 08:44:14.289961 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 08:44:14.289980 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 08:44:14.289985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 08:44:14.295151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 08:44:14.295345 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1209 08:44:14.295166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1209 08:44:14.295383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 08:44:14.295466 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 08:44:14.295492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 08:44:14.295497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 08:44:14.295501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1209 08:44:14.297957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a164888526a3ab84fbd409540bc54403fadd586a0583e323d5ca1b10c81cbfc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:27Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.928953 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.928998 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.929006 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.929020 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.929072 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:27Z","lastTransitionTime":"2025-12-09T08:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.958811 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mq8pp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a680f02-0367-4814-9c73-aa5959bf952f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c30204fb4323c7780efec70dabc9074bcda3d7a20c81af161920f5c74075b4b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98fg6\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mq8pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:27Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:27 crc kubenswrapper[4786]: I1209 08:44:27.980073 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8671a7441f6025ed84963005731973ef4ff4215eadfddd1d856aa80a5a059dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7sr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:27Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.031649 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.031693 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.031705 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.031720 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.031732 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:28Z","lastTransitionTime":"2025-12-09T08:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.134003 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.134040 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.134051 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.134065 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.134074 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:28Z","lastTransitionTime":"2025-12-09T08:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.236593 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.236642 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.236655 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.236673 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.236684 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:28Z","lastTransitionTime":"2025-12-09T08:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.339467 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.339535 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.339547 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.339562 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.339573 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:28Z","lastTransitionTime":"2025-12-09T08:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.427520 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.427631 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.427664 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.427716 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.427740 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:28Z","lastTransitionTime":"2025-12-09T08:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:28 crc kubenswrapper[4786]: E1209 08:44:28.450683 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"20132b6b-eea8-47f7-95fa-f658d05fe362\\\",\\\"systemUUID\\\":\\\"d02a19a2-cd20-41fc-84e9-65362968df1f\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:28Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.456352 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.456443 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.456463 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.456488 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.456506 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:28Z","lastTransitionTime":"2025-12-09T08:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:28 crc kubenswrapper[4786]: E1209 08:44:28.471383 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"20132b6b-eea8-47f7-95fa-f658d05fe362\\\",\\\"systemUUID\\\":\\\"d02a19a2-cd20-41fc-84e9-65362968df1f\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:28Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.477330 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.477380 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.477389 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.477404 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.477414 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:28Z","lastTransitionTime":"2025-12-09T08:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:28 crc kubenswrapper[4786]: E1209 08:44:28.493360 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"20132b6b-eea8-47f7-95fa-f658d05fe362\\\",\\\"systemUUID\\\":\\\"d02a19a2-cd20-41fc-84e9-65362968df1f\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:28Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.499634 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.499710 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.499734 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.499762 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.499780 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:28Z","lastTransitionTime":"2025-12-09T08:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:28 crc kubenswrapper[4786]: E1209 08:44:28.515171 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"20132b6b-eea8-47f7-95fa-f658d05fe362\\\",\\\"systemUUID\\\":\\\"d02a19a2-cd20-41fc-84e9-65362968df1f\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:28Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.520610 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.520668 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.520681 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.520704 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.520717 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:28Z","lastTransitionTime":"2025-12-09T08:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:28 crc kubenswrapper[4786]: E1209 08:44:28.536537 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"20132b6b-eea8-47f7-95fa-f658d05fe362\\\",\\\"systemUUID\\\":\\\"d02a19a2-cd20-41fc-84e9-65362968df1f\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:28Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:28 crc kubenswrapper[4786]: E1209 08:44:28.536701 4786 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.539208 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.539294 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.539316 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.539346 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.539365 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:28Z","lastTransitionTime":"2025-12-09T08:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.641467 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.641502 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.641514 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.641532 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.641542 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:28Z","lastTransitionTime":"2025-12-09T08:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.744065 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.744106 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.744117 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.744149 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.744164 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:28Z","lastTransitionTime":"2025-12-09T08:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.754143 4786 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.847516 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.847584 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.847599 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.847654 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.847680 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:28Z","lastTransitionTime":"2025-12-09T08:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.951071 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.951129 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.951142 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.951166 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:28 crc kubenswrapper[4786]: I1209 08:44:28.951184 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:28Z","lastTransitionTime":"2025-12-09T08:44:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.054866 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.054925 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.054937 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.054958 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.054973 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:29Z","lastTransitionTime":"2025-12-09T08:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.159499 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.159598 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.159617 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.159655 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.159685 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:29Z","lastTransitionTime":"2025-12-09T08:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.187918 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.187928 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.187931 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 08:44:29 crc kubenswrapper[4786]: E1209 08:44:29.188067 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 08:44:29 crc kubenswrapper[4786]: E1209 08:44:29.188264 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 08:44:29 crc kubenswrapper[4786]: E1209 08:44:29.188615 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.262728 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.262832 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.262845 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.262864 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.262881 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:29Z","lastTransitionTime":"2025-12-09T08:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.367437 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.367487 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.367499 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.367522 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.367535 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:29Z","lastTransitionTime":"2025-12-09T08:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.470965 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.471011 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.471023 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.471041 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.471053 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:29Z","lastTransitionTime":"2025-12-09T08:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.573670 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.573739 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.573753 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.573777 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.573792 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:29Z","lastTransitionTime":"2025-12-09T08:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.676222 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.676295 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.676314 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.676341 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.676360 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:29Z","lastTransitionTime":"2025-12-09T08:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.764322 4786 generic.go:334] "Generic (PLEG): container finished" podID="9e48cbd1-e567-45b7-902a-fb4a0daa2fd3" containerID="2ec37a618991f1e7e06c04c95db8c40ac18b451bcb08ba568bad540481a740c6" exitCode=0 Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.764392 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zbwb7" event={"ID":"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3","Type":"ContainerDied","Data":"2ec37a618991f1e7e06c04c95db8c40ac18b451bcb08ba568bad540481a740c6"} Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.775766 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwjmr"] Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.776405 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwjmr" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.782592 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.782631 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.782643 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.782662 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.782661 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.782677 4786 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:29Z","lastTransitionTime":"2025-12-09T08:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.782749 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.828757 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mq8pp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a680f02-0367-4814-9c73-aa5959bf952f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c30204fb4323c7780efec70dabc9074bcda3d7a20c81af161920f5c74075b4b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"
,\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98fg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mq8pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:29Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.871044 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8671a7441f6025ed84963005731973ef4ff4215eadfddd1d856aa80a5a059dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7sr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:29Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.885806 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.885852 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.885871 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.885890 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.885908 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:29Z","lastTransitionTime":"2025-12-09T08:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.891926 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd790c5e-461b-4682-88a8-e915bb3558fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56160ed128040dde583c3142616d9af9180427b9e32520d6330fd166e9c4207b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57fe092832b333e8fa6389429ad3fd571c3e0162f619fdd69efcb0b84e88bac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d7b3741b79ca6fb3fc12acaa290b9d7accb765f36a88f0b8fb6e6362c6596a58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T08:44:14Z\\\",\\\"message\\\":\\\"file observer\\\\nW1209 08:44:13.973493 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 08:44:13.973680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 08:44:13.974644 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-881789052/tls.crt::/tmp/serving-cert-881789052/tls.key\\\\\\\"\\\\nI1209 08:44:14.287293 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 08:44:14.289941 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 08:44:14.289961 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 08:44:14.289980 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 08:44:14.289985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 08:44:14.295151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 08:44:14.295345 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1209 08:44:14.295166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1209 08:44:14.295383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 08:44:14.295466 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 08:44:14.295492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 08:44:14.295497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 08:44:14.295501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1209 08:44:14.297957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a164888526a3ab84fbd409540bc54403fadd586a0583e323d5ca1b10c81cbfc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:29Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.898193 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/373c7fdc-f660-4323-a503-3e7a0dedb865-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-jwjmr\" (UID: \"373c7fdc-f660-4323-a503-3e7a0dedb865\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwjmr" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.898244 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnjzh\" (UniqueName: \"kubernetes.io/projected/373c7fdc-f660-4323-a503-3e7a0dedb865-kube-api-access-pnjzh\") pod \"ovnkube-control-plane-749d76644c-jwjmr\" (UID: \"373c7fdc-f660-4323-a503-3e7a0dedb865\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwjmr" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.898286 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/373c7fdc-f660-4323-a503-3e7a0dedb865-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-jwjmr\" (UID: \"373c7fdc-f660-4323-a503-3e7a0dedb865\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwjmr" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 
08:44:29.898306 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/373c7fdc-f660-4323-a503-3e7a0dedb865-env-overrides\") pod \"ovnkube-control-plane-749d76644c-jwjmr\" (UID: \"373c7fdc-f660-4323-a503-3e7a0dedb865\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwjmr" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.911265 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:29Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.928487 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50593d0d53a0888b66905e18d3ef9ae44e67a9d290aad760710060fb3ed14393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4ad85cbd79a4967d59f069ebadb27a1cd79229d287c2f1c9955396c71a1506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:29Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.943603 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-27hfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0a865e2-8504-473d-a23f-fc682d053a9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e7465beaa6a3df0b0ae7338d3035b58b92078b8d0b9646f5b59eccb66b4a505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-27hfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:29Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.959356 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da17081a769df0b66be89013cce4c70d442cfd51b32fdb623bf94052cbcad5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-12-09T08:44:29Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.972804 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:29Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.985777 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c502d4-7f9e-4d39-a197-fa70dc4a56d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2d2cb4aa779f69cd01c7af8dd377a93fa75bc95ded7acd32a891de283722a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de20e796c231cbc6d525978434351e46d932f7d9
9236a97c078b8147ce5dabd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86k5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:29Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.989637 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.989667 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.989677 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:29 crc 
kubenswrapper[4786]: I1209 08:44:29.989693 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.989740 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:29Z","lastTransitionTime":"2025-12-09T08:44:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.999209 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/373c7fdc-f660-4323-a503-3e7a0dedb865-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-jwjmr\" (UID: \"373c7fdc-f660-4323-a503-3e7a0dedb865\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwjmr" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.999259 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnjzh\" (UniqueName: \"kubernetes.io/projected/373c7fdc-f660-4323-a503-3e7a0dedb865-kube-api-access-pnjzh\") pod \"ovnkube-control-plane-749d76644c-jwjmr\" (UID: \"373c7fdc-f660-4323-a503-3e7a0dedb865\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwjmr" Dec 09 08:44:29 crc kubenswrapper[4786]: I1209 08:44:29.999297 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/373c7fdc-f660-4323-a503-3e7a0dedb865-env-overrides\") pod \"ovnkube-control-plane-749d76644c-jwjmr\" (UID: \"373c7fdc-f660-4323-a503-3e7a0dedb865\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwjmr" Dec 09 08:44:29 crc 
kubenswrapper[4786]: I1209 08:44:29.999321 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/373c7fdc-f660-4323-a503-3e7a0dedb865-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-jwjmr\" (UID: \"373c7fdc-f660-4323-a503-3e7a0dedb865\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwjmr" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.000082 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/373c7fdc-f660-4323-a503-3e7a0dedb865-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-jwjmr\" (UID: \"373c7fdc-f660-4323-a503-3e7a0dedb865\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwjmr" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.000519 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/373c7fdc-f660-4323-a503-3e7a0dedb865-env-overrides\") pod \"ovnkube-control-plane-749d76644c-jwjmr\" (UID: \"373c7fdc-f660-4323-a503-3e7a0dedb865\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwjmr" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.001598 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbwb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbb1ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbb1ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ec37a618991f1e7e06c04c95db8c40ac18b451bcb08ba568bad540481a740c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ec37a618991f1e7e06c04c95db8c40ac18b451bcb08ba568bad540481a740c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbwb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:29Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.009218 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/373c7fdc-f660-4323-a503-3e7a0dedb865-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-jwjmr\" (UID: \"373c7fdc-f660-4323-a503-3e7a0dedb865\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwjmr" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.016205 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d3d4420-fb67-443a-8b97-195c4cf223e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7aff8b9b062294cc403e12f60e4e694e626aa9780ae652380afc5d5ebeb25e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ce8749c52af3d1b2b156b9adce27a80b026abdb2dc22308199f1e822ec9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77efb0cd063e6d566f75078ce7a1f2cfd31508d5024a101ebcb3aa3c609aaae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac36b7f7721d7c5cdcb9e2254974dab062cca2723d4e723582061ed2696e04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:30Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.019711 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnjzh\" (UniqueName: \"kubernetes.io/projected/373c7fdc-f660-4323-a503-3e7a0dedb865-kube-api-access-pnjzh\") pod \"ovnkube-control-plane-749d76644c-jwjmr\" (UID: \"373c7fdc-f660-4323-a503-3e7a0dedb865\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwjmr" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.030150 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:30Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.046412 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4192277cec4bacdf5b03306055ddad9476d291a086f54f61e8e84690c81bb459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:30Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.059652 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-prw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f37b1b5c-1cdc-4a08-9ea3-03dad00b5797\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d772471b770813d3613512f5748b5bbd05c06733053b007756c12b0bc41b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt6g4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-prw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:30Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.073926 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4192277cec4bacdf5b03306055ddad9476d291a086f54f61e8e84690c81bb459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:30Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.085314 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-prw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f37b1b5c-1cdc-4a08-9ea3-03dad00b5797\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d772471b770813d3613512f5748b5bbd05c06733053b007756c12b0bc41b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt6g4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-prw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:30Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.091732 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.091764 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.091775 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.091797 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.091810 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:30Z","lastTransitionTime":"2025-12-09T08:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.099444 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd790c5e-461b-4682-88a8-e915bb3558fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56160ed128040dde583c3142616d9af9180427b9e32520d6330fd166e9c4207b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57fe092832b333e8fa6389429ad3fd571c3e0162f619fdd69efcb0b84e88bac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d7b3741b79ca6fb3fc12acaa290b9d7accb765f36a88f0b8fb6e6362c6596a58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T08:44:14Z\\\",\\\"message\\\":\\\"file observer\\\\nW1209 08:44:13.973493 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 08:44:13.973680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 08:44:13.974644 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-881789052/tls.crt::/tmp/serving-cert-881789052/tls.key\\\\\\\"\\\\nI1209 08:44:14.287293 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 08:44:14.289941 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 08:44:14.289961 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 08:44:14.289980 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 08:44:14.289985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 08:44:14.295151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 08:44:14.295345 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1209 08:44:14.295166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1209 08:44:14.295383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 08:44:14.295466 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 08:44:14.295492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 08:44:14.295497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 08:44:14.295501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1209 08:44:14.297957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a164888526a3ab84fbd409540bc54403fadd586a0583e323d5ca1b10c81cbfc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:30Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.109549 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mq8pp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a680f02-0367-4814-9c73-aa5959bf952f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c30204fb4323c7780efec70dabc9074bcda3d7a20c81af161920f5c74075b4b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98fg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mq8pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:30Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.119498 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwjmr" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.129924 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8671a7441f6025ed84963005731973ef4ff4215eadfddd1d856aa80a5a059dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7sr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:30Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.141620 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwjmr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373c7fdc-f660-4323-a503-3e7a0dedb865\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnjzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnjzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwjmr\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:30Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:30 crc kubenswrapper[4786]: W1209 08:44:30.142287 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod373c7fdc_f660_4323_a503_3e7a0dedb865.slice/crio-d7d3c0419d525db95f5f57a30cad34f8a1d6e9e4aa5b78863a9603c03d6e5374 WatchSource:0}: Error finding container d7d3c0419d525db95f5f57a30cad34f8a1d6e9e4aa5b78863a9603c03d6e5374: Status 404 returned error can't find the container with id d7d3c0419d525db95f5f57a30cad34f8a1d6e9e4aa5b78863a9603c03d6e5374 Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.158981 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:30Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.177097 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50593d0d53a0888b66905e18d3ef9ae44e67a9d290aad760710060fb3ed14393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4ad85cbd79a4967d59f069ebadb27a1cd79229d287c2f1c9955396c71a1506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:30Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.192345 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-27hfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0a865e2-8504-473d-a23f-fc682d053a9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e7465beaa6a3df0b0ae7338d3035b58b92078b8d0b9646f5b59eccb66b4a505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-27hfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:30Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.193839 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:30 crc 
kubenswrapper[4786]: I1209 08:44:30.193885 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.193897 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.193915 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.193926 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:30Z","lastTransitionTime":"2025-12-09T08:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.207634 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d3d4420-fb67-443a-8b97-195c4cf223e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7aff8b9b062294cc403e12f60e4e694e626aa9780ae652380afc5d5ebeb25e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ce8749c52af3d1b2b156b9adce27a80b026abdb2dc22308199f1e822ec9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77efb0cd063e6d566f75078ce7a1f2cfd31508d5024a101ebcb3aa3c609aaae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac36b7f7721d7c5cdcb9e2254974dab062cca2723d4e723582061ed2696e04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:30Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.219598 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:30Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.230962 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da17081a769df0b66be89013cce4c70d442cfd51b32fdb623bf94052cbcad5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-09T08:44:30Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.243604 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:30Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.253756 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c502d4-7f9e-4d39-a197-fa70dc4a56d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2d2cb4aa779f69cd01c7af8dd377a93fa75bc95ded7acd32a891de283722a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de20e796c231cbc6d525978434351e46d932f7d9
9236a97c078b8147ce5dabd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86k5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:30Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.267524 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbwb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbb1ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbb1ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ec37a618991f1e7e06c04c95db8c40ac18b451bcb08ba568bad540481a740c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ec37a618991f1e7e06c04c95db8c40ac18b451bcb08ba568bad540481a740c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbwb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:30Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.296649 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.296695 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.296705 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.296720 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.296730 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:30Z","lastTransitionTime":"2025-12-09T08:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.399075 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.399127 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.399139 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.399157 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.399168 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:30Z","lastTransitionTime":"2025-12-09T08:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.502608 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.502684 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.502707 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.502739 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.502762 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:30Z","lastTransitionTime":"2025-12-09T08:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.605303 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.605368 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.605381 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.605403 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.605418 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:30Z","lastTransitionTime":"2025-12-09T08:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.708579 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.708669 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.708689 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.708719 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.708737 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:30Z","lastTransitionTime":"2025-12-09T08:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.769681 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwjmr" event={"ID":"373c7fdc-f660-4323-a503-3e7a0dedb865","Type":"ContainerStarted","Data":"d7d3c0419d525db95f5f57a30cad34f8a1d6e9e4aa5b78863a9603c03d6e5374"} Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.810293 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.810334 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.810345 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.810363 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.810374 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:30Z","lastTransitionTime":"2025-12-09T08:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.913301 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.913339 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.913348 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.913364 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:30 crc kubenswrapper[4786]: I1209 08:44:30.913397 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:30Z","lastTransitionTime":"2025-12-09T08:44:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.015989 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.016368 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.016587 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.016754 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.016870 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:31Z","lastTransitionTime":"2025-12-09T08:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.119269 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.119300 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.119309 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.119323 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.119332 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:31Z","lastTransitionTime":"2025-12-09T08:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.187790 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 08:44:31 crc kubenswrapper[4786]: E1209 08:44:31.187953 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.188316 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 08:44:31 crc kubenswrapper[4786]: E1209 08:44:31.188389 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.190383 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:44:31 crc kubenswrapper[4786]: E1209 08:44:31.190625 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.221872 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.221918 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.221930 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.221956 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.221967 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:31Z","lastTransitionTime":"2025-12-09T08:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.354708 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.354763 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.354775 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.354794 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.354806 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:31Z","lastTransitionTime":"2025-12-09T08:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.457243 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.457278 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.457289 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.457307 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.457321 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:31Z","lastTransitionTime":"2025-12-09T08:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.559448 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.559486 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.559497 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.559515 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.559527 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:31Z","lastTransitionTime":"2025-12-09T08:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.662185 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.662232 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.662241 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.662256 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.662265 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:31Z","lastTransitionTime":"2025-12-09T08:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.729934 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-v58s4"] Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.748863 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v58s4" Dec 09 08:44:31 crc kubenswrapper[4786]: E1209 08:44:31.748982 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v58s4" podUID="e6f68306-ac39-4d61-8c27-12d69cc49a4f" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.766787 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.766837 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.766849 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.766868 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.766880 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:31Z","lastTransitionTime":"2025-12-09T08:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.775056 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50593d0d53a0888b66905e18d3ef9ae44e67a9d290aad760710060fb3ed14393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4ad85cbd79a4967d59f069ebadb27a1cd79229d287c2f1c9955396c71a1506\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:31Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.782896 4786 generic.go:334] "Generic (PLEG): container finished" podID="9e48cbd1-e567-45b7-902a-fb4a0daa2fd3" containerID="0c10eb59172a342201e5c0fae9b836d296608057f6613791cab920df8aaa5850" exitCode=0 Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.782962 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zbwb7" event={"ID":"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3","Type":"ContainerDied","Data":"0c10eb59172a342201e5c0fae9b836d296608057f6613791cab920df8aaa5850"} Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.785034 4786 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwjmr" event={"ID":"373c7fdc-f660-4323-a503-3e7a0dedb865","Type":"ContainerStarted","Data":"de62d60183229189c692a0ff01db0e36332ecde2d2711846121c899befbbde08"} Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.785081 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwjmr" event={"ID":"373c7fdc-f660-4323-a503-3e7a0dedb865","Type":"ContainerStarted","Data":"6af8039885b93a07e1e521a89a1d240f914eb77cba9bb6db24d7ecdb1ba3954d"} Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.792993 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-27hfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0a865e2-8504-473d-a23f-fc682d053a9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e7465beaa6a3df0b0ae7338d3035b58b92078b8d0b9646f5b59eccb66b4a505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",
\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-27hfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:31Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.805209 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v58s4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f68306-ac39-4d61-8c27-12d69cc49a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhg5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhg5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v58s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:31Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:31 crc 
kubenswrapper[4786]: I1209 08:44:31.824511 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:31Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.839094 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:31Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.851120 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e6f68306-ac39-4d61-8c27-12d69cc49a4f-metrics-certs\") pod \"network-metrics-daemon-v58s4\" (UID: \"e6f68306-ac39-4d61-8c27-12d69cc49a4f\") " pod="openshift-multus/network-metrics-daemon-v58s4" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.851314 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-lhg5l\" (UniqueName: \"kubernetes.io/projected/e6f68306-ac39-4d61-8c27-12d69cc49a4f-kube-api-access-lhg5l\") pod \"network-metrics-daemon-v58s4\" (UID: \"e6f68306-ac39-4d61-8c27-12d69cc49a4f\") " pod="openshift-multus/network-metrics-daemon-v58s4" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.852487 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da17081a769df0b66be89013cce4c70d442cfd51b32fdb623bf94052cbcad5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mount
Path\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:31Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.866276 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:31Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.870835 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.870970 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.871055 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.871149 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.871227 4786 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:31Z","lastTransitionTime":"2025-12-09T08:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.881827 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c502d4-7f9e-4d39-a197-fa70dc4a56d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2d2cb4aa779f69cd01c7af8dd377a93fa75bc95ded7acd32a891de283722a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de20e796c231cbc6d525978434351e46d932f7d99236a97c078b8147ce5dabd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86k5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-12-09T08:44:31Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.899701 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbwb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbb1ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbb1ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ec37a618991f1e7e06c04c95db8c40ac18b451bcb08ba568bad540481a740c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ec37a618991f1e7e06c04c95db8c40ac18b451bcb08ba568bad540481a740c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbwb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:31Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.912359 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d3d4420-fb67-443a-8b97-195c4cf223e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7aff8b9b062294cc403e12f60e4e694e626aa9780ae652380afc5d5ebeb25e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ce8749c52af3d1b2b156b9adce27a80b026abdb2dc22308199f1e822ec9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77efb0cd063e6d566f75078ce7a1f2cfd31508d5024a101ebcb3aa3c609aaae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac36b7f7721d7c5cdcb9e2254974dab062cca2723d4e723582061ed2696e04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:31Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.925823 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-prw2c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f37b1b5c-1cdc-4a08-9ea3-03dad00b5797\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d772471b770813d3613512f5748b5bbd05c06733053b007756c12b0bc41b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt6g4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-prw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:31Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.941205 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4192277cec4bacdf5b03306055ddad9476d291a086f54f61e8e84690c81bb459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:31Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.952610 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e6f68306-ac39-4d61-8c27-12d69cc49a4f-metrics-certs\") pod \"network-metrics-daemon-v58s4\" (UID: \"e6f68306-ac39-4d61-8c27-12d69cc49a4f\") " pod="openshift-multus/network-metrics-daemon-v58s4" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.952702 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhg5l\" (UniqueName: \"kubernetes.io/projected/e6f68306-ac39-4d61-8c27-12d69cc49a4f-kube-api-access-lhg5l\") pod \"network-metrics-daemon-v58s4\" (UID: \"e6f68306-ac39-4d61-8c27-12d69cc49a4f\") " pod="openshift-multus/network-metrics-daemon-v58s4" Dec 09 08:44:31 crc kubenswrapper[4786]: E1209 08:44:31.953506 4786 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 08:44:31 crc kubenswrapper[4786]: E1209 08:44:31.953558 4786 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/e6f68306-ac39-4d61-8c27-12d69cc49a4f-metrics-certs podName:e6f68306-ac39-4d61-8c27-12d69cc49a4f nodeName:}" failed. No retries permitted until 2025-12-09 08:44:32.453541099 +0000 UTC m=+38.337162325 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e6f68306-ac39-4d61-8c27-12d69cc49a4f-metrics-certs") pod "network-metrics-daemon-v58s4" (UID: "e6f68306-ac39-4d61-8c27-12d69cc49a4f") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.954264 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mq8pp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a680f02-0367-4814-9c73-aa5959bf952f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c30204fb4323c7780efec70dabc9074bcda3d7a20c81af161920f5c74075b4b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98fg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mq8pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:31Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.973369 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.973757 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.973767 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.973802 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.973813 4786 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:31Z","lastTransitionTime":"2025-12-09T08:44:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.976409 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhg5l\" (UniqueName: \"kubernetes.io/projected/e6f68306-ac39-4d61-8c27-12d69cc49a4f-kube-api-access-lhg5l\") pod \"network-metrics-daemon-v58s4\" (UID: \"e6f68306-ac39-4d61-8c27-12d69cc49a4f\") " pod="openshift-multus/network-metrics-daemon-v58s4" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.977394 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8671a7441f6025ed84963005731973ef4ff4215eadfddd1d856aa80a5a059dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mou
ntPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c\\\",\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7sr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:31Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:31 crc kubenswrapper[4786]: I1209 08:44:31.994160 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwjmr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373c7fdc-f660-4323-a503-3e7a0dedb865\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnjzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnjzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168
.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwjmr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:31Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.008319 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd790c5e-461b-4682-88a8-e915bb3558fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56160ed128040dde583c3142616d9af9180427b9e32520d6330fd166e9c4207b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57fe092832b333e8fa6389429ad3fd571c3e0162f619fdd69efcb0b84e88bac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d7b3741b79ca6fb3fc12acaa290b9d7accb765f36a88f0b8fb6e6362c6596a58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T08:44:14Z\\\",\\\"message\\\":\\\"file observer\\\\nW1209 08:44:13.973493 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 08:44:13.973680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 08:44:13.974644 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-881789052/tls.crt::/tmp/serving-cert-881789052/tls.key\\\\\\\"\\\\nI1209 08:44:14.287293 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 08:44:14.289941 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 08:44:14.289961 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 08:44:14.289980 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 08:44:14.289985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 08:44:14.295151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 08:44:14.295345 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1209 08:44:14.295166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1209 08:44:14.295383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 08:44:14.295466 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 08:44:14.295492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 08:44:14.295497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 08:44:14.295501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1209 08:44:14.297957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a164888526a3ab84fbd409540bc54403fadd586a0583e323d5ca1b10c81cbfc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:32Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.020459 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da17081a769df0b66be89013cce4c70d442cfd51b32fdb623bf94052cbcad5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:32Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.039200 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:32Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.056587 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c502d4-7f9e-4d39-a197-fa70dc4a56d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2d2cb4aa779f69cd01c7af8dd377a93fa75bc95ded7acd32a891de283722a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de20e796c231cbc6d525978434351e46d932f7d9
9236a97c078b8147ce5dabd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86k5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:32Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.072863 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbwb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbb1ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbb1ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ec37a618991f1e7e06c04c95db8c40ac18b451bcb08ba568bad540481a740c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ec37a618991f1e7e06c04c95db8c40ac18b451bcb08ba568bad540481a740c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c10eb59172a342201e5c0fae9b836d296608057f6613791cab920df8aaa5850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c10eb59172a342201e5c0fae9b836d296608057f6613791cab920df8aaa5850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbwb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:32Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.086049 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.086107 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.086116 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.086149 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.086158 4786 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:32Z","lastTransitionTime":"2025-12-09T08:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.089344 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d3d4420-fb67-443a-8b97-195c4cf223e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7aff8b9b062294cc403e12f60e4e694e626aa9780ae652380afc5d5ebeb25e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\
"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ce8749c52af3d1b2b156b9adce27a80b026abdb2dc22308199f1e822ec9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77efb0cd063e6d566f75078ce7a1f2cfd31508d5024a101ebcb3aa3c609aaae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",
\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac36b7f7721d7c5cdcb9e2254974dab062cca2723d4e723582061ed2696e04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:32Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.104802 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:32Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.117261 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4192277cec4bacdf5b03306055ddad9476d291a086f54f61e8e84690c81bb459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:32Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.129010 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-prw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f37b1b5c-1cdc-4a08-9ea3-03dad00b5797\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d772471b770813d3613512f5748b5bbd05c06733053b007756c12b0bc41b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt6g4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-prw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:32Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.141281 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mq8pp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a680f02-0367-4814-9c73-aa5959bf952f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c30204fb4323c7780efec70dabc9074bcda3d7a20c81af161920f5c74075b4b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98fg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mq8pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:32Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.162614 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8671a7441f6025ed84963005731973ef4ff4215eadfddd1d856aa80a5a059dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7sr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:32Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.176574 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwjmr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373c7fdc-f660-4323-a503-3e7a0dedb865\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6af8039885b93a07e1e521a89a1d240f914eb77cba9bb6db24d7ecdb1ba3954d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnjzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de62d60183229189c692a0ff01db0e36332ecde2d2711846121c899befbbde08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnjzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwjmr\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:32Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.189369 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.189445 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.189461 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.189482 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.189493 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:32Z","lastTransitionTime":"2025-12-09T08:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.190606 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd790c5e-461b-4682-88a8-e915bb3558fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56160ed128040dde583c3142616d9af9180427b9e32520d6330fd166e9c4207b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57fe092832b333e8fa6389429ad3fd571c3e0162f619fdd69efcb0b84e88bac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d7b3741b79ca6fb3fc12acaa290b9d7accb765f36a88f0b8fb6e6362c6596a58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T08:44:14Z\\\",\\\"message\\\":\\\"file observer\\\\nW1209 08:44:13.973493 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 08:44:13.973680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 08:44:13.974644 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-881789052/tls.crt::/tmp/serving-cert-881789052/tls.key\\\\\\\"\\\\nI1209 08:44:14.287293 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 08:44:14.289941 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 08:44:14.289961 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 08:44:14.289980 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 08:44:14.289985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 08:44:14.295151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 08:44:14.295345 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1209 08:44:14.295166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1209 08:44:14.295383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 08:44:14.295466 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 08:44:14.295492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 08:44:14.295497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 08:44:14.295501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1209 08:44:14.297957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a164888526a3ab84fbd409540bc54403fadd586a0583e323d5ca1b10c81cbfc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:32Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.202606 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v58s4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f68306-ac39-4d61-8c27-12d69cc49a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhg5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhg5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v58s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:32Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:32 crc 
kubenswrapper[4786]: I1209 08:44:32.214742 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:32Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.226869 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50593d0d53a0888b66905e18d3ef9ae44e67a9d290aad760710060fb3ed14393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4ad85cbd79a4967d59f069ebadb27a1cd79229d287c2f1c9955396c71a1506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:32Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.242751 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-27hfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0a865e2-8504-473d-a23f-fc682d053a9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e7465beaa6a3df0b0ae7338d3035b58b92078b8d0b9646f5b59eccb66b4a505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-27hfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:32Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.291729 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:32 crc 
kubenswrapper[4786]: I1209 08:44:32.291966 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.292029 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.292135 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.292196 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:32Z","lastTransitionTime":"2025-12-09T08:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.394798 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.394854 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.394874 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.394918 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.394933 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:32Z","lastTransitionTime":"2025-12-09T08:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.457339 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e6f68306-ac39-4d61-8c27-12d69cc49a4f-metrics-certs\") pod \"network-metrics-daemon-v58s4\" (UID: \"e6f68306-ac39-4d61-8c27-12d69cc49a4f\") " pod="openshift-multus/network-metrics-daemon-v58s4" Dec 09 08:44:32 crc kubenswrapper[4786]: E1209 08:44:32.457624 4786 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 08:44:32 crc kubenswrapper[4786]: E1209 08:44:32.457797 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6f68306-ac39-4d61-8c27-12d69cc49a4f-metrics-certs podName:e6f68306-ac39-4d61-8c27-12d69cc49a4f nodeName:}" failed. No retries permitted until 2025-12-09 08:44:33.457777846 +0000 UTC m=+39.341399072 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e6f68306-ac39-4d61-8c27-12d69cc49a4f-metrics-certs") pod "network-metrics-daemon-v58s4" (UID: "e6f68306-ac39-4d61-8c27-12d69cc49a4f") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.497813 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.497852 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.497861 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.497877 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.497886 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:32Z","lastTransitionTime":"2025-12-09T08:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.601778 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.601865 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.601882 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.601904 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.601919 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:32Z","lastTransitionTime":"2025-12-09T08:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.704450 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.704489 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.704499 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.704515 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.704526 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:32Z","lastTransitionTime":"2025-12-09T08:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.791405 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7sr4q_c8ebe4be-af09-4f22-9dee-af5f7d34bccf/ovnkube-controller/0.log" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.793508 4786 generic.go:334] "Generic (PLEG): container finished" podID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" containerID="8671a7441f6025ed84963005731973ef4ff4215eadfddd1d856aa80a5a059dd6" exitCode=1 Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.793561 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" event={"ID":"c8ebe4be-af09-4f22-9dee-af5f7d34bccf","Type":"ContainerDied","Data":"8671a7441f6025ed84963005731973ef4ff4215eadfddd1d856aa80a5a059dd6"} Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.794212 4786 scope.go:117] "RemoveContainer" containerID="8671a7441f6025ed84963005731973ef4ff4215eadfddd1d856aa80a5a059dd6" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.800381 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-zbwb7" event={"ID":"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3","Type":"ContainerStarted","Data":"379f087d607d19a0cb71481b2a7236d2d4769898f612849793c7d0d0f8709e02"} Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.809029 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.809080 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.809096 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.809122 4786 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.809142 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:32Z","lastTransitionTime":"2025-12-09T08:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.810264 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mq8pp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a680f02-0367-4814-9c73-aa5959bf952f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c30204fb4323c7780efec70dabc9074bcda3d7a20c81af161920f5c74075b4b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98fg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mq8pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:32Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.830185 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8671a7441f6025ed84963005731973ef4ff4215eadfddd1d856aa80a5a059dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8671a7441f6025ed84963005731973ef4ff4215eadfddd1d856aa80a5a059dd6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T08:44:32Z\\\",\\\"message\\\":\\\"ping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 08:44:32.128839 6263 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI1209 08:44:32.129048 6263 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1209 08:44:32.129473 6263 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1209 08:44:32.129521 6263 factory.go:656] Stopping watch factory\\\\nI1209 08:44:32.129161 6263 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1209 08:44:32.129542 6263 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1209 08:44:32.129544 6263 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1209 08:44:32.129190 6263 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1209 08:44:32.129208 6263 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1209 08:44:32.129259 6263 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1209 08:44:32.129164 6263 handler.go:208] Removed *v1.Node event handler 7\\\\nI1209 08:44:32.129584 6263 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7sr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:32Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.842226 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwjmr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"373c7fdc-f660-4323-a503-3e7a0dedb865\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6af8039885b93a07e1e521a89a1d240f914eb77cba9bb6db24d7ecdb1ba3954d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnjzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de62d60183229189c692a0ff01db0e36332ec
de2d2711846121c899befbbde08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnjzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwjmr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:32Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.857708 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd790c5e-461b-4682-88a8-e915bb3558fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56160ed128040dde583c3142616d9af9180427b9e32520d6330fd166e9c4207b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57fe092832b333e8fa6389429ad3fd571c3e0162f619fdd69efcb0b84e88bac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b3741b79ca6fb3fc12acaa290b9d7accb765f36a88f0b8fb6e6362c6596a58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T08:44:14Z\\\",\\\"message\\\":\\\"file observer\\\\nW1209 08:44:13.973493 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 08:44:13.973680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 08:44:13.974644 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-881789052/tls.crt::/tmp/serving-cert-881789052/tls.key\\\\\\\"\\\\nI1209 08:44:14.287293 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 08:44:14.289941 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 08:44:14.289961 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 08:44:14.289980 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 08:44:14.289985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 08:44:14.295151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 08:44:14.295345 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1209 08:44:14.295166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1209 08:44:14.295383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 08:44:14.295466 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 
08:44:14.295492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 08:44:14.295497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 08:44:14.295501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1209 08:44:14.297957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a164888526a3ab84fbd409540bc54403fadd586a0583e323d5ca1b10c81cbfc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:32Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.872092 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50593d0d53a0888b66905e18d3ef9ae44e67a9d290aad760710060fb3ed14393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4ad85cbd79a4967d59f069ebadb27a1cd79229d287c2f1c9955396c71a1506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:32Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.886396 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-27hfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0a865e2-8504-473d-a23f-fc682d053a9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e7465beaa6a3df0b0ae7338d3035b58b92078b8d0b9646f5b59eccb66b4a505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-27hfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:32Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.899198 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v58s4" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f68306-ac39-4d61-8c27-12d69cc49a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhg5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhg5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v58s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:32Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:32 crc 
kubenswrapper[4786]: I1209 08:44:32.911532 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.911586 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.911600 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.911617 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.911629 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:32Z","lastTransitionTime":"2025-12-09T08:44:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.912517 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:32Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.925230 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:32Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.937962 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da17081a769df0b66be89013cce4c70d442cfd51b32fdb623bf94052cbcad5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-09T08:44:32Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.952601 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:32Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.965549 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c502d4-7f9e-4d39-a197-fa70dc4a56d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2d2cb4aa779f69cd01c7af8dd377a93fa75bc95ded7acd32a891de283722a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de20e796c231cbc6d525978434351e46d932f7d9
9236a97c078b8147ce5dabd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86k5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:32Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.981705 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbwb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbb1ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbb1ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ec37a618991f1e7e06c04c95db8c40ac18b451bcb08ba568bad540481a740c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ec37a618991f1e7e06c04c95db8c40ac18b451bcb08ba568bad540481a740c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c10eb59172a342201e5c0fae9b836d296608057f6613791cab920df8aaa5850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c10eb59172a342201e5c0fae9b836d296608057f6613791cab920df8aaa5850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbwb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:32Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.990990 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:44:32 crc kubenswrapper[4786]: E1209 08:44:32.991206 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-09 08:44:48.991148516 +0000 UTC m=+54.874769742 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.991483 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.991691 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:44:32 crc kubenswrapper[4786]: E1209 08:44:32.991794 4786 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 08:44:32 crc kubenswrapper[4786]: E1209 08:44:32.991861 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-12-09 08:44:48.991843583 +0000 UTC m=+54.875464819 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 08:44:32 crc kubenswrapper[4786]: E1209 08:44:32.992381 4786 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 08:44:32 crc kubenswrapper[4786]: E1209 08:44:32.992471 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 08:44:48.992455808 +0000 UTC m=+54.876077124 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 08:44:32 crc kubenswrapper[4786]: I1209 08:44:32.995875 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d3d4420-fb67-443a-8b97-195c4cf223e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7aff8b9b062294cc403e12f60e4e694e626aa9780ae652380afc5d5ebeb25e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ce8749c52af3d1b2b156b9adce27a80b026abdb2dc22308199f1e822ec9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77efb0cd063e6d566f75078ce7a1f2cfd31508d5024a101ebcb3aa3c609aaae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac36b7f7
721d7c5cdcb9e2254974dab062cca2723d4e723582061ed2696e04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:32Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.012715 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-prw2c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f37b1b5c-1cdc-4a08-9ea3-03dad00b5797\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d772471b770813d3613512f5748b5bbd05c06733053b007756c12b0bc41b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt6g4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-prw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:33Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.015679 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.015728 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.015740 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.015760 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.015774 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:33Z","lastTransitionTime":"2025-12-09T08:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.028702 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4192277cec4bacdf5b03306055ddad9476d291a086f54f61e8e84690c81bb459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:33Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.044352 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4192277cec4bacdf5b03306055ddad9476d291a086f54f61e8e84690c81bb459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:33Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.056915 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-prw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f37b1b5c-1cdc-4a08-9ea3-03dad00b5797\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d772471b770813d3613512f5748b5bbd05c06733053b007756c12b0bc41b06\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt6g4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-prw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:33Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.068000 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mq8pp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a680f02-0367-4814-9c73-aa5959bf952f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c30204fb4323c7780efec70dabc9074bcda3d7a20c81af161920f5c74075b4b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98fg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mq8pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:33Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.092858 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.092940 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.092875 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8671a7441f6025ed84963005731973ef4ff4215eadfddd1d856aa80a5a059dd6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8671a7441f6025ed84963005731973ef4ff4215eadfddd1d856aa80a5a059dd6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T08:44:32Z\\\",\\\"message\\\":\\\"ping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 08:44:32.128839 6263 reflector.go:311] Stopping reflector *v1.Namespace (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI1209 08:44:32.129048 6263 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1209 08:44:32.129473 6263 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1209 08:44:32.129521 6263 factory.go:656] Stopping watch factory\\\\nI1209 08:44:32.129161 6263 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1209 08:44:32.129542 6263 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1209 08:44:32.129544 6263 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1209 08:44:32.129190 6263 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1209 08:44:32.129208 6263 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1209 08:44:32.129259 6263 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1209 08:44:32.129164 6263 handler.go:208] Removed *v1.Node event handler 7\\\\nI1209 08:44:32.129584 6263 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7sr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:33Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:33 crc kubenswrapper[4786]: E1209 08:44:33.093072 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 08:44:33 crc kubenswrapper[4786]: E1209 08:44:33.093100 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 08:44:33 crc kubenswrapper[4786]: E1209 08:44:33.093112 4786 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 08:44:33 crc kubenswrapper[4786]: E1209 08:44:33.093117 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: 
object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 08:44:33 crc kubenswrapper[4786]: E1209 08:44:33.093158 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 08:44:33 crc kubenswrapper[4786]: E1209 08:44:33.093178 4786 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 08:44:33 crc kubenswrapper[4786]: E1209 08:44:33.093186 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-09 08:44:49.093171508 +0000 UTC m=+54.976792734 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 08:44:33 crc kubenswrapper[4786]: E1209 08:44:33.093260 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-09 08:44:49.09324656 +0000 UTC m=+54.976867876 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.107021 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwjmr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373c7fdc-f660-4323-a503-3e7a0dedb865\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6af8039885b93a07e1e521a89a1d240f914eb77cba9bb6db24d7ecdb1ba3954d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnjzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de62d60183229189c692a0ff01db0e36332ecde2d2711846121c899befbbde08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnjzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwjmr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:33Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.118695 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.118735 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.118745 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.118759 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.118773 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:33Z","lastTransitionTime":"2025-12-09T08:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.123503 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd790c5e-461b-4682-88a8-e915bb3558fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56160ed128040dde583c3142616d9af9180427b9e32520d6330fd166e9c4207b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57fe092832b333e8fa6389429ad3fd571c3e0162f619fdd69efcb0b84e88bac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://d7b3741b79ca6fb3fc12acaa290b9d7accb765f36a88f0b8fb6e6362c6596a58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T08:44:14Z\\\",\\\"message\\\":\\\"file observer\\\\nW1209 08:44:13.973493 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 08:44:13.973680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 08:44:13.974644 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-881789052/tls.crt::/tmp/serving-cert-881789052/tls.key\\\\\\\"\\\\nI1209 08:44:14.287293 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 08:44:14.289941 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 08:44:14.289961 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 08:44:14.289980 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 08:44:14.289985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 08:44:14.295151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 08:44:14.295345 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1209 08:44:14.295166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1209 08:44:14.295383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 08:44:14.295466 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 08:44:14.295492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 08:44:14.295497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 08:44:14.295501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1209 08:44:14.297957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a164888526a3ab84fbd409540bc54403fadd586a0583e323d5ca1b10c81cbfc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"n
ame\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:33Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.137040 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-27hfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0a865e2-8504-473d-a23f-fc682d053a9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e7465beaa6a3df0b0ae7338d3035b58b92078b8d0b9646f5b59eccb66b4a505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[
{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-27hfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:33Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.148760 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v58s4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f68306-ac39-4d61-8c27-12d69cc49a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhg5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhg5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v58s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:33Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:33 crc 
kubenswrapper[4786]: I1209 08:44:33.165939 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:33Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.185241 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50593d0d53a0888b66905e18d3ef9ae44e67a9d290aad760710060fb3ed14393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4ad85cbd79a4967d59f069ebadb27a1cd79229d287c2f1c9955396c71a1506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:33Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.187612 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.187651 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.187675 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 08:44:33 crc kubenswrapper[4786]: E1209 08:44:33.187861 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 08:44:33 crc kubenswrapper[4786]: E1209 08:44:33.187991 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 08:44:33 crc kubenswrapper[4786]: E1209 08:44:33.188118 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.188880 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-v58s4" Dec 09 08:44:33 crc kubenswrapper[4786]: E1209 08:44:33.189569 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v58s4" podUID="e6f68306-ac39-4d61-8c27-12d69cc49a4f" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.205836 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:33Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.218155 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da17081a769df0b66be89013cce4c70d442cfd51b32fdb623bf94052cbcad5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-09T08:44:33Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.221194 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.221474 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.221641 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.221797 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.221974 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:33Z","lastTransitionTime":"2025-12-09T08:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.234770 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:33Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.247651 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c502d4-7f9e-4d39-a197-fa70dc4a56d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2d2cb4aa779f69cd01c7af8dd377a93fa75bc95ded7acd32a891de283722a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de20e796c231cbc6d525978434351e46d932f7d9
9236a97c078b8147ce5dabd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86k5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:33Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.263228 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbwb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://379f087d607d19a0cb71481b2a7236d2d4769898f612849793c7d0d0f8709e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:21Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbb1
ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbb1ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ec37a618991f1e7e06c04c95db8c40ac18b451bcb08ba568bad540481a740c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ec37a618991f1e7e06c04c95db8c40ac18b451bcb08ba568bad540481a740c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c10eb59172a342201e5c0fae9b836d296608057f6613791cab920df8aaa5850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c10eb59172a342201e5c0fae9b836d296608057f6613791cab920df8aaa5850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbwb7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:33Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.326262 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.326350 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.326368 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.326411 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.326477 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:33Z","lastTransitionTime":"2025-12-09T08:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.425928 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d3d4420-fb67-443a-8b97-195c4cf223e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7aff8b9b062294cc403e12f60e4e694e626aa9780ae652380afc5d5ebeb25e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ce8749c52
af3d1b2b156b9adce27a80b026abdb2dc22308199f1e822ec9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77efb0cd063e6d566f75078ce7a1f2cfd31508d5024a101ebcb3aa3c609aaae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac36b7f7721d7c5cdcb9e2254974dab062cca2723d4e723582061ed2696e04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:33Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.430051 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.430127 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.430145 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.430175 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.430193 4786 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:33Z","lastTransitionTime":"2025-12-09T08:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.508546 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e6f68306-ac39-4d61-8c27-12d69cc49a4f-metrics-certs\") pod \"network-metrics-daemon-v58s4\" (UID: \"e6f68306-ac39-4d61-8c27-12d69cc49a4f\") " pod="openshift-multus/network-metrics-daemon-v58s4" Dec 09 08:44:33 crc kubenswrapper[4786]: E1209 08:44:33.508718 4786 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 08:44:33 crc kubenswrapper[4786]: E1209 08:44:33.508790 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6f68306-ac39-4d61-8c27-12d69cc49a4f-metrics-certs podName:e6f68306-ac39-4d61-8c27-12d69cc49a4f nodeName:}" failed. No retries permitted until 2025-12-09 08:44:35.508775811 +0000 UTC m=+41.392397037 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e6f68306-ac39-4d61-8c27-12d69cc49a4f-metrics-certs") pod "network-metrics-daemon-v58s4" (UID: "e6f68306-ac39-4d61-8c27-12d69cc49a4f") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.533758 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.533815 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.533831 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.533859 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.533875 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:33Z","lastTransitionTime":"2025-12-09T08:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.637669 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.637760 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.637773 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.637794 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.637808 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:33Z","lastTransitionTime":"2025-12-09T08:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.740127 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.740171 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.740185 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.740203 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.740215 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:33Z","lastTransitionTime":"2025-12-09T08:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.805215 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7sr4q_c8ebe4be-af09-4f22-9dee-af5f7d34bccf/ovnkube-controller/0.log" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.808590 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" event={"ID":"c8ebe4be-af09-4f22-9dee-af5f7d34bccf","Type":"ContainerStarted","Data":"cce596179b4556f7ed491d4349bc32ebdf5d3f159de5bdd52766ea3245c342b4"} Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.808727 4786 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.826930 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4192277cec4bacdf5b03306055ddad9476d291a086f54f61e8e84690c81bb459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4c
f86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:33Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.842943 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.842998 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.843011 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.843033 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.843045 4786 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:33Z","lastTransitionTime":"2025-12-09T08:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.843888 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-prw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f37b1b5c-1cdc-4a08-9ea3-03dad00b5797\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d772471b770813d3613512f5748b5bbd05c06733053b007756c12b0bc41b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt6g4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-prw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:33Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.863637 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce596179b4556f7ed491d4349bc32ebdf5d3f159de5bdd52766ea3245c342b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8671a7441f6025ed84963005731973ef4ff4215eadfddd1d856aa80a5a059dd6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T08:44:32Z\\\",\\\"message\\\":\\\"ping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 08:44:32.128839 6263 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 08:44:32.129048 6263 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1209 08:44:32.129473 6263 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1209 08:44:32.129521 6263 factory.go:656] Stopping watch factory\\\\nI1209 08:44:32.129161 6263 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1209 08:44:32.129542 6263 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1209 08:44:32.129544 6263 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1209 08:44:32.129190 6263 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1209 08:44:32.129208 6263 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1209 08:44:32.129259 6263 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1209 08:44:32.129164 6263 handler.go:208] Removed *v1.Node event handler 7\\\\nI1209 08:44:32.129584 6263 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7sr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:33Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.876324 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwjmr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"373c7fdc-f660-4323-a503-3e7a0dedb865\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6af8039885b93a07e1e521a89a1d240f914eb77cba9bb6db24d7ecdb1ba3954d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnjzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de62d60183229189c692a0ff01db0e36332ec
de2d2711846121c899befbbde08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnjzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwjmr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:33Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.892348 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd790c5e-461b-4682-88a8-e915bb3558fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56160ed128040dde583c3142616d9af9180427b9e32520d6330fd166e9c4207b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57fe092832b333e8fa6389429ad3fd571c3e0162f619fdd69efcb0b84e88bac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b3741b79ca6fb3fc12acaa290b9d7accb765f36a88f0b8fb6e6362c6596a58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T08:44:14Z\\\",\\\"message\\\":\\\"file observer\\\\nW1209 08:44:13.973493 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 08:44:13.973680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 08:44:13.974644 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-881789052/tls.crt::/tmp/serving-cert-881789052/tls.key\\\\\\\"\\\\nI1209 08:44:14.287293 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 08:44:14.289941 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 08:44:14.289961 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 08:44:14.289980 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 08:44:14.289985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 08:44:14.295151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 08:44:14.295345 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1209 08:44:14.295166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1209 08:44:14.295383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 08:44:14.295466 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 
08:44:14.295492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 08:44:14.295497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 08:44:14.295501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1209 08:44:14.297957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a164888526a3ab84fbd409540bc54403fadd586a0583e323d5ca1b10c81cbfc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:33Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.904592 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mq8pp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a680f02-0367-4814-9c73-aa5959bf952f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c30204fb4323c7780efec70dabc9074bcda3d7a20c81af161920f5c74075b4b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98fg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mq8pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:33Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.918902 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:33Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.933484 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50593d0d53a0888b66905e18d3ef9ae44e67a9d290aad760710060fb3ed14393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4ad85cbd79a4967d59f069ebadb27a1cd79229d287c2f1c9955396c71a1506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:33Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.945309 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.945361 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.945373 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.945474 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.945506 4786 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:33Z","lastTransitionTime":"2025-12-09T08:44:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.948462 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-27hfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0a865e2-8504-473d-a23f-fc682d053a9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e7465beaa6a3df0b0ae7338d3035b58b92078b8d0b9646f5b59eccb66b4a505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\
\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:
44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-27hfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:33Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.962075 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v58s4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f68306-ac39-4d61-8c27-12d69cc49a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhg5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhg5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v58s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:33Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:33 crc 
kubenswrapper[4786]: I1209 08:44:33.980696 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:33Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:33 crc kubenswrapper[4786]: I1209 08:44:33.993063 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c502d4-7f9e-4d39-a197-fa70dc4a56d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2d2cb4aa779f69cd01c7af8dd377a93fa75bc95ded7acd32a891de283722a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de20e796c231cbc6d525978434351e46d932f7d9
9236a97c078b8147ce5dabd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86k5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:33Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.008026 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbwb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://379f087d607d19a0cb71481b2a7236d2d4769898f612849793c7d0d0f8709e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:21Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbb1
ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbb1ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ec37a618991f1e7e06c04c95db8c40ac18b451bcb08ba568bad540481a740c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ec37a618991f1e7e06c04c95db8c40ac18b451bcb08ba568bad540481a740c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c10eb59172a342201e5c0fae9b836d296608057f6613791cab920df8aaa5850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c10eb59172a342201e5c0fae9b836d296608057f6613791cab920df8aaa5850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbwb7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:34Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.019583 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d3d4420-fb67-443a-8b97-195c4cf223e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7aff8b9b062294cc403e12f60e4e694e626aa9780ae652380afc5d5ebeb25e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ce8749c52af3d1b2b156b9adce27a80b026abdb2dc22308199f1e822ec9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77efb0cd063e6d566f75078ce7a1f2cfd31508d5024a101ebcb3aa3c609aaae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac36b7f7721d7c5cdcb9e2254974dab062cca2723d4e723582061ed2696e04c\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:34Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.031358 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:34Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.044908 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da17081a769df0b66be89013cce4c70d442cfd51b32fdb623bf94052cbcad5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-09T08:44:34Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.047474 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.047501 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.047510 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.047524 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.047533 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:34Z","lastTransitionTime":"2025-12-09T08:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.150385 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.150461 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.150476 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.150494 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.150506 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:34Z","lastTransitionTime":"2025-12-09T08:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.254541 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.254604 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.254621 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.254647 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.254664 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:34Z","lastTransitionTime":"2025-12-09T08:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.350596 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.357244 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.357325 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.357338 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.357360 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.357372 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:34Z","lastTransitionTime":"2025-12-09T08:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.460301 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.460350 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.460363 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.460385 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.460462 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:34Z","lastTransitionTime":"2025-12-09T08:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.562960 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.562997 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.563013 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.563038 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.563049 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:34Z","lastTransitionTime":"2025-12-09T08:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.666768 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.666831 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.666848 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.666873 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.666898 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:34Z","lastTransitionTime":"2025-12-09T08:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.770183 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.770419 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.770462 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.770478 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.770489 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:34Z","lastTransitionTime":"2025-12-09T08:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.814282 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7sr4q_c8ebe4be-af09-4f22-9dee-af5f7d34bccf/ovnkube-controller/1.log" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.815380 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7sr4q_c8ebe4be-af09-4f22-9dee-af5f7d34bccf/ovnkube-controller/0.log" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.818823 4786 generic.go:334] "Generic (PLEG): container finished" podID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" containerID="cce596179b4556f7ed491d4349bc32ebdf5d3f159de5bdd52766ea3245c342b4" exitCode=1 Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.818904 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" event={"ID":"c8ebe4be-af09-4f22-9dee-af5f7d34bccf","Type":"ContainerDied","Data":"cce596179b4556f7ed491d4349bc32ebdf5d3f159de5bdd52766ea3245c342b4"} Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.818975 4786 scope.go:117] "RemoveContainer" containerID="8671a7441f6025ed84963005731973ef4ff4215eadfddd1d856aa80a5a059dd6" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.820071 4786 scope.go:117] "RemoveContainer" containerID="cce596179b4556f7ed491d4349bc32ebdf5d3f159de5bdd52766ea3245c342b4" Dec 09 08:44:34 crc kubenswrapper[4786]: E1209 08:44:34.820530 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-7sr4q_openshift-ovn-kubernetes(c8ebe4be-af09-4f22-9dee-af5f7d34bccf)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" podUID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.838834 4786 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd790c5e-461b-4682-88a8-e915bb3558fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56160ed128040dde583c3142616d9af9180427b9e32520d6330fd166e9c4207b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-po
d-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57fe092832b333e8fa6389429ad3fd571c3e0162f619fdd69efcb0b84e88bac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b3741b79ca6fb3fc12acaa290b9d7accb765f36a88f0b8fb6e6362c6596a58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"ima
geID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T08:44:14Z\\\",\\\"message\\\":\\\"file observer\\\\nW1209 08:44:13.973493 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 08:44:13.973680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 08:44:13.974644 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-881789052/tls.crt::/tmp/serving-cert-881789052/tls.key\\\\\\\"\\\\nI1209 08:44:14.287293 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 08:44:14.289941 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 08:44:14.289961 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 08:44:14.289980 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 08:44:14.289985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 08:44:14.295151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 08:44:14.295345 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1209 08:44:14.295166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1209 08:44:14.295383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 08:44:14.295466 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 08:44:14.295492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 08:44:14.295497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 08:44:14.295501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1209 08:44:14.297957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a164888526a3ab84fbd409540bc54403fadd586a0583e323d5ca1b10c81cbfc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498159dfdc8936666
1bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:34Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.851853 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mq8pp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a680f02-0367-4814-9c73-aa5959bf952f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c30204fb4323c7780efec70dabc9074bcda3d7a20c81af161920f5c74075b4b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98fg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mq8pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:34Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.869630 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce596179b4556f7ed491d4349bc32ebdf5d3f159de5bdd52766ea3245c342b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8671a7441f6025ed84963005731973ef4ff4215eadfddd1d856aa80a5a059dd6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T08:44:32Z\\\",\\\"message\\\":\\\"ping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 08:44:32.128839 6263 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 08:44:32.129048 6263 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1209 08:44:32.129473 6263 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1209 08:44:32.129521 6263 factory.go:656] Stopping watch factory\\\\nI1209 08:44:32.129161 6263 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1209 08:44:32.129542 6263 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1209 08:44:32.129544 6263 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1209 08:44:32.129190 6263 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1209 08:44:32.129208 6263 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1209 08:44:32.129259 6263 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1209 08:44:32.129164 6263 handler.go:208] Removed *v1.Node event handler 7\\\\nI1209 08:44:32.129584 6263 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce596179b4556f7ed491d4349bc32ebdf5d3f159de5bdd52766ea3245c342b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T08:44:34Z\\\",\\\"message\\\":\\\"d openshift-multus/network-metrics-daemon-v58s4\\\\nI1209 08:44:34.176633 6522 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-network-console/networking-console-plugin]} name:Service_openshift-network-console/networking-console-plugin_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} 
selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.246:9443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ab0b1d51-5ec6-479b-8881-93dfa8d30337}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1209 08:44:34.176671 6522 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-network-console/networking-console-plugin]} name:Service_openshift-network-console/networking-console-plugin_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.246:9443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ab0b1d51-5ec6-479b-8\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\
\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7sr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:34Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.874804 4786 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.874830 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.874838 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.874854 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.874864 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:34Z","lastTransitionTime":"2025-12-09T08:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.884683 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwjmr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373c7fdc-f660-4323-a503-3e7a0dedb865\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6af8039885b93a07e1e521a89a1d240f914eb77cba9bb6db24d7ecdb1ba3954d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnjzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de62d60183229189c692a0ff01db0e36332ecde2d2711846121c899befbbde08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnjzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwjmr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:34Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.898344 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:34Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.909998 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50593d0d53a0888b66905e18d3ef9ae44e67a9d290aad760710060fb3ed14393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4ad85cbd79a4967d59f069ebadb27a1cd79229d287c2f1c9955396c71a1506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:34Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.920822 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-27hfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0a865e2-8504-473d-a23f-fc682d053a9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e7465beaa6a3df0b0ae7338d3035b58b92078b8d0b9646f5b59eccb66b4a505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-27hfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:34Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.929648 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v58s4" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f68306-ac39-4d61-8c27-12d69cc49a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhg5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhg5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v58s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:34Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:34 crc 
kubenswrapper[4786]: I1209 08:44:34.940291 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d3d4420-fb67-443a-8b97-195c4cf223e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7aff8b9b062294cc403e12f60e4e694e626aa9780ae652380afc5d5ebeb25e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ce8749c52af3d1b2b156b9adce27a80b026abdb2dc22308199f1e822ec9867\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77efb0cd063e6d566f75078ce7a1f2cfd31508d5024a101ebcb3aa3c609aaae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac36b7f7721d7c5cdcb9e2254974dab062cca2723d4e723582061ed2696e04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:34Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.951378 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:34Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.962521 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da17081a769df0b66be89013cce4c70d442cfd51b32fdb623bf94052cbcad5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-09T08:44:34Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.973481 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:34Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.976871 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.976912 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.976925 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.976945 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.976959 4786 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:34Z","lastTransitionTime":"2025-12-09T08:44:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.982763 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c502d4-7f9e-4d39-a197-fa70dc4a56d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2d2cb4aa779f69cd01c7af8dd377a93fa75bc95ded7acd32a891de283722a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de20e796c231cbc6d525978434351e46d932f7d99236a97c078b8147ce5dabd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86k5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-12-09T08:44:34Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:34 crc kubenswrapper[4786]: I1209 08:44:34.994033 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbwb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://379f087d607d19a0cb71481b2a7236d2d4769898f612849793c7d0d0f8709e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"
terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbb1ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbb1ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ec37a618991f1e7e06c04c95db8c40ac18b451bcb08ba568bad540481a740c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ec37a618991f1e7e06c04c95db8c40ac18b451bcb08ba568bad540481a740c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c10eb59172a342201e5c0fae9b836d296608057f6613791cab920df8aaa5850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c10eb59172a342201e5c0fae9b836d296608057f6613791cab920df8aaa5850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbwb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:34Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.004317 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4192277cec4bacdf5b03306055ddad9476d291a086f54f61e8e84690c81bb459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"starte
dAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:35Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.015867 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-prw2c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f37b1b5c-1cdc-4a08-9ea3-03dad00b5797\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d772471b770813d3613512f5748b5bbd05c06733053b007756c12b0bc41b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt6g4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-prw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:35Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.079245 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.079588 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.079658 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.079728 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.079804 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:35Z","lastTransitionTime":"2025-12-09T08:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.181741 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.181781 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.181791 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.181805 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.181815 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:35Z","lastTransitionTime":"2025-12-09T08:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.187079 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v58s4" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.187079 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.187320 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 08:44:35 crc kubenswrapper[4786]: E1209 08:44:35.187455 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v58s4" podUID="e6f68306-ac39-4d61-8c27-12d69cc49a4f" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.187479 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:44:35 crc kubenswrapper[4786]: E1209 08:44:35.187668 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 08:44:35 crc kubenswrapper[4786]: E1209 08:44:35.187796 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 08:44:35 crc kubenswrapper[4786]: E1209 08:44:35.187882 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.203285 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-27hfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0a865e2-8504-473d-a23f-fc682d053a9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e7465beaa6a3df0b0ae7338d3035b58b92078b8d0b9646f5b59eccb66b4a505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",
\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126
.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-27hfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:35Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.214595 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v58s4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f68306-ac39-4d61-8c27-12d69cc49a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhg5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhg5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v58s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:35Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:35 crc 
kubenswrapper[4786]: I1209 08:44:35.224316 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:35Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.239878 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50593d0d53a0888b66905e18d3ef9ae44e67a9d290aad760710060fb3ed14393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4ad85cbd79a4967d59f069ebadb27a1cd79229d287c2f1c9955396c71a1506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:35Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.253940 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:35Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.266009 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da17081a769df0b66be89013cce4c70d442cfd51b32fdb623bf94052cbcad5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-09T08:44:35Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.284590 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.284681 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.284712 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.284743 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.284766 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:35Z","lastTransitionTime":"2025-12-09T08:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.284591 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:35Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.295869 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c502d4-7f9e-4d39-a197-fa70dc4a56d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2d2cb4aa779f69cd01c7af8dd377a93fa75bc95ded7acd32a891de283722a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de20e796c231cbc6d525978434351e46d932f7d9
9236a97c078b8147ce5dabd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86k5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:35Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.308028 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbwb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://379f087d607d19a0cb71481b2a7236d2d4769898f612849793c7d0d0f8709e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:21Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbb1
ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbb1ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ec37a618991f1e7e06c04c95db8c40ac18b451bcb08ba568bad540481a740c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ec37a618991f1e7e06c04c95db8c40ac18b451bcb08ba568bad540481a740c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c10eb59172a342201e5c0fae9b836d296608057f6613791cab920df8aaa5850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c10eb59172a342201e5c0fae9b836d296608057f6613791cab920df8aaa5850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbwb7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:35Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.320645 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d3d4420-fb67-443a-8b97-195c4cf223e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7aff8b9b062294cc403e12f60e4e694e626aa9780ae652380afc5d5ebeb25e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ce8749c52af3d1b2b156b9adce27a80b026abdb2dc22308199f1e822ec9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77efb0cd063e6d566f75078ce7a1f2cfd31508d5024a101ebcb3aa3c609aaae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac36b7f7721d7c5cdcb9e2254974dab062cca2723d4e723582061ed2696e04c\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:35Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.332127 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4192277cec4bacdf5b03306055ddad9476d291a086f54f61e8e84690c81bb459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:35Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.343216 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-prw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f37b1b5c-1cdc-4a08-9ea3-03dad00b5797\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d772471b770813d3613512f5748b5bbd05c06733053b007756c12b0bc41b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt6g4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-prw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:35Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.352990 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mq8pp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a680f02-0367-4814-9c73-aa5959bf952f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c30204fb4323c7780efec70dabc9074bcda3d7a20c81af161920f5c74075b4b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98fg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mq8pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:35Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.374836 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce596179b4556f7ed491d4349bc32ebdf5d3f159de5bdd52766ea3245c342b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8671a7441f6025ed84963005731973ef4ff4215eadfddd1d856aa80a5a059dd6\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T08:44:32Z\\\",\\\"message\\\":\\\"ping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 08:44:32.128839 6263 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 08:44:32.129048 6263 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1209 08:44:32.129473 6263 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1209 08:44:32.129521 6263 factory.go:656] Stopping watch factory\\\\nI1209 08:44:32.129161 6263 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1209 08:44:32.129542 6263 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1209 08:44:32.129544 6263 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1209 08:44:32.129190 6263 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1209 08:44:32.129208 6263 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1209 08:44:32.129259 6263 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1209 08:44:32.129164 6263 handler.go:208] Removed *v1.Node event handler 7\\\\nI1209 08:44:32.129584 6263 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:25Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce596179b4556f7ed491d4349bc32ebdf5d3f159de5bdd52766ea3245c342b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T08:44:34Z\\\",\\\"message\\\":\\\"d openshift-multus/network-metrics-daemon-v58s4\\\\nI1209 08:44:34.176633 6522 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-network-console/networking-console-plugin]} name:Service_openshift-network-console/networking-console-plugin_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} 
selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.246:9443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ab0b1d51-5ec6-479b-8881-93dfa8d30337}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1209 08:44:34.176671 6522 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-network-console/networking-console-plugin]} name:Service_openshift-network-console/networking-console-plugin_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.246:9443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ab0b1d51-5ec6-479b-8\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\
\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\
\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7sr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:35Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.386956 4786 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.387015 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.387025 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.387040 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.387050 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:35Z","lastTransitionTime":"2025-12-09T08:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.387967 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwjmr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373c7fdc-f660-4323-a503-3e7a0dedb865\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6af8039885b93a07e1e521a89a1d240f914eb77cba9bb6db24d7ecdb1ba3954d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnjzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de62d60183229189c692a0ff01db0e36332ecde2d2711846121c899befbbde08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnjzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwjmr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:35Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.401669 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd790c5e-461b-4682-88a8-e915bb3558fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56160ed128040dde583c3142616d9af9180427b9e32520d6330fd166e9c4207b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-a
piserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57fe092832b333e8fa6389429ad3fd571c3e0162f619fdd69efcb0b84e88bac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b3741b79ca6fb3fc12acaa290b9d7accb765f36a88f0b8fb6e6362c6596a58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:
9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T08:44:14Z\\\",\\\"message\\\":\\\"file observer\\\\nW1209 08:44:13.973493 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 08:44:13.973680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 08:44:13.974644 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-881789052/tls.crt::/tmp/serving-cert-881789052/tls.key\\\\\\\"\\\\nI1209 08:44:14.287293 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 08:44:14.289941 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 08:44:14.289961 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 08:44:14.289980 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 08:44:14.289985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 08:44:14.295151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 08:44:14.295345 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1209 08:44:14.295166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1209 08:44:14.295383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 08:44:14.295466 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 
08:44:14.295492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 08:44:14.295497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 08:44:14.295501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1209 08:44:14.297957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a164888526a3ab84fbd409540bc54403fadd586a0583e323d5ca1b10c81cbfc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:35Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.489981 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.490028 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.490039 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.490057 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:35 crc 
kubenswrapper[4786]: I1209 08:44:35.490069 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:35Z","lastTransitionTime":"2025-12-09T08:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.526087 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e6f68306-ac39-4d61-8c27-12d69cc49a4f-metrics-certs\") pod \"network-metrics-daemon-v58s4\" (UID: \"e6f68306-ac39-4d61-8c27-12d69cc49a4f\") " pod="openshift-multus/network-metrics-daemon-v58s4" Dec 09 08:44:35 crc kubenswrapper[4786]: E1209 08:44:35.526314 4786 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 08:44:35 crc kubenswrapper[4786]: E1209 08:44:35.526409 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6f68306-ac39-4d61-8c27-12d69cc49a4f-metrics-certs podName:e6f68306-ac39-4d61-8c27-12d69cc49a4f nodeName:}" failed. No retries permitted until 2025-12-09 08:44:39.526383047 +0000 UTC m=+45.410004313 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e6f68306-ac39-4d61-8c27-12d69cc49a4f-metrics-certs") pod "network-metrics-daemon-v58s4" (UID: "e6f68306-ac39-4d61-8c27-12d69cc49a4f") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.592168 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.592216 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.592225 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.592239 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.592250 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:35Z","lastTransitionTime":"2025-12-09T08:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.694890 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.694950 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.694958 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.694974 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.694985 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:35Z","lastTransitionTime":"2025-12-09T08:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.797616 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.797673 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.797685 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.797707 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.797719 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:35Z","lastTransitionTime":"2025-12-09T08:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.824414 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7sr4q_c8ebe4be-af09-4f22-9dee-af5f7d34bccf/ovnkube-controller/1.log" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.829629 4786 scope.go:117] "RemoveContainer" containerID="cce596179b4556f7ed491d4349bc32ebdf5d3f159de5bdd52766ea3245c342b4" Dec 09 08:44:35 crc kubenswrapper[4786]: E1209 08:44:35.830078 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-7sr4q_openshift-ovn-kubernetes(c8ebe4be-af09-4f22-9dee-af5f7d34bccf)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" podUID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.844800 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:35Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.857442 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50593d0d53a0888b66905e18d3ef9ae44e67a9d290aad760710060fb3ed14393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4ad85cbd79a4967d59f069ebadb27a1cd79229d287c2f1c9955396c71a1506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:35Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.869823 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-27hfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0a865e2-8504-473d-a23f-fc682d053a9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e7465beaa6a3df0b0ae7338d3035b58b92078b8d0b9646f5b59eccb66b4a505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-27hfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:35Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.880365 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v58s4" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f68306-ac39-4d61-8c27-12d69cc49a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhg5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhg5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v58s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:35Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:35 crc 
kubenswrapper[4786]: I1209 08:44:35.891553 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d3d4420-fb67-443a-8b97-195c4cf223e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7aff8b9b062294cc403e12f60e4e694e626aa9780ae652380afc5d5ebeb25e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ce8749c52af3d1b2b156b9adce27a80b026abdb2dc22308199f1e822ec9867\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77efb0cd063e6d566f75078ce7a1f2cfd31508d5024a101ebcb3aa3c609aaae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac36b7f7721d7c5cdcb9e2254974dab062cca2723d4e723582061ed2696e04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:35Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.899759 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.899816 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.899829 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.899844 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.899854 4786 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:35Z","lastTransitionTime":"2025-12-09T08:44:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.903314 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:35Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.913289 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da17081a769df0b66be89013cce4c70d442cfd51b32fdb623bf94052cbcad5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-09T08:44:35Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.923399 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:35Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.939151 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c502d4-7f9e-4d39-a197-fa70dc4a56d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2d2cb4aa779f69cd01c7af8dd377a93fa75bc95ded7acd32a891de283722a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de20e796c231cbc6d525978434351e46d932f7d9
9236a97c078b8147ce5dabd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86k5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:35Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.959855 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbwb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://379f087d607d19a0cb71481b2a7236d2d4769898f612849793c7d0d0f8709e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:21Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbb1
ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbb1ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ec37a618991f1e7e06c04c95db8c40ac18b451bcb08ba568bad540481a740c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ec37a618991f1e7e06c04c95db8c40ac18b451bcb08ba568bad540481a740c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c10eb59172a342201e5c0fae9b836d296608057f6613791cab920df8aaa5850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c10eb59172a342201e5c0fae9b836d296608057f6613791cab920df8aaa5850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbwb7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:35Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.973189 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4192277cec4bacdf5b03306055ddad9476d291a086f54f61e8e84690c81bb459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\
\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:35Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.984722 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-prw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f37b1b5c-1cdc-4a08-9ea3-03dad00b5797\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d772471b770813d3613512f5748b5bbd05c06733053b007756c12b0bc41b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc
086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt6g4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-prw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:35Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:35 crc kubenswrapper[4786]: I1209 08:44:35.998840 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd790c5e-461b-4682-88a8-e915bb3558fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56160ed128040dde583c3142616d9af9180427b9e32520d6330fd166e9c4207b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57fe092832b333e8fa6389429ad3fd571c3e0162f619fdd69efcb0b84e88bac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b3741b79ca6fb3fc12acaa290b9d7accb765f36a88f0b8fb6e6362c6596a58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T08:44:14Z\\\",\\\"message\\\":\\\"file observer\\\\nW1209 08:44:13.973493 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 08:44:13.973680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 08:44:13.974644 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-881789052/tls.crt::/tmp/serving-cert-881789052/tls.key\\\\\\\"\\\\nI1209 08:44:14.287293 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 08:44:14.289941 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 08:44:14.289961 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 08:44:14.289980 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 08:44:14.289985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 08:44:14.295151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 08:44:14.295345 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1209 08:44:14.295166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1209 08:44:14.295383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 08:44:14.295466 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 
08:44:14.295492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 08:44:14.295497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 08:44:14.295501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1209 08:44:14.297957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a164888526a3ab84fbd409540bc54403fadd586a0583e323d5ca1b10c81cbfc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:35Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:36 crc kubenswrapper[4786]: I1209 08:44:36.002514 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:36 crc kubenswrapper[4786]: I1209 08:44:36.002580 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:36 crc kubenswrapper[4786]: I1209 08:44:36.002604 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:36 crc kubenswrapper[4786]: I1209 08:44:36.002634 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:36 crc 
kubenswrapper[4786]: I1209 08:44:36.002656 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:36Z","lastTransitionTime":"2025-12-09T08:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:36 crc kubenswrapper[4786]: I1209 08:44:36.008221 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mq8pp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a680f02-0367-4814-9c73-aa5959bf952f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c30204fb4323c7780efec70dabc9074bcda3d7a20c81af161920f5c74075b4b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1
b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98fg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mq8pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:36Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:36 crc kubenswrapper[4786]: I1209 08:44:36.028992 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce596179b4556f7ed491d4349bc32ebdf5d3f159de5bdd52766ea3245c342b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce596179b4556f7ed491d4349bc32ebdf5d3f159de5bdd52766ea3245c342b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T08:44:34Z\\\",\\\"message\\\":\\\"d openshift-multus/network-metrics-daemon-v58s4\\\\nI1209 08:44:34.176633 6522 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-network-console/networking-console-plugin]} 
name:Service_openshift-network-console/networking-console-plugin_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.246:9443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ab0b1d51-5ec6-479b-8881-93dfa8d30337}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1209 08:44:34.176671 6522 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-network-console/networking-console-plugin]} name:Service_openshift-network-console/networking-console-plugin_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.246:9443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ab0b1d51-5ec6-479b-8\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7sr4q_openshift-ovn-kubernetes(c8ebe4be-af09-4f22-9dee-af5f7d34bccf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63
813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7sr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:36Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:36 crc kubenswrapper[4786]: I1209 08:44:36.040412 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwjmr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"373c7fdc-f660-4323-a503-3e7a0dedb865\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6af8039885b93a07e1e521a89a1d240f914eb77cba9bb6db24d7ecdb1ba3954d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnjzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de62d60183229189c692a0ff01db0e36332ec
de2d2711846121c899befbbde08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnjzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwjmr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:36Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:36 crc kubenswrapper[4786]: I1209 08:44:36.105780 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:36 crc kubenswrapper[4786]: I1209 08:44:36.105849 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:36 crc kubenswrapper[4786]: I1209 08:44:36.105861 4786 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:36 crc kubenswrapper[4786]: I1209 08:44:36.105887 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:36 crc kubenswrapper[4786]: I1209 08:44:36.105904 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:36Z","lastTransitionTime":"2025-12-09T08:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:36 crc kubenswrapper[4786]: I1209 08:44:36.209101 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:36 crc kubenswrapper[4786]: I1209 08:44:36.209137 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:36 crc kubenswrapper[4786]: I1209 08:44:36.209145 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:36 crc kubenswrapper[4786]: I1209 08:44:36.209160 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:36 crc kubenswrapper[4786]: I1209 08:44:36.209170 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:36Z","lastTransitionTime":"2025-12-09T08:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:36 crc kubenswrapper[4786]: I1209 08:44:36.311806 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:36 crc kubenswrapper[4786]: I1209 08:44:36.311861 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:36 crc kubenswrapper[4786]: I1209 08:44:36.311876 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:36 crc kubenswrapper[4786]: I1209 08:44:36.311895 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:36 crc kubenswrapper[4786]: I1209 08:44:36.311908 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:36Z","lastTransitionTime":"2025-12-09T08:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:36 crc kubenswrapper[4786]: I1209 08:44:36.414942 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:36 crc kubenswrapper[4786]: I1209 08:44:36.414985 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:36 crc kubenswrapper[4786]: I1209 08:44:36.414997 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:36 crc kubenswrapper[4786]: I1209 08:44:36.415017 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:36 crc kubenswrapper[4786]: I1209 08:44:36.415033 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:36Z","lastTransitionTime":"2025-12-09T08:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:36 crc kubenswrapper[4786]: I1209 08:44:36.517873 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:36 crc kubenswrapper[4786]: I1209 08:44:36.517921 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:36 crc kubenswrapper[4786]: I1209 08:44:36.517932 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:36 crc kubenswrapper[4786]: I1209 08:44:36.517948 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:36 crc kubenswrapper[4786]: I1209 08:44:36.517959 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:36Z","lastTransitionTime":"2025-12-09T08:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:36 crc kubenswrapper[4786]: I1209 08:44:36.621468 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:36 crc kubenswrapper[4786]: I1209 08:44:36.621508 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:36 crc kubenswrapper[4786]: I1209 08:44:36.621519 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:36 crc kubenswrapper[4786]: I1209 08:44:36.621537 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:36 crc kubenswrapper[4786]: I1209 08:44:36.621549 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:36Z","lastTransitionTime":"2025-12-09T08:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:36 crc kubenswrapper[4786]: I1209 08:44:36.725218 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:36 crc kubenswrapper[4786]: I1209 08:44:36.725277 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:36 crc kubenswrapper[4786]: I1209 08:44:36.725295 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:36 crc kubenswrapper[4786]: I1209 08:44:36.725319 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:36 crc kubenswrapper[4786]: I1209 08:44:36.725337 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:36Z","lastTransitionTime":"2025-12-09T08:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:36 crc kubenswrapper[4786]: I1209 08:44:36.831563 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:36 crc kubenswrapper[4786]: I1209 08:44:36.831688 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:36 crc kubenswrapper[4786]: I1209 08:44:36.831710 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:36 crc kubenswrapper[4786]: I1209 08:44:36.831749 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:36 crc kubenswrapper[4786]: I1209 08:44:36.831778 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:36Z","lastTransitionTime":"2025-12-09T08:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:36 crc kubenswrapper[4786]: I1209 08:44:36.935627 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:36 crc kubenswrapper[4786]: I1209 08:44:36.935684 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:36 crc kubenswrapper[4786]: I1209 08:44:36.935697 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:36 crc kubenswrapper[4786]: I1209 08:44:36.935715 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:36 crc kubenswrapper[4786]: I1209 08:44:36.935727 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:36Z","lastTransitionTime":"2025-12-09T08:44:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.038186 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.038250 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.038262 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.038282 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.038293 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:37Z","lastTransitionTime":"2025-12-09T08:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.140560 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.140609 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.140620 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.140635 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.140649 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:37Z","lastTransitionTime":"2025-12-09T08:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.189634 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:44:37 crc kubenswrapper[4786]: E1209 08:44:37.189804 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.190294 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 08:44:37 crc kubenswrapper[4786]: E1209 08:44:37.190353 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.190997 4786 scope.go:117] "RemoveContainer" containerID="ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e" Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.191376 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 08:44:37 crc kubenswrapper[4786]: E1209 08:44:37.191470 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.191623 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-v58s4" Dec 09 08:44:37 crc kubenswrapper[4786]: E1209 08:44:37.191687 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v58s4" podUID="e6f68306-ac39-4d61-8c27-12d69cc49a4f" Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.247766 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.247807 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.247821 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.247838 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.247847 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:37Z","lastTransitionTime":"2025-12-09T08:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.350862 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.350907 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.350916 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.350929 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.350937 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:37Z","lastTransitionTime":"2025-12-09T08:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.453171 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.453725 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.453738 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.453758 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.453772 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:37Z","lastTransitionTime":"2025-12-09T08:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.556677 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.556732 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.556744 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.556761 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.556771 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:37Z","lastTransitionTime":"2025-12-09T08:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.658939 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.659240 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.659348 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.659436 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.659513 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:37Z","lastTransitionTime":"2025-12-09T08:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.762913 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.763370 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.763531 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.763658 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.763755 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:37Z","lastTransitionTime":"2025-12-09T08:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.839850 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.841150 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4f6f4abb61df9828cb9bec57c2f2fc4709a6cf12afc72467af5466723a4335f3"} Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.841449 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.856029 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mq8pp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a680f02-0367-4814-9c73-aa5959bf952f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c30204fb4323c7780efec70dabc9074b
cda3d7a20c81af161920f5c74075b4b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98fg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mq8pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:37Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.866875 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.866935 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.866949 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 
08:44:37.866968 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.866982 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:37Z","lastTransitionTime":"2025-12-09T08:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.876363 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce596179b4556f7ed491d4349bc32ebdf5d3f159de5bdd52766ea3245c342b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce596179b4556f7ed491d4349bc32ebdf5d3f159de5bdd52766ea3245c342b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T08:44:34Z\\\",\\\"message\\\":\\\"d openshift-multus/network-metrics-daemon-v58s4\\\\nI1209 08:44:34.176633 6522 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-network-console/networking-console-plugin]} 
name:Service_openshift-network-console/networking-console-plugin_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.246:9443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ab0b1d51-5ec6-479b-8881-93dfa8d30337}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1209 08:44:34.176671 6522 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-network-console/networking-console-plugin]} name:Service_openshift-network-console/networking-console-plugin_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.246:9443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ab0b1d51-5ec6-479b-8\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7sr4q_openshift-ovn-kubernetes(c8ebe4be-af09-4f22-9dee-af5f7d34bccf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63
813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7sr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:37Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.890927 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwjmr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"373c7fdc-f660-4323-a503-3e7a0dedb865\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6af8039885b93a07e1e521a89a1d240f914eb77cba9bb6db24d7ecdb1ba3954d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnjzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de62d60183229189c692a0ff01db0e36332ec
de2d2711846121c899befbbde08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnjzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwjmr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:37Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.907290 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd790c5e-461b-4682-88a8-e915bb3558fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56160ed128040dde583c3142616d9af9180427b9e32520d6330fd166e9c4207b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57fe092832b333e8fa6389429ad3fd571c3e0162f619fdd69efcb0b84e88bac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b3741b79ca6fb3fc12acaa290b9d7accb765f36a88f0b8fb6e6362c6596a58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f6f4abb61df9828cb9bec57c2f2fc4709a6cf12afc72467af5466723a4335f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T08:44:14Z\\\",\\\"message\\\":\\\"file observer\\\\nW1209 08:44:13.973493 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 08:44:13.973680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 08:44:13.974644 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-881789052/tls.crt::/tmp/serving-cert-881789052/tls.key\\\\\\\"\\\\nI1209 08:44:14.287293 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 08:44:14.289941 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 08:44:14.289961 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 08:44:14.289980 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 08:44:14.289985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 08:44:14.295151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 08:44:14.295345 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1209 08:44:14.295166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1209 08:44:14.295383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 08:44:14.295466 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 
08:44:14.295492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 08:44:14.295497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 08:44:14.295501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1209 08:44:14.297957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a164888526a3ab84fbd409540bc54403fadd586a0583e323d5ca1b10c81cbfc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:37Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.923114 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v58s4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f68306-ac39-4d61-8c27-12d69cc49a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhg5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhg5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v58s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:37Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:37 crc 
kubenswrapper[4786]: I1209 08:44:37.937505 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:37Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.952745 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50593d0d53a0888b66905e18d3ef9ae44e67a9d290aad760710060fb3ed14393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4ad85cbd79a4967d59f069ebadb27a1cd79229d287c2f1c9955396c71a1506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:37Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.969203 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-27hfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0a865e2-8504-473d-a23f-fc682d053a9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e7465beaa6a3df0b0ae7338d3035b58b92078b8d0b9646f5b59eccb66b4a505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-27hfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:37Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.969841 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:37 crc 
kubenswrapper[4786]: I1209 08:44:37.969890 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.969901 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.969921 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.969932 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:37Z","lastTransitionTime":"2025-12-09T08:44:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.983491 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da17081a769df0b66be89013cce4c70d442cfd51b32fdb623bf94052cbcad5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-09T08:44:37Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:37 crc kubenswrapper[4786]: I1209 08:44:37.998791 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:37Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.010254 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c502d4-7f9e-4d39-a197-fa70dc4a56d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2d2cb4aa779f69cd01c7af8dd377a93fa75bc95ded7acd32a891de283722a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de20e796c231cbc6d525978434351e46d932f7d9
9236a97c078b8147ce5dabd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86k5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:38Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.023390 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbwb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://379f087d607d19a0cb71481b2a7236d2d4769898f612849793c7d0d0f8709e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:21Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbb1
ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbb1ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ec37a618991f1e7e06c04c95db8c40ac18b451bcb08ba568bad540481a740c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ec37a618991f1e7e06c04c95db8c40ac18b451bcb08ba568bad540481a740c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c10eb59172a342201e5c0fae9b836d296608057f6613791cab920df8aaa5850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c10eb59172a342201e5c0fae9b836d296608057f6613791cab920df8aaa5850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbwb7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:38Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.035513 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d3d4420-fb67-443a-8b97-195c4cf223e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7aff8b9b062294cc403e12f60e4e694e626aa9780ae652380afc5d5ebeb25e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ce8749c52af3d1b2b156b9adce27a80b026abdb2dc22308199f1e822ec9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77efb0cd063e6d566f75078ce7a1f2cfd31508d5024a101ebcb3aa3c609aaae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac36b7f7721d7c5cdcb9e2254974dab062cca2723d4e723582061ed2696e04c\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:38Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.052321 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:38Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.072644 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4192277cec4bacdf5b03306055ddad9476d291a086f54f61e8e84690c81bb459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:38Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.073953 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.074003 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.074023 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.074046 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.074060 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:38Z","lastTransitionTime":"2025-12-09T08:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.089323 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-prw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f37b1b5c-1cdc-4a08-9ea3-03dad00b5797\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d772471b770813d3613512f5748b5bbd05c06733053b007756c12b0bc41b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt6g4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-prw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:38Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.177716 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.177789 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.177804 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.177824 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.177835 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:38Z","lastTransitionTime":"2025-12-09T08:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.222158 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:44:38 crc kubenswrapper[4786]: E1209 08:44:38.222362 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.280721 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.280769 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.280782 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.280801 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.280813 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:38Z","lastTransitionTime":"2025-12-09T08:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.384565 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.384608 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.384618 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.384634 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.384645 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:38Z","lastTransitionTime":"2025-12-09T08:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.487843 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.487907 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.487919 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.487940 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.487953 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:38Z","lastTransitionTime":"2025-12-09T08:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.591470 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.591547 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.591561 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.591586 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.591600 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:38Z","lastTransitionTime":"2025-12-09T08:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.694449 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.694531 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.694548 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.694577 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.694596 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:38Z","lastTransitionTime":"2025-12-09T08:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.796979 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.797017 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.797029 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.797047 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.797059 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:38Z","lastTransitionTime":"2025-12-09T08:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.844091 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.844178 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.844217 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.844238 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.844252 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:38Z","lastTransitionTime":"2025-12-09T08:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:38 crc kubenswrapper[4786]: E1209 08:44:38.859902 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"20132b6b-eea8-47f7-95fa-f658d05fe362\\\",\\\"systemUUID\\\":\\\"d02a19a2-cd20-41fc-84e9-65362968df1f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:38Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.865379 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.865479 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.865497 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.865520 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.865533 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:38Z","lastTransitionTime":"2025-12-09T08:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:38 crc kubenswrapper[4786]: E1209 08:44:38.887779 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"20132b6b-eea8-47f7-95fa-f658d05fe362\\\",\\\"systemUUID\\\":\\\"d02a19a2-cd20-41fc-84e9-65362968df1f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:38Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.891970 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.892014 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.892028 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.892046 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.892061 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:38Z","lastTransitionTime":"2025-12-09T08:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:38 crc kubenswrapper[4786]: E1209 08:44:38.905217 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"20132b6b-eea8-47f7-95fa-f658d05fe362\\\",\\\"systemUUID\\\":\\\"d02a19a2-cd20-41fc-84e9-65362968df1f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:38Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.909037 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.909083 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.909095 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.909113 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.909126 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:38Z","lastTransitionTime":"2025-12-09T08:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:38 crc kubenswrapper[4786]: E1209 08:44:38.923538 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"20132b6b-eea8-47f7-95fa-f658d05fe362\\\",\\\"systemUUID\\\":\\\"d02a19a2-cd20-41fc-84e9-65362968df1f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:38Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.928718 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.928798 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.928821 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.928846 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.928864 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:38Z","lastTransitionTime":"2025-12-09T08:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:38 crc kubenswrapper[4786]: E1209 08:44:38.949502 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:38Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:38Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:38Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:38Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"20132b6b-eea8-47f7-95fa-f658d05fe362\\\",\\\"systemUUID\\\":\\\"d02a19a2-cd20-41fc-84e9-65362968df1f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:38Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:38 crc kubenswrapper[4786]: E1209 08:44:38.949687 4786 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.951959 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.952028 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.952051 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.952090 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:38 crc kubenswrapper[4786]: I1209 08:44:38.952112 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:38Z","lastTransitionTime":"2025-12-09T08:44:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:39 crc kubenswrapper[4786]: I1209 08:44:39.055311 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:39 crc kubenswrapper[4786]: I1209 08:44:39.055380 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:39 crc kubenswrapper[4786]: I1209 08:44:39.055401 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:39 crc kubenswrapper[4786]: I1209 08:44:39.055470 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:39 crc kubenswrapper[4786]: I1209 08:44:39.055495 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:39Z","lastTransitionTime":"2025-12-09T08:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:39 crc kubenswrapper[4786]: I1209 08:44:39.157717 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:39 crc kubenswrapper[4786]: I1209 08:44:39.157819 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:39 crc kubenswrapper[4786]: I1209 08:44:39.157836 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:39 crc kubenswrapper[4786]: I1209 08:44:39.157857 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:39 crc kubenswrapper[4786]: I1209 08:44:39.157868 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:39Z","lastTransitionTime":"2025-12-09T08:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:39 crc kubenswrapper[4786]: I1209 08:44:39.187746 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 08:44:39 crc kubenswrapper[4786]: I1209 08:44:39.187753 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v58s4" Dec 09 08:44:39 crc kubenswrapper[4786]: I1209 08:44:39.187767 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 08:44:39 crc kubenswrapper[4786]: E1209 08:44:39.187914 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 08:44:39 crc kubenswrapper[4786]: E1209 08:44:39.188064 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v58s4" podUID="e6f68306-ac39-4d61-8c27-12d69cc49a4f" Dec 09 08:44:39 crc kubenswrapper[4786]: E1209 08:44:39.188145 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 08:44:39 crc kubenswrapper[4786]: I1209 08:44:39.260791 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:39 crc kubenswrapper[4786]: I1209 08:44:39.260863 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:39 crc kubenswrapper[4786]: I1209 08:44:39.260885 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:39 crc kubenswrapper[4786]: I1209 08:44:39.260915 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:39 crc kubenswrapper[4786]: I1209 08:44:39.260938 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:39Z","lastTransitionTime":"2025-12-09T08:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:39 crc kubenswrapper[4786]: I1209 08:44:39.363301 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:39 crc kubenswrapper[4786]: I1209 08:44:39.363373 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:39 crc kubenswrapper[4786]: I1209 08:44:39.363397 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:39 crc kubenswrapper[4786]: I1209 08:44:39.363459 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:39 crc kubenswrapper[4786]: I1209 08:44:39.363482 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:39Z","lastTransitionTime":"2025-12-09T08:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:39 crc kubenswrapper[4786]: I1209 08:44:39.466664 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:39 crc kubenswrapper[4786]: I1209 08:44:39.466743 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:39 crc kubenswrapper[4786]: I1209 08:44:39.466766 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:39 crc kubenswrapper[4786]: I1209 08:44:39.466801 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:39 crc kubenswrapper[4786]: I1209 08:44:39.466823 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:39Z","lastTransitionTime":"2025-12-09T08:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:39 crc kubenswrapper[4786]: I1209 08:44:39.562186 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e6f68306-ac39-4d61-8c27-12d69cc49a4f-metrics-certs\") pod \"network-metrics-daemon-v58s4\" (UID: \"e6f68306-ac39-4d61-8c27-12d69cc49a4f\") " pod="openshift-multus/network-metrics-daemon-v58s4" Dec 09 08:44:39 crc kubenswrapper[4786]: E1209 08:44:39.562411 4786 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 08:44:39 crc kubenswrapper[4786]: E1209 08:44:39.562607 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6f68306-ac39-4d61-8c27-12d69cc49a4f-metrics-certs podName:e6f68306-ac39-4d61-8c27-12d69cc49a4f nodeName:}" failed. No retries permitted until 2025-12-09 08:44:47.562574014 +0000 UTC m=+53.446195280 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e6f68306-ac39-4d61-8c27-12d69cc49a4f-metrics-certs") pod "network-metrics-daemon-v58s4" (UID: "e6f68306-ac39-4d61-8c27-12d69cc49a4f") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 08:44:39 crc kubenswrapper[4786]: I1209 08:44:39.570168 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:39 crc kubenswrapper[4786]: I1209 08:44:39.570253 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:39 crc kubenswrapper[4786]: I1209 08:44:39.570272 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:39 crc kubenswrapper[4786]: I1209 08:44:39.570298 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:39 crc kubenswrapper[4786]: I1209 08:44:39.570316 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:39Z","lastTransitionTime":"2025-12-09T08:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:39 crc kubenswrapper[4786]: I1209 08:44:39.672810 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:39 crc kubenswrapper[4786]: I1209 08:44:39.672860 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:39 crc kubenswrapper[4786]: I1209 08:44:39.672875 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:39 crc kubenswrapper[4786]: I1209 08:44:39.672898 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:39 crc kubenswrapper[4786]: I1209 08:44:39.672911 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:39Z","lastTransitionTime":"2025-12-09T08:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:39 crc kubenswrapper[4786]: I1209 08:44:39.776699 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:39 crc kubenswrapper[4786]: I1209 08:44:39.776778 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:39 crc kubenswrapper[4786]: I1209 08:44:39.776799 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:39 crc kubenswrapper[4786]: I1209 08:44:39.776826 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:39 crc kubenswrapper[4786]: I1209 08:44:39.776843 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:39Z","lastTransitionTime":"2025-12-09T08:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:39 crc kubenswrapper[4786]: I1209 08:44:39.879649 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:39 crc kubenswrapper[4786]: I1209 08:44:39.879680 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:39 crc kubenswrapper[4786]: I1209 08:44:39.879693 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:39 crc kubenswrapper[4786]: I1209 08:44:39.879710 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:39 crc kubenswrapper[4786]: I1209 08:44:39.879724 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:39Z","lastTransitionTime":"2025-12-09T08:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:39 crc kubenswrapper[4786]: I1209 08:44:39.983029 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:39 crc kubenswrapper[4786]: I1209 08:44:39.983078 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:39 crc kubenswrapper[4786]: I1209 08:44:39.983090 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:39 crc kubenswrapper[4786]: I1209 08:44:39.983108 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:39 crc kubenswrapper[4786]: I1209 08:44:39.983123 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:39Z","lastTransitionTime":"2025-12-09T08:44:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:40 crc kubenswrapper[4786]: I1209 08:44:40.087014 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:40 crc kubenswrapper[4786]: I1209 08:44:40.087071 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:40 crc kubenswrapper[4786]: I1209 08:44:40.087084 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:40 crc kubenswrapper[4786]: I1209 08:44:40.087103 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:40 crc kubenswrapper[4786]: I1209 08:44:40.087116 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:40Z","lastTransitionTime":"2025-12-09T08:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:40 crc kubenswrapper[4786]: I1209 08:44:40.187380 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:44:40 crc kubenswrapper[4786]: E1209 08:44:40.187680 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 08:44:40 crc kubenswrapper[4786]: I1209 08:44:40.190770 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:40 crc kubenswrapper[4786]: I1209 08:44:40.190877 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:40 crc kubenswrapper[4786]: I1209 08:44:40.190898 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:40 crc kubenswrapper[4786]: I1209 08:44:40.191179 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:40 crc kubenswrapper[4786]: I1209 08:44:40.191235 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:40Z","lastTransitionTime":"2025-12-09T08:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:40 crc kubenswrapper[4786]: I1209 08:44:40.294584 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:40 crc kubenswrapper[4786]: I1209 08:44:40.294636 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:40 crc kubenswrapper[4786]: I1209 08:44:40.294649 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:40 crc kubenswrapper[4786]: I1209 08:44:40.294677 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:40 crc kubenswrapper[4786]: I1209 08:44:40.294693 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:40Z","lastTransitionTime":"2025-12-09T08:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:40 crc kubenswrapper[4786]: I1209 08:44:40.397773 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:40 crc kubenswrapper[4786]: I1209 08:44:40.397833 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:40 crc kubenswrapper[4786]: I1209 08:44:40.397846 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:40 crc kubenswrapper[4786]: I1209 08:44:40.397865 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:40 crc kubenswrapper[4786]: I1209 08:44:40.397876 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:40Z","lastTransitionTime":"2025-12-09T08:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:40 crc kubenswrapper[4786]: I1209 08:44:40.501030 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:40 crc kubenswrapper[4786]: I1209 08:44:40.501124 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:40 crc kubenswrapper[4786]: I1209 08:44:40.501154 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:40 crc kubenswrapper[4786]: I1209 08:44:40.501191 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:40 crc kubenswrapper[4786]: I1209 08:44:40.501216 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:40Z","lastTransitionTime":"2025-12-09T08:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:40 crc kubenswrapper[4786]: I1209 08:44:40.604354 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:40 crc kubenswrapper[4786]: I1209 08:44:40.604455 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:40 crc kubenswrapper[4786]: I1209 08:44:40.604475 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:40 crc kubenswrapper[4786]: I1209 08:44:40.604499 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:40 crc kubenswrapper[4786]: I1209 08:44:40.604517 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:40Z","lastTransitionTime":"2025-12-09T08:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:40 crc kubenswrapper[4786]: I1209 08:44:40.707340 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:40 crc kubenswrapper[4786]: I1209 08:44:40.707396 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:40 crc kubenswrapper[4786]: I1209 08:44:40.707413 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:40 crc kubenswrapper[4786]: I1209 08:44:40.707470 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:40 crc kubenswrapper[4786]: I1209 08:44:40.707489 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:40Z","lastTransitionTime":"2025-12-09T08:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:40 crc kubenswrapper[4786]: I1209 08:44:40.810594 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:40 crc kubenswrapper[4786]: I1209 08:44:40.810672 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:40 crc kubenswrapper[4786]: I1209 08:44:40.810696 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:40 crc kubenswrapper[4786]: I1209 08:44:40.810727 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:40 crc kubenswrapper[4786]: I1209 08:44:40.810751 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:40Z","lastTransitionTime":"2025-12-09T08:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:40 crc kubenswrapper[4786]: I1209 08:44:40.914101 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:40 crc kubenswrapper[4786]: I1209 08:44:40.914160 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:40 crc kubenswrapper[4786]: I1209 08:44:40.914177 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:40 crc kubenswrapper[4786]: I1209 08:44:40.914207 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:40 crc kubenswrapper[4786]: I1209 08:44:40.914229 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:40Z","lastTransitionTime":"2025-12-09T08:44:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:41 crc kubenswrapper[4786]: I1209 08:44:41.018064 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:41 crc kubenswrapper[4786]: I1209 08:44:41.018127 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:41 crc kubenswrapper[4786]: I1209 08:44:41.018150 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:41 crc kubenswrapper[4786]: I1209 08:44:41.018182 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:41 crc kubenswrapper[4786]: I1209 08:44:41.018203 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:41Z","lastTransitionTime":"2025-12-09T08:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:41 crc kubenswrapper[4786]: I1209 08:44:41.132248 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:41 crc kubenswrapper[4786]: I1209 08:44:41.132359 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:41 crc kubenswrapper[4786]: I1209 08:44:41.132378 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:41 crc kubenswrapper[4786]: I1209 08:44:41.132403 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:41 crc kubenswrapper[4786]: I1209 08:44:41.132421 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:41Z","lastTransitionTime":"2025-12-09T08:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:41 crc kubenswrapper[4786]: I1209 08:44:41.187263 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 08:44:41 crc kubenswrapper[4786]: I1209 08:44:41.187361 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v58s4" Dec 09 08:44:41 crc kubenswrapper[4786]: E1209 08:44:41.187529 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 09 08:44:41 crc kubenswrapper[4786]: I1209 08:44:41.187649 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 08:44:41 crc kubenswrapper[4786]: E1209 08:44:41.187855 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v58s4" podUID="e6f68306-ac39-4d61-8c27-12d69cc49a4f"
Dec 09 08:44:41 crc kubenswrapper[4786]: E1209 08:44:41.188030 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 09 08:44:41 crc kubenswrapper[4786]: I1209 08:44:41.235841 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:44:41 crc kubenswrapper[4786]: I1209 08:44:41.235905 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:44:41 crc kubenswrapper[4786]: I1209 08:44:41.235922 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:44:41 crc kubenswrapper[4786]: I1209 08:44:41.235946 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:44:41 crc kubenswrapper[4786]: I1209 08:44:41.235964 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:41Z","lastTransitionTime":"2025-12-09T08:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:44:41 crc kubenswrapper[4786]: I1209 08:44:41.339235 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:44:41 crc kubenswrapper[4786]: I1209 08:44:41.339284 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:44:41 crc kubenswrapper[4786]: I1209 08:44:41.339337 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:44:41 crc kubenswrapper[4786]: I1209 08:44:41.339360 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:44:41 crc kubenswrapper[4786]: I1209 08:44:41.339394 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:41Z","lastTransitionTime":"2025-12-09T08:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:44:41 crc kubenswrapper[4786]: I1209 08:44:41.442412 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:44:41 crc kubenswrapper[4786]: I1209 08:44:41.442452 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:44:41 crc kubenswrapper[4786]: I1209 08:44:41.442460 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:44:41 crc kubenswrapper[4786]: I1209 08:44:41.442474 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:44:41 crc kubenswrapper[4786]: I1209 08:44:41.442483 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:41Z","lastTransitionTime":"2025-12-09T08:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:44:41 crc kubenswrapper[4786]: I1209 08:44:41.546492 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:44:41 crc kubenswrapper[4786]: I1209 08:44:41.546581 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:44:41 crc kubenswrapper[4786]: I1209 08:44:41.546597 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:44:41 crc kubenswrapper[4786]: I1209 08:44:41.546622 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:44:41 crc kubenswrapper[4786]: I1209 08:44:41.546637 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:41Z","lastTransitionTime":"2025-12-09T08:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:44:41 crc kubenswrapper[4786]: I1209 08:44:41.649606 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:44:41 crc kubenswrapper[4786]: I1209 08:44:41.650009 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:44:41 crc kubenswrapper[4786]: I1209 08:44:41.650122 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:44:41 crc kubenswrapper[4786]: I1209 08:44:41.650210 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:44:41 crc kubenswrapper[4786]: I1209 08:44:41.650287 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:41Z","lastTransitionTime":"2025-12-09T08:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:44:41 crc kubenswrapper[4786]: I1209 08:44:41.753705 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:44:41 crc kubenswrapper[4786]: I1209 08:44:41.753757 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:44:41 crc kubenswrapper[4786]: I1209 08:44:41.753770 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:44:41 crc kubenswrapper[4786]: I1209 08:44:41.753792 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:44:41 crc kubenswrapper[4786]: I1209 08:44:41.753807 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:41Z","lastTransitionTime":"2025-12-09T08:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:44:41 crc kubenswrapper[4786]: I1209 08:44:41.856622 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:44:41 crc kubenswrapper[4786]: I1209 08:44:41.856653 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:44:41 crc kubenswrapper[4786]: I1209 08:44:41.856664 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:44:41 crc kubenswrapper[4786]: I1209 08:44:41.856678 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:44:41 crc kubenswrapper[4786]: I1209 08:44:41.856686 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:41Z","lastTransitionTime":"2025-12-09T08:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:44:41 crc kubenswrapper[4786]: I1209 08:44:41.960472 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:44:41 crc kubenswrapper[4786]: I1209 08:44:41.960521 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:44:41 crc kubenswrapper[4786]: I1209 08:44:41.960530 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:44:41 crc kubenswrapper[4786]: I1209 08:44:41.960549 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:44:41 crc kubenswrapper[4786]: I1209 08:44:41.960558 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:41Z","lastTransitionTime":"2025-12-09T08:44:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:44:42 crc kubenswrapper[4786]: I1209 08:44:42.063818 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:44:42 crc kubenswrapper[4786]: I1209 08:44:42.063870 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:44:42 crc kubenswrapper[4786]: I1209 08:44:42.063882 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:44:42 crc kubenswrapper[4786]: I1209 08:44:42.063900 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:44:42 crc kubenswrapper[4786]: I1209 08:44:42.063911 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:42Z","lastTransitionTime":"2025-12-09T08:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:44:42 crc kubenswrapper[4786]: I1209 08:44:42.166374 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:44:42 crc kubenswrapper[4786]: I1209 08:44:42.166419 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:44:42 crc kubenswrapper[4786]: I1209 08:44:42.166450 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:44:42 crc kubenswrapper[4786]: I1209 08:44:42.166470 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:44:42 crc kubenswrapper[4786]: I1209 08:44:42.166483 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:42Z","lastTransitionTime":"2025-12-09T08:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:44:42 crc kubenswrapper[4786]: I1209 08:44:42.187324 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 08:44:42 crc kubenswrapper[4786]: E1209 08:44:42.187518 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 09 08:44:42 crc kubenswrapper[4786]: I1209 08:44:42.270164 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:44:42 crc kubenswrapper[4786]: I1209 08:44:42.270206 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:44:42 crc kubenswrapper[4786]: I1209 08:44:42.270217 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:44:42 crc kubenswrapper[4786]: I1209 08:44:42.270254 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:44:42 crc kubenswrapper[4786]: I1209 08:44:42.270267 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:42Z","lastTransitionTime":"2025-12-09T08:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:44:42 crc kubenswrapper[4786]: I1209 08:44:42.372668 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:44:42 crc kubenswrapper[4786]: I1209 08:44:42.372701 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:44:42 crc kubenswrapper[4786]: I1209 08:44:42.372711 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:44:42 crc kubenswrapper[4786]: I1209 08:44:42.372727 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:44:42 crc kubenswrapper[4786]: I1209 08:44:42.372738 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:42Z","lastTransitionTime":"2025-12-09T08:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:44:42 crc kubenswrapper[4786]: I1209 08:44:42.476725 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:44:42 crc kubenswrapper[4786]: I1209 08:44:42.476823 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:44:42 crc kubenswrapper[4786]: I1209 08:44:42.476843 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:44:42 crc kubenswrapper[4786]: I1209 08:44:42.476870 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:44:42 crc kubenswrapper[4786]: I1209 08:44:42.476890 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:42Z","lastTransitionTime":"2025-12-09T08:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:44:42 crc kubenswrapper[4786]: I1209 08:44:42.580073 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:44:42 crc kubenswrapper[4786]: I1209 08:44:42.580108 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:44:42 crc kubenswrapper[4786]: I1209 08:44:42.580117 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:44:42 crc kubenswrapper[4786]: I1209 08:44:42.580131 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:44:42 crc kubenswrapper[4786]: I1209 08:44:42.580141 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:42Z","lastTransitionTime":"2025-12-09T08:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:44:42 crc kubenswrapper[4786]: I1209 08:44:42.683178 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:44:42 crc kubenswrapper[4786]: I1209 08:44:42.683239 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:44:42 crc kubenswrapper[4786]: I1209 08:44:42.683255 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:44:42 crc kubenswrapper[4786]: I1209 08:44:42.683279 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:44:42 crc kubenswrapper[4786]: I1209 08:44:42.683294 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:42Z","lastTransitionTime":"2025-12-09T08:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:44:42 crc kubenswrapper[4786]: I1209 08:44:42.786944 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:44:42 crc kubenswrapper[4786]: I1209 08:44:42.787005 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:44:42 crc kubenswrapper[4786]: I1209 08:44:42.787022 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:44:42 crc kubenswrapper[4786]: I1209 08:44:42.787047 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:44:42 crc kubenswrapper[4786]: I1209 08:44:42.787064 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:42Z","lastTransitionTime":"2025-12-09T08:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:44:42 crc kubenswrapper[4786]: I1209 08:44:42.889554 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:44:42 crc kubenswrapper[4786]: I1209 08:44:42.889596 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:44:42 crc kubenswrapper[4786]: I1209 08:44:42.889605 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:44:42 crc kubenswrapper[4786]: I1209 08:44:42.889627 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:44:42 crc kubenswrapper[4786]: I1209 08:44:42.889638 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:42Z","lastTransitionTime":"2025-12-09T08:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:44:42 crc kubenswrapper[4786]: I1209 08:44:42.993203 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:44:42 crc kubenswrapper[4786]: I1209 08:44:42.993304 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:44:42 crc kubenswrapper[4786]: I1209 08:44:42.993327 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:44:42 crc kubenswrapper[4786]: I1209 08:44:42.993358 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:44:42 crc kubenswrapper[4786]: I1209 08:44:42.993380 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:42Z","lastTransitionTime":"2025-12-09T08:44:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.096603 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.096668 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.096697 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.096722 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.096738 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:43Z","lastTransitionTime":"2025-12-09T08:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.187405 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 08:44:43 crc kubenswrapper[4786]: E1209 08:44:43.187568 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.187575 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 08:44:43 crc kubenswrapper[4786]: E1209 08:44:43.187786 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.187942 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v58s4"
Dec 09 08:44:43 crc kubenswrapper[4786]: E1209 08:44:43.188045 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v58s4" podUID="e6f68306-ac39-4d61-8c27-12d69cc49a4f"
Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.199080 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.199139 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.199153 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.199173 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.199187 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:43Z","lastTransitionTime":"2025-12-09T08:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.302519 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.302588 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.302604 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.302632 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.302647 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:43Z","lastTransitionTime":"2025-12-09T08:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.406288 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.406355 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.406372 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.406396 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.406414 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:43Z","lastTransitionTime":"2025-12-09T08:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.509119 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.509163 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.509173 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.509190 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.509202 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:43Z","lastTransitionTime":"2025-12-09T08:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.611826 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.611879 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.611892 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.611911 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.611924 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:43Z","lastTransitionTime":"2025-12-09T08:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.720261 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.720353 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.720372 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.720402 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.720524 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:43Z","lastTransitionTime":"2025-12-09T08:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.797409 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.809882 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mq8pp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a680f02-0367-4814-9c73-aa5959bf952f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c30204fb4323c7780efec70dabc9074bcda3d7a20c81af161920f5c74075b4b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98fg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mq8pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:43Z is after 2025-08-24T17:21:41Z"
Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.810657 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.823287 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.823330 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.823342 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.823363 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.823377 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:43Z","lastTransitionTime":"2025-12-09T08:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.834902 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce596179b4556f7ed491d4349bc32ebdf5d3f159de5bdd52766ea3245c342b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce596179b4556f7ed491d4349bc32ebdf5d3f159de5bdd52766ea3245c342b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T08:44:34Z\\\",\\\"message\\\":\\\"d openshift-multus/network-metrics-daemon-v58s4\\\\nI1209 08:44:34.176633 6522 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-network-console/networking-console-plugin]} 
name:Service_openshift-network-console/networking-console-plugin_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.246:9443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ab0b1d51-5ec6-479b-8881-93dfa8d30337}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1209 08:44:34.176671 6522 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-network-console/networking-console-plugin]} name:Service_openshift-network-console/networking-console-plugin_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.246:9443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ab0b1d51-5ec6-479b-8\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7sr4q_openshift-ovn-kubernetes(c8ebe4be-af09-4f22-9dee-af5f7d34bccf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63
813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7sr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:43Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.849958 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwjmr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"373c7fdc-f660-4323-a503-3e7a0dedb865\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6af8039885b93a07e1e521a89a1d240f914eb77cba9bb6db24d7ecdb1ba3954d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnjzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de62d60183229189c692a0ff01db0e36332ec
de2d2711846121c899befbbde08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnjzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwjmr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:43Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.865533 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd790c5e-461b-4682-88a8-e915bb3558fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56160ed128040dde583c3142616d9af9180427b9e32520d6330fd166e9c4207b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57fe092832b333e8fa6389429ad3fd571c3e0162f619fdd69efcb0b84e88bac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b3741b79ca6fb3fc12acaa290b9d7accb765f36a88f0b8fb6e6362c6596a58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f6f4abb61df9828cb9bec57c2f2fc4709a6cf12afc72467af5466723a4335f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T08:44:14Z\\\",\\\"message\\\":\\\"file observer\\\\nW1209 08:44:13.973493 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 08:44:13.973680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 08:44:13.974644 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-881789052/tls.crt::/tmp/serving-cert-881789052/tls.key\\\\\\\"\\\\nI1209 08:44:14.287293 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 08:44:14.289941 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 08:44:14.289961 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 08:44:14.289980 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 08:44:14.289985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 08:44:14.295151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 08:44:14.295345 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1209 08:44:14.295166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1209 08:44:14.295383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 08:44:14.295466 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 
08:44:14.295492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 08:44:14.295497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 08:44:14.295501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1209 08:44:14.297957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a164888526a3ab84fbd409540bc54403fadd586a0583e323d5ca1b10c81cbfc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:43Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.880978 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v58s4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f68306-ac39-4d61-8c27-12d69cc49a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhg5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhg5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v58s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:43Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:43 crc 
kubenswrapper[4786]: I1209 08:44:43.894291 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:43Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.908080 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50593d0d53a0888b66905e18d3ef9ae44e67a9d290aad760710060fb3ed14393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4ad85cbd79a4967d59f069ebadb27a1cd79229d287c2f1c9955396c71a1506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:43Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.922139 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-27hfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0a865e2-8504-473d-a23f-fc682d053a9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e7465beaa6a3df0b0ae7338d3035b58b92078b8d0b9646f5b59eccb66b4a505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-27hfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:43Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.925996 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:43 crc 
kubenswrapper[4786]: I1209 08:44:43.926040 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.926052 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.926072 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.926085 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:43Z","lastTransitionTime":"2025-12-09T08:44:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.937925 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da17081a769df0b66be89013cce4c70d442cfd51b32fdb623bf94052cbcad5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-09T08:44:43Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.951286 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:43Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.965523 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c502d4-7f9e-4d39-a197-fa70dc4a56d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2d2cb4aa779f69cd01c7af8dd377a93fa75bc95ded7acd32a891de283722a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de20e796c231cbc6d525978434351e46d932f7d9
9236a97c078b8147ce5dabd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86k5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:43Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.981569 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbwb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://379f087d607d19a0cb71481b2a7236d2d4769898f612849793c7d0d0f8709e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:21Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbb1
ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbb1ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ec37a618991f1e7e06c04c95db8c40ac18b451bcb08ba568bad540481a740c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ec37a618991f1e7e06c04c95db8c40ac18b451bcb08ba568bad540481a740c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c10eb59172a342201e5c0fae9b836d296608057f6613791cab920df8aaa5850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c10eb59172a342201e5c0fae9b836d296608057f6613791cab920df8aaa5850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbwb7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:43Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:43 crc kubenswrapper[4786]: I1209 08:44:43.994790 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d3d4420-fb67-443a-8b97-195c4cf223e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7aff8b9b062294cc403e12f60e4e694e626aa9780ae652380afc5d5ebeb25e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ce8749c52af3d1b2b156b9adce27a80b026abdb2dc22308199f1e822ec9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77efb0cd063e6d566f75078ce7a1f2cfd31508d5024a101ebcb3aa3c609aaae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac36b7f7721d7c5cdcb9e2254974dab062cca2723d4e723582061ed2696e04c\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:43Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:44 crc kubenswrapper[4786]: I1209 08:44:44.008377 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:44Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:44 crc kubenswrapper[4786]: I1209 08:44:44.021390 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4192277cec4bacdf5b03306055ddad9476d291a086f54f61e8e84690c81bb459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:44Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:44 crc kubenswrapper[4786]: I1209 08:44:44.028742 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:44 crc kubenswrapper[4786]: I1209 08:44:44.028788 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:44 crc kubenswrapper[4786]: I1209 08:44:44.028801 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:44 crc kubenswrapper[4786]: I1209 08:44:44.028820 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:44 crc kubenswrapper[4786]: I1209 08:44:44.028832 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:44Z","lastTransitionTime":"2025-12-09T08:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:44 crc kubenswrapper[4786]: I1209 08:44:44.036893 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-prw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f37b1b5c-1cdc-4a08-9ea3-03dad00b5797\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d772471b770813d3613512f5748b5bbd05c06733053b007756c12b0bc41b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt6g4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-prw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:44Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:44 crc kubenswrapper[4786]: I1209 08:44:44.131761 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:44 crc kubenswrapper[4786]: I1209 08:44:44.131816 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:44 crc kubenswrapper[4786]: I1209 08:44:44.131832 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:44 crc kubenswrapper[4786]: I1209 08:44:44.131855 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:44 crc kubenswrapper[4786]: I1209 08:44:44.131870 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:44Z","lastTransitionTime":"2025-12-09T08:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:44 crc kubenswrapper[4786]: I1209 08:44:44.187884 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:44:44 crc kubenswrapper[4786]: E1209 08:44:44.188034 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 08:44:44 crc kubenswrapper[4786]: I1209 08:44:44.235392 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:44 crc kubenswrapper[4786]: I1209 08:44:44.235473 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:44 crc kubenswrapper[4786]: I1209 08:44:44.235489 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:44 crc kubenswrapper[4786]: I1209 08:44:44.235511 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:44 crc kubenswrapper[4786]: I1209 08:44:44.235525 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:44Z","lastTransitionTime":"2025-12-09T08:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:44 crc kubenswrapper[4786]: I1209 08:44:44.339021 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:44 crc kubenswrapper[4786]: I1209 08:44:44.339077 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:44 crc kubenswrapper[4786]: I1209 08:44:44.339099 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:44 crc kubenswrapper[4786]: I1209 08:44:44.339124 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:44 crc kubenswrapper[4786]: I1209 08:44:44.339146 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:44Z","lastTransitionTime":"2025-12-09T08:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:44 crc kubenswrapper[4786]: I1209 08:44:44.442887 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:44 crc kubenswrapper[4786]: I1209 08:44:44.442969 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:44 crc kubenswrapper[4786]: I1209 08:44:44.442991 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:44 crc kubenswrapper[4786]: I1209 08:44:44.443019 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:44 crc kubenswrapper[4786]: I1209 08:44:44.443037 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:44Z","lastTransitionTime":"2025-12-09T08:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:44 crc kubenswrapper[4786]: I1209 08:44:44.545513 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:44 crc kubenswrapper[4786]: I1209 08:44:44.545560 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:44 crc kubenswrapper[4786]: I1209 08:44:44.545571 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:44 crc kubenswrapper[4786]: I1209 08:44:44.545591 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:44 crc kubenswrapper[4786]: I1209 08:44:44.545602 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:44Z","lastTransitionTime":"2025-12-09T08:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:44 crc kubenswrapper[4786]: I1209 08:44:44.647784 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:44 crc kubenswrapper[4786]: I1209 08:44:44.647864 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:44 crc kubenswrapper[4786]: I1209 08:44:44.647889 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:44 crc kubenswrapper[4786]: I1209 08:44:44.647920 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:44 crc kubenswrapper[4786]: I1209 08:44:44.647943 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:44Z","lastTransitionTime":"2025-12-09T08:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:44 crc kubenswrapper[4786]: I1209 08:44:44.750627 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:44 crc kubenswrapper[4786]: I1209 08:44:44.750712 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:44 crc kubenswrapper[4786]: I1209 08:44:44.750738 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:44 crc kubenswrapper[4786]: I1209 08:44:44.750761 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:44 crc kubenswrapper[4786]: I1209 08:44:44.750778 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:44Z","lastTransitionTime":"2025-12-09T08:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:44 crc kubenswrapper[4786]: I1209 08:44:44.854008 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:44 crc kubenswrapper[4786]: I1209 08:44:44.854090 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:44 crc kubenswrapper[4786]: I1209 08:44:44.854123 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:44 crc kubenswrapper[4786]: I1209 08:44:44.854157 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:44 crc kubenswrapper[4786]: I1209 08:44:44.854178 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:44Z","lastTransitionTime":"2025-12-09T08:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:44 crc kubenswrapper[4786]: I1209 08:44:44.957595 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:44 crc kubenswrapper[4786]: I1209 08:44:44.957664 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:44 crc kubenswrapper[4786]: I1209 08:44:44.957676 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:44 crc kubenswrapper[4786]: I1209 08:44:44.957699 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:44 crc kubenswrapper[4786]: I1209 08:44:44.957713 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:44Z","lastTransitionTime":"2025-12-09T08:44:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.060377 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.060411 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.060439 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.060453 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.060464 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:45Z","lastTransitionTime":"2025-12-09T08:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.164349 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.164470 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.164496 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.164529 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.164569 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:45Z","lastTransitionTime":"2025-12-09T08:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.187380 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.187516 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 08:44:45 crc kubenswrapper[4786]: E1209 08:44:45.187841 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.187986 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v58s4" Dec 09 08:44:45 crc kubenswrapper[4786]: E1209 08:44:45.188203 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v58s4" podUID="e6f68306-ac39-4d61-8c27-12d69cc49a4f" Dec 09 08:44:45 crc kubenswrapper[4786]: E1209 08:44:45.188334 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.203815 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:45Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.217190 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c502d4-7f9e-4d39-a197-fa70dc4a56d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2d2cb4aa779f69cd01c7af8dd377a93fa75bc95ded7acd32a891de283722a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de20e796c231cbc6d525978434351e46d932f7d9
9236a97c078b8147ce5dabd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86k5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:45Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.235416 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbwb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://379f087d607d19a0cb71481b2a7236d2d4769898f612849793c7d0d0f8709e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:21Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbb1
ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbb1ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ec37a618991f1e7e06c04c95db8c40ac18b451bcb08ba568bad540481a740c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ec37a618991f1e7e06c04c95db8c40ac18b451bcb08ba568bad540481a740c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c10eb59172a342201e5c0fae9b836d296608057f6613791cab920df8aaa5850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c10eb59172a342201e5c0fae9b836d296608057f6613791cab920df8aaa5850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbwb7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:45Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.249695 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d3d4420-fb67-443a-8b97-195c4cf223e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7aff8b9b062294cc403e12f60e4e694e626aa9780ae652380afc5d5ebeb25e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ce8749c52af3d1b2b156b9adce27a80b026abdb2dc22308199f1e822ec9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77efb0cd063e6d566f75078ce7a1f2cfd31508d5024a101ebcb3aa3c609aaae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac36b7f7721d7c5cdcb9e2254974dab062cca2723d4e723582061ed2696e04c\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:45Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.262341 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:45Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.267797 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.268001 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.268016 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.268033 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.268043 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:45Z","lastTransitionTime":"2025-12-09T08:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.281124 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da17081a769df0b66be89013cce4c70d442cfd51b32fdb623bf94052cbcad5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:45Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.295131 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4192277cec4bacdf5b03306055ddad9476d291a086f54f61e8e84690c81bb459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\
"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:45Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.309092 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-prw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f37b1b5c-1cdc-4a08-9ea3-03dad00b5797\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d772471b770813d3613512f5748b5bbd05c06733053
b007756c12b0bc41b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt6g4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-prw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:45Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.335303 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce596179b4556f7ed491d4349bc32ebdf5d3f159de5bdd52766ea3245c342b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce596179b4556f7ed491d4349bc32ebdf5d3f159de5bdd52766ea3245c342b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T08:44:34Z\\\",\\\"message\\\":\\\"d openshift-multus/network-metrics-daemon-v58s4\\\\nI1209 08:44:34.176633 6522 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-network-console/networking-console-plugin]} 
name:Service_openshift-network-console/networking-console-plugin_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.246:9443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ab0b1d51-5ec6-479b-8881-93dfa8d30337}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1209 08:44:34.176671 6522 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-network-console/networking-console-plugin]} name:Service_openshift-network-console/networking-console-plugin_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.246:9443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ab0b1d51-5ec6-479b-8\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7sr4q_openshift-ovn-kubernetes(c8ebe4be-af09-4f22-9dee-af5f7d34bccf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63
813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7sr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:45Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.348464 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwjmr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"373c7fdc-f660-4323-a503-3e7a0dedb865\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6af8039885b93a07e1e521a89a1d240f914eb77cba9bb6db24d7ecdb1ba3954d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnjzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de62d60183229189c692a0ff01db0e36332ec
de2d2711846121c899befbbde08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnjzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwjmr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:45Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.363042 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd790c5e-461b-4682-88a8-e915bb3558fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56160ed128040dde583c3142616d9af9180427b9e32520d6330fd166e9c4207b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57fe092832b333e8fa6389429ad3fd571c3e0162f619fdd69efcb0b84e88bac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b3741b79ca6fb3fc12acaa290b9d7accb765f36a88f0b8fb6e6362c6596a58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f6f4abb61df9828cb9bec57c2f2fc4709a6cf12afc72467af5466723a4335f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T08:44:14Z\\\",\\\"message\\\":\\\"file observer\\\\nW1209 08:44:13.973493 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 08:44:13.973680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 08:44:13.974644 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-881789052/tls.crt::/tmp/serving-cert-881789052/tls.key\\\\\\\"\\\\nI1209 08:44:14.287293 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 08:44:14.289941 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 08:44:14.289961 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 08:44:14.289980 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 08:44:14.289985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 08:44:14.295151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 08:44:14.295345 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1209 08:44:14.295166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1209 08:44:14.295383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 08:44:14.295466 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 
08:44:14.295492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 08:44:14.295497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 08:44:14.295501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1209 08:44:14.297957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a164888526a3ab84fbd409540bc54403fadd586a0583e323d5ca1b10c81cbfc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc
35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:45Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.375527 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.375566 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.375577 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.375592 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.375603 4786 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:45Z","lastTransitionTime":"2025-12-09T08:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.380469 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44efb750-799f-44e4-831e-641b81c2f427\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9e51326999eccaabe872d93f35025f76fe1b7fa116ee0e903b008aabc3f1c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99f4b7e7d6c283d8d774a43140f27cf0418c5922dd1a11fe8d090d4d3de10661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://868a90cf80a3cef69e48a0e930c1fb742df308553d072e9ca12fec310bdce1aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1848f8b358c964159350bf72a27a84316dbc4051b5164004527770877f4e280c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1848f8b358c964159350bf72a27a84316dbc4051b5164004527770877f4e280c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:45Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.393656 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mq8pp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a680f02-0367-4814-9c73-aa5959bf952f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c30204fb4323c7780efec70dabc9074bcda3d7a20c81af161920f5c74075b4b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98fg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mq8pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:45Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.405571 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:45Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.417027 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50593d0d53a0888b66905e18d3ef9ae44e67a9d290aad760710060fb3ed14393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4ad85cbd79a4967d59f069ebadb27a1cd79229d287c2f1c9955396c71a1506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:45Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.429049 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-27hfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0a865e2-8504-473d-a23f-fc682d053a9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e7465beaa6a3df0b0ae7338d3035b58b92078b8d0b9646f5b59eccb66b4a505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-27hfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:45Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.442798 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v58s4" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f68306-ac39-4d61-8c27-12d69cc49a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhg5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhg5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v58s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:45Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:45 crc 
kubenswrapper[4786]: I1209 08:44:45.478103 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.478160 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.478176 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.478199 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.478213 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:45Z","lastTransitionTime":"2025-12-09T08:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.581167 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.581225 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.581242 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.581266 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.581280 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:45Z","lastTransitionTime":"2025-12-09T08:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.684070 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.684134 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.684152 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.684176 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.684193 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:45Z","lastTransitionTime":"2025-12-09T08:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.787627 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.787675 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.787685 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.787703 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.787714 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:45Z","lastTransitionTime":"2025-12-09T08:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.890774 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.890821 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.890830 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.890846 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.890855 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:45Z","lastTransitionTime":"2025-12-09T08:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.995004 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.995063 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.995076 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.995094 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:45 crc kubenswrapper[4786]: I1209 08:44:45.995107 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:45Z","lastTransitionTime":"2025-12-09T08:44:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:46 crc kubenswrapper[4786]: I1209 08:44:46.098869 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:46 crc kubenswrapper[4786]: I1209 08:44:46.098963 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:46 crc kubenswrapper[4786]: I1209 08:44:46.098987 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:46 crc kubenswrapper[4786]: I1209 08:44:46.099018 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:46 crc kubenswrapper[4786]: I1209 08:44:46.099042 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:46Z","lastTransitionTime":"2025-12-09T08:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:46 crc kubenswrapper[4786]: I1209 08:44:46.188295 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:44:46 crc kubenswrapper[4786]: E1209 08:44:46.188768 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 08:44:46 crc kubenswrapper[4786]: I1209 08:44:46.202377 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:46 crc kubenswrapper[4786]: I1209 08:44:46.202652 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:46 crc kubenswrapper[4786]: I1209 08:44:46.202734 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:46 crc kubenswrapper[4786]: I1209 08:44:46.202804 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:46 crc kubenswrapper[4786]: I1209 08:44:46.202866 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:46Z","lastTransitionTime":"2025-12-09T08:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:46 crc kubenswrapper[4786]: I1209 08:44:46.305899 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:46 crc kubenswrapper[4786]: I1209 08:44:46.305963 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:46 crc kubenswrapper[4786]: I1209 08:44:46.305977 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:46 crc kubenswrapper[4786]: I1209 08:44:46.305999 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:46 crc kubenswrapper[4786]: I1209 08:44:46.306013 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:46Z","lastTransitionTime":"2025-12-09T08:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:46 crc kubenswrapper[4786]: I1209 08:44:46.409278 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:46 crc kubenswrapper[4786]: I1209 08:44:46.409356 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:46 crc kubenswrapper[4786]: I1209 08:44:46.409367 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:46 crc kubenswrapper[4786]: I1209 08:44:46.409386 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:46 crc kubenswrapper[4786]: I1209 08:44:46.409398 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:46Z","lastTransitionTime":"2025-12-09T08:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:46 crc kubenswrapper[4786]: I1209 08:44:46.513242 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:46 crc kubenswrapper[4786]: I1209 08:44:46.513330 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:46 crc kubenswrapper[4786]: I1209 08:44:46.513342 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:46 crc kubenswrapper[4786]: I1209 08:44:46.513364 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:46 crc kubenswrapper[4786]: I1209 08:44:46.513377 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:46Z","lastTransitionTime":"2025-12-09T08:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:46 crc kubenswrapper[4786]: I1209 08:44:46.617337 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:46 crc kubenswrapper[4786]: I1209 08:44:46.617418 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:46 crc kubenswrapper[4786]: I1209 08:44:46.617482 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:46 crc kubenswrapper[4786]: I1209 08:44:46.617513 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:46 crc kubenswrapper[4786]: I1209 08:44:46.617537 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:46Z","lastTransitionTime":"2025-12-09T08:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:46 crc kubenswrapper[4786]: I1209 08:44:46.720483 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:46 crc kubenswrapper[4786]: I1209 08:44:46.720541 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:46 crc kubenswrapper[4786]: I1209 08:44:46.720555 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:46 crc kubenswrapper[4786]: I1209 08:44:46.720579 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:46 crc kubenswrapper[4786]: I1209 08:44:46.720594 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:46Z","lastTransitionTime":"2025-12-09T08:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:46 crc kubenswrapper[4786]: I1209 08:44:46.824217 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:46 crc kubenswrapper[4786]: I1209 08:44:46.824283 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:46 crc kubenswrapper[4786]: I1209 08:44:46.824298 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:46 crc kubenswrapper[4786]: I1209 08:44:46.824331 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:46 crc kubenswrapper[4786]: I1209 08:44:46.824350 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:46Z","lastTransitionTime":"2025-12-09T08:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:46 crc kubenswrapper[4786]: I1209 08:44:46.927809 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:46 crc kubenswrapper[4786]: I1209 08:44:46.927959 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:46 crc kubenswrapper[4786]: I1209 08:44:46.928047 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:46 crc kubenswrapper[4786]: I1209 08:44:46.928144 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:46 crc kubenswrapper[4786]: I1209 08:44:46.928244 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:46Z","lastTransitionTime":"2025-12-09T08:44:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 09 08:44:47 crc kubenswrapper[4786]: I1209 08:44:47.031207 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:44:47 crc kubenswrapper[4786]: I1209 08:44:47.031261 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:44:47 crc kubenswrapper[4786]: I1209 08:44:47.031277 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:44:47 crc kubenswrapper[4786]: I1209 08:44:47.031298 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:44:47 crc kubenswrapper[4786]: I1209 08:44:47.031312 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:47Z","lastTransitionTime":"2025-12-09T08:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:44:47 crc kubenswrapper[4786]: I1209 08:44:47.133407 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:44:47 crc kubenswrapper[4786]: I1209 08:44:47.133464 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:44:47 crc kubenswrapper[4786]: I1209 08:44:47.133472 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:44:47 crc kubenswrapper[4786]: I1209 08:44:47.133487 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:44:47 crc kubenswrapper[4786]: I1209 08:44:47.133500 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:47Z","lastTransitionTime":"2025-12-09T08:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:44:47 crc kubenswrapper[4786]: I1209 08:44:47.187863 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 08:44:47 crc kubenswrapper[4786]: I1209 08:44:47.187942 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v58s4"
Dec 09 08:44:47 crc kubenswrapper[4786]: I1209 08:44:47.187865 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 08:44:47 crc kubenswrapper[4786]: E1209 08:44:47.188141 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 09 08:44:47 crc kubenswrapper[4786]: E1209 08:44:47.188306 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 09 08:44:47 crc kubenswrapper[4786]: E1209 08:44:47.188522 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v58s4" podUID="e6f68306-ac39-4d61-8c27-12d69cc49a4f"
Dec 09 08:44:47 crc kubenswrapper[4786]: I1209 08:44:47.237569 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:44:47 crc kubenswrapper[4786]: I1209 08:44:47.237632 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:44:47 crc kubenswrapper[4786]: I1209 08:44:47.237642 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:44:47 crc kubenswrapper[4786]: I1209 08:44:47.237664 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:44:47 crc kubenswrapper[4786]: I1209 08:44:47.237680 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:47Z","lastTransitionTime":"2025-12-09T08:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:44:47 crc kubenswrapper[4786]: I1209 08:44:47.341114 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:44:47 crc kubenswrapper[4786]: I1209 08:44:47.341185 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:44:47 crc kubenswrapper[4786]: I1209 08:44:47.341196 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:44:47 crc kubenswrapper[4786]: I1209 08:44:47.341215 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:44:47 crc kubenswrapper[4786]: I1209 08:44:47.341228 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:47Z","lastTransitionTime":"2025-12-09T08:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:44:47 crc kubenswrapper[4786]: I1209 08:44:47.444456 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:44:47 crc kubenswrapper[4786]: I1209 08:44:47.444508 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:44:47 crc kubenswrapper[4786]: I1209 08:44:47.444521 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:44:47 crc kubenswrapper[4786]: I1209 08:44:47.444724 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:44:47 crc kubenswrapper[4786]: I1209 08:44:47.444736 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:47Z","lastTransitionTime":"2025-12-09T08:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:44:47 crc kubenswrapper[4786]: I1209 08:44:47.547349 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:44:47 crc kubenswrapper[4786]: I1209 08:44:47.547408 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:44:47 crc kubenswrapper[4786]: I1209 08:44:47.547459 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:44:47 crc kubenswrapper[4786]: I1209 08:44:47.547498 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:44:47 crc kubenswrapper[4786]: I1209 08:44:47.547517 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:47Z","lastTransitionTime":"2025-12-09T08:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:44:47 crc kubenswrapper[4786]: I1209 08:44:47.562695 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e6f68306-ac39-4d61-8c27-12d69cc49a4f-metrics-certs\") pod \"network-metrics-daemon-v58s4\" (UID: \"e6f68306-ac39-4d61-8c27-12d69cc49a4f\") " pod="openshift-multus/network-metrics-daemon-v58s4"
Dec 09 08:44:47 crc kubenswrapper[4786]: E1209 08:44:47.562885 4786 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 09 08:44:47 crc kubenswrapper[4786]: E1209 08:44:47.562942 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6f68306-ac39-4d61-8c27-12d69cc49a4f-metrics-certs podName:e6f68306-ac39-4d61-8c27-12d69cc49a4f nodeName:}" failed. No retries permitted until 2025-12-09 08:45:03.562924287 +0000 UTC m=+69.446545513 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e6f68306-ac39-4d61-8c27-12d69cc49a4f-metrics-certs") pod "network-metrics-daemon-v58s4" (UID: "e6f68306-ac39-4d61-8c27-12d69cc49a4f") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 09 08:44:47 crc kubenswrapper[4786]: I1209 08:44:47.675026 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:44:47 crc kubenswrapper[4786]: I1209 08:44:47.675151 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:44:47 crc kubenswrapper[4786]: I1209 08:44:47.675167 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:44:47 crc kubenswrapper[4786]: I1209 08:44:47.675194 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:44:47 crc kubenswrapper[4786]: I1209 08:44:47.675210 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:47Z","lastTransitionTime":"2025-12-09T08:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:44:47 crc kubenswrapper[4786]: I1209 08:44:47.778722 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:44:47 crc kubenswrapper[4786]: I1209 08:44:47.778791 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:44:47 crc kubenswrapper[4786]: I1209 08:44:47.778807 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:44:47 crc kubenswrapper[4786]: I1209 08:44:47.778829 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:44:47 crc kubenswrapper[4786]: I1209 08:44:47.778848 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:47Z","lastTransitionTime":"2025-12-09T08:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:44:47 crc kubenswrapper[4786]: I1209 08:44:47.882850 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:44:47 crc kubenswrapper[4786]: I1209 08:44:47.882946 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:44:47 crc kubenswrapper[4786]: I1209 08:44:47.882965 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:44:47 crc kubenswrapper[4786]: I1209 08:44:47.883041 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:44:47 crc kubenswrapper[4786]: I1209 08:44:47.883059 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:47Z","lastTransitionTime":"2025-12-09T08:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:44:47 crc kubenswrapper[4786]: I1209 08:44:47.987300 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:44:47 crc kubenswrapper[4786]: I1209 08:44:47.987345 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:44:47 crc kubenswrapper[4786]: I1209 08:44:47.987356 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:44:47 crc kubenswrapper[4786]: I1209 08:44:47.987373 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:44:47 crc kubenswrapper[4786]: I1209 08:44:47.987383 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:47Z","lastTransitionTime":"2025-12-09T08:44:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:44:48 crc kubenswrapper[4786]: I1209 08:44:48.090031 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:44:48 crc kubenswrapper[4786]: I1209 08:44:48.090105 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:44:48 crc kubenswrapper[4786]: I1209 08:44:48.090118 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:44:48 crc kubenswrapper[4786]: I1209 08:44:48.090139 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:44:48 crc kubenswrapper[4786]: I1209 08:44:48.090151 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:48Z","lastTransitionTime":"2025-12-09T08:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:44:48 crc kubenswrapper[4786]: I1209 08:44:48.187513 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 08:44:48 crc kubenswrapper[4786]: E1209 08:44:48.187689 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 09 08:44:48 crc kubenswrapper[4786]: I1209 08:44:48.193942 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:44:48 crc kubenswrapper[4786]: I1209 08:44:48.194001 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:44:48 crc kubenswrapper[4786]: I1209 08:44:48.194019 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:44:48 crc kubenswrapper[4786]: I1209 08:44:48.194039 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:44:48 crc kubenswrapper[4786]: I1209 08:44:48.194052 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:48Z","lastTransitionTime":"2025-12-09T08:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:44:48 crc kubenswrapper[4786]: I1209 08:44:48.305567 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:44:48 crc kubenswrapper[4786]: I1209 08:44:48.305650 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:44:48 crc kubenswrapper[4786]: I1209 08:44:48.305679 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:44:48 crc kubenswrapper[4786]: I1209 08:44:48.305710 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:44:48 crc kubenswrapper[4786]: I1209 08:44:48.305735 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:48Z","lastTransitionTime":"2025-12-09T08:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:44:48 crc kubenswrapper[4786]: I1209 08:44:48.409103 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:44:48 crc kubenswrapper[4786]: I1209 08:44:48.409185 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:44:48 crc kubenswrapper[4786]: I1209 08:44:48.409201 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:44:48 crc kubenswrapper[4786]: I1209 08:44:48.409231 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:44:48 crc kubenswrapper[4786]: I1209 08:44:48.409251 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:48Z","lastTransitionTime":"2025-12-09T08:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:44:48 crc kubenswrapper[4786]: I1209 08:44:48.512923 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:44:48 crc kubenswrapper[4786]: I1209 08:44:48.513022 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:44:48 crc kubenswrapper[4786]: I1209 08:44:48.513086 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:44:48 crc kubenswrapper[4786]: I1209 08:44:48.513115 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:44:48 crc kubenswrapper[4786]: I1209 08:44:48.513134 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:48Z","lastTransitionTime":"2025-12-09T08:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:44:48 crc kubenswrapper[4786]: I1209 08:44:48.616205 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:44:48 crc kubenswrapper[4786]: I1209 08:44:48.616269 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:44:48 crc kubenswrapper[4786]: I1209 08:44:48.616287 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:44:48 crc kubenswrapper[4786]: I1209 08:44:48.616331 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:44:48 crc kubenswrapper[4786]: I1209 08:44:48.616350 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:48Z","lastTransitionTime":"2025-12-09T08:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:44:48 crc kubenswrapper[4786]: I1209 08:44:48.719152 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:44:48 crc kubenswrapper[4786]: I1209 08:44:48.719193 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:44:48 crc kubenswrapper[4786]: I1209 08:44:48.719202 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:44:48 crc kubenswrapper[4786]: I1209 08:44:48.719218 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:44:48 crc kubenswrapper[4786]: I1209 08:44:48.719227 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:48Z","lastTransitionTime":"2025-12-09T08:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:44:48 crc kubenswrapper[4786]: I1209 08:44:48.822292 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:44:48 crc kubenswrapper[4786]: I1209 08:44:48.822356 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:44:48 crc kubenswrapper[4786]: I1209 08:44:48.822397 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:44:48 crc kubenswrapper[4786]: I1209 08:44:48.822478 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:44:48 crc kubenswrapper[4786]: I1209 08:44:48.822516 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:48Z","lastTransitionTime":"2025-12-09T08:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:44:48 crc kubenswrapper[4786]: I1209 08:44:48.925380 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:44:48 crc kubenswrapper[4786]: I1209 08:44:48.925462 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:44:48 crc kubenswrapper[4786]: I1209 08:44:48.925473 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:44:48 crc kubenswrapper[4786]: I1209 08:44:48.925493 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:44:48 crc kubenswrapper[4786]: I1209 08:44:48.925508 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:48Z","lastTransitionTime":"2025-12-09T08:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:44:48 crc kubenswrapper[4786]: I1209 08:44:48.958827 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:44:48 crc kubenswrapper[4786]: I1209 08:44:48.958908 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:44:48 crc kubenswrapper[4786]: I1209 08:44:48.958923 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:44:48 crc kubenswrapper[4786]: I1209 08:44:48.958940 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:44:48 crc kubenswrapper[4786]: I1209 08:44:48.958952 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:48Z","lastTransitionTime":"2025-12-09T08:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:44:48 crc kubenswrapper[4786]: E1209 08:44:48.975695 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"20132b6b-eea8-47f7-95fa-f658d05fe362\\\",\\\"systemUUID\\\":\\\"d02a19a2-cd20-41fc-84e9-65362968df1f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:48Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:48 crc kubenswrapper[4786]: I1209 08:44:48.981304 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:48 crc kubenswrapper[4786]: I1209 08:44:48.981394 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:48 crc kubenswrapper[4786]: I1209 08:44:48.981409 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:48 crc kubenswrapper[4786]: I1209 08:44:48.981464 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:48 crc kubenswrapper[4786]: I1209 08:44:48.981481 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:48Z","lastTransitionTime":"2025-12-09T08:44:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:48 crc kubenswrapper[4786]: E1209 08:44:48.998325 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:48Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:48Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:48Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:48Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"20132b6b-eea8-47f7-95fa-f658d05fe362\\\",\\\"systemUUID\\\":\\\"d02a19a2-cd20-41fc-84e9-65362968df1f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:48Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.002855 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.002913 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.002926 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.002952 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.002966 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:49Z","lastTransitionTime":"2025-12-09T08:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:49 crc kubenswrapper[4786]: E1209 08:44:49.024242 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"20132b6b-eea8-47f7-95fa-f658d05fe362\\\",\\\"systemUUID\\\":\\\"d02a19a2-cd20-41fc-84e9-65362968df1f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:49Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.029945 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.030020 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.030035 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.030057 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.030072 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:49Z","lastTransitionTime":"2025-12-09T08:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:49 crc kubenswrapper[4786]: E1209 08:44:49.047261 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"20132b6b-eea8-47f7-95fa-f658d05fe362\\\",\\\"systemUUID\\\":\\\"d02a19a2-cd20-41fc-84e9-65362968df1f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:49Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.052849 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.052943 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.052975 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.053018 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.053037 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:49Z","lastTransitionTime":"2025-12-09T08:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:49 crc kubenswrapper[4786]: E1209 08:44:49.068849 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"20132b6b-eea8-47f7-95fa-f658d05fe362\\\",\\\"systemUUID\\\":\\\"d02a19a2-cd20-41fc-84e9-65362968df1f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:49Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:49 crc kubenswrapper[4786]: E1209 08:44:49.068995 4786 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.070873 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.070913 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.070922 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.070943 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.070952 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:49Z","lastTransitionTime":"2025-12-09T08:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.086402 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:44:49 crc kubenswrapper[4786]: E1209 08:44:49.086577 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:45:21.086548406 +0000 UTC m=+86.970169662 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.086659 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.086720 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:44:49 crc kubenswrapper[4786]: E1209 08:44:49.086753 4786 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 08:44:49 crc kubenswrapper[4786]: E1209 08:44:49.086822 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 08:45:21.086806662 +0000 UTC m=+86.970427908 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 08:44:49 crc kubenswrapper[4786]: E1209 08:44:49.086852 4786 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 08:44:49 crc kubenswrapper[4786]: E1209 08:44:49.086886 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 08:45:21.086876603 +0000 UTC m=+86.970497829 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.175137 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.175229 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.175248 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.175323 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.175340 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:49Z","lastTransitionTime":"2025-12-09T08:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.187066 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.187149 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-v58s4" Dec 09 08:44:49 crc kubenswrapper[4786]: E1209 08:44:49.187278 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.187407 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 08:44:49 crc kubenswrapper[4786]: E1209 08:44:49.187488 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v58s4" podUID="e6f68306-ac39-4d61-8c27-12d69cc49a4f" Dec 09 08:44:49 crc kubenswrapper[4786]: E1209 08:44:49.187663 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.187682 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 08:44:49 crc kubenswrapper[4786]: E1209 08:44:49.187900 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 08:44:49 crc kubenswrapper[4786]: E1209 08:44:49.187926 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 08:44:49 crc kubenswrapper[4786]: E1209 08:44:49.187940 4786 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 08:44:49 crc kubenswrapper[4786]: E1209 08:44:49.188007 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-09 08:45:21.187981173 +0000 UTC m=+87.071602579 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 08:44:49 crc kubenswrapper[4786]: E1209 08:44:49.188257 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 08:44:49 crc kubenswrapper[4786]: E1209 08:44:49.188312 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 08:44:49 crc kubenswrapper[4786]: E1209 08:44:49.188333 4786 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 08:44:49 crc kubenswrapper[4786]: E1209 08:44:49.188459 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-09 08:45:21.188406894 +0000 UTC m=+87.072028300 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.187974 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.278802 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.278891 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.278903 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.278929 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.278942 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:49Z","lastTransitionTime":"2025-12-09T08:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.381884 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.381940 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.381953 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.381974 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.381990 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:49Z","lastTransitionTime":"2025-12-09T08:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.484524 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.484584 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.484596 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.484614 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.484629 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:49Z","lastTransitionTime":"2025-12-09T08:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.616943 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.617010 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.617021 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.617041 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.617052 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:49Z","lastTransitionTime":"2025-12-09T08:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.719877 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.719917 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.719928 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.719944 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.719957 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:49Z","lastTransitionTime":"2025-12-09T08:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.758989 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.777126 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4192277cec4bacdf5b03306055ddad9476d291a086f54f61e8e84690c81bb459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/
run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:49Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.794463 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-prw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f37b1b5c-1cdc-4a08-9ea3-03dad00b5797\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d772471b770813d3613512f5748b5bbd05c06733053b007756c12b0bc41b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt6g4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-prw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:49Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.815725 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd790c5e-461b-4682-88a8-e915bb3558fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56160ed128040dde583c3142616d9af9180427b9e32520d6330fd166e9c4207b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57fe092832b333e8fa6389429ad3fd571c3e0162f619fdd69efcb0b84e88bac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b3741b79ca6fb3fc12acaa290b9d7accb765f36a88f0b8fb6e6362c6596a58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f6f4abb61df9828cb9bec57c2f2fc4709a6cf12afc72467af5466723a4335f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T08:44:14Z\\\"
,\\\"message\\\":\\\"file observer\\\\nW1209 08:44:13.973493 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 08:44:13.973680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 08:44:13.974644 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-881789052/tls.crt::/tmp/serving-cert-881789052/tls.key\\\\\\\"\\\\nI1209 08:44:14.287293 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 08:44:14.289941 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 08:44:14.289961 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 08:44:14.289980 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 08:44:14.289985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 08:44:14.295151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 08:44:14.295345 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1209 08:44:14.295166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1209 08:44:14.295383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 08:44:14.295466 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 08:44:14.295492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 08:44:14.295497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 08:44:14.295501 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1209 08:44:14.297957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a164888526a3ab84fbd409540bc54403fadd586a0583e323d5ca1b10c81cbfc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc97
37159190cf3281bcb4882cec32341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:49Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.823040 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.823105 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.823116 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.823138 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.823151 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:49Z","lastTransitionTime":"2025-12-09T08:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.831960 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44efb750-799f-44e4-831e-641b81c2f427\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9e51326999eccaabe872d93f35025f76fe1b7fa116ee0e903b008aabc3f1c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99f4b7e7d6c283d8d774a43140f27c
f0418c5922dd1a11fe8d090d4d3de10661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://868a90cf80a3cef69e48a0e930c1fb742df308553d072e9ca12fec310bdce1aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1848f8b358c964159350bf72a27a84316dbc4051b5164004527770877f4e280c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1848f8b358c964159350bf72a27a84316dbc4051b5164004527770877f4e280c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:49Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.848467 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mq8pp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a680f02-0367-4814-9c73-aa5959bf952f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c30204fb4323c7780efec70dabc9074bcda3d7a20c81af161920f5c74075b4b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98fg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mq8pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:49Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.878470 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cce596179b4556f7ed491d4349bc32ebdf5d3f159de5bdd52766ea3245c342b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce596179b4556f7ed491d4349bc32ebdf5d3f159de5bdd52766ea3245c342b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T08:44:34Z\\\",\\\"message\\\":\\\"d openshift-multus/network-metrics-daemon-v58s4\\\\nI1209 08:44:34.176633 6522 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-network-console/networking-console-plugin]} 
name:Service_openshift-network-console/networking-console-plugin_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.246:9443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ab0b1d51-5ec6-479b-8881-93dfa8d30337}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1209 08:44:34.176671 6522 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-network-console/networking-console-plugin]} name:Service_openshift-network-console/networking-console-plugin_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.246:9443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ab0b1d51-5ec6-479b-8\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7sr4q_openshift-ovn-kubernetes(c8ebe4be-af09-4f22-9dee-af5f7d34bccf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63
813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7sr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:49Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.893067 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwjmr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"373c7fdc-f660-4323-a503-3e7a0dedb865\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6af8039885b93a07e1e521a89a1d240f914eb77cba9bb6db24d7ecdb1ba3954d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnjzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de62d60183229189c692a0ff01db0e36332ec
de2d2711846121c899befbbde08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnjzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwjmr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:49Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.909557 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:49Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.925843 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.925880 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.925889 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.925902 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.925926 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:49Z","lastTransitionTime":"2025-12-09T08:44:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.926245 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50593d0d53a0888b66905e18d3ef9ae44e67a9d290aad760710060fb3ed14393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://3e4ad85cbd79a4967d59f069ebadb27a1cd79229d287c2f1c9955396c71a1506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:49Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.942405 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-27hfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0a865e2-8504-473d-a23f-fc682d053a9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e7465beaa6a3df0b0ae7338d3035b58b92078b8d0b9646f5b59eccb66b4a505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-27hfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:49Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.954569 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v58s4" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f68306-ac39-4d61-8c27-12d69cc49a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhg5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhg5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v58s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:49Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:49 crc 
kubenswrapper[4786]: I1209 08:44:49.971983 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d3d4420-fb67-443a-8b97-195c4cf223e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7aff8b9b062294cc403e12f60e4e694e626aa9780ae652380afc5d5ebeb25e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ce8749c52af3d1b2b156b9adce27a80b026abdb2dc22308199f1e822ec9867\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77efb0cd063e6d566f75078ce7a1f2cfd31508d5024a101ebcb3aa3c609aaae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac36b7f7721d7c5cdcb9e2254974dab062cca2723d4e723582061ed2696e04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:49Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.985108 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:49Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:49 crc kubenswrapper[4786]: I1209 08:44:49.997550 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da17081a769df0b66be89013cce4c70d442cfd51b32fdb623bf94052cbcad5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-09T08:44:49Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.011126 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:50Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.026979 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c502d4-7f9e-4d39-a197-fa70dc4a56d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2d2cb4aa779f69cd01c7af8dd377a93fa75bc95ded7acd32a891de283722a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de20e796c231cbc6d525978434351e46d932f7d9
9236a97c078b8147ce5dabd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86k5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:50Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.028642 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.028696 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.028708 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:50 crc 
kubenswrapper[4786]: I1209 08:44:50.028725 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.028734 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:50Z","lastTransitionTime":"2025-12-09T08:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.039990 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbwb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://379f087d607d19a0cb71481b2a7236d2d4769898f612849793c7d0d0f8709e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbb1ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbb1ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ec37a618991f1e7e06c04c95db8c40ac18b451bcb08ba568bad540481a740c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ec37a618991f1e7e06c04c95db8c40ac18b451bcb08ba568bad540481a740c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c10eb59172a342201e5c0fae9b836d296608057f6613791cab920df8aaa5850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c
10eb59172a342201e5c0fae9b836d296608057f6613791cab920df8aaa5850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbwb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:50Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.137650 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.137714 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.137732 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.137757 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.137774 4786 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:50Z","lastTransitionTime":"2025-12-09T08:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.188245 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.188452 4786 scope.go:117] "RemoveContainer" containerID="cce596179b4556f7ed491d4349bc32ebdf5d3f159de5bdd52766ea3245c342b4" Dec 09 08:44:50 crc kubenswrapper[4786]: E1209 08:44:50.188629 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.240737 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.241248 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.241269 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.241295 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.241310 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:50Z","lastTransitionTime":"2025-12-09T08:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.344578 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.344650 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.344670 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.344696 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.344714 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:50Z","lastTransitionTime":"2025-12-09T08:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.448587 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.448653 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.448682 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.448714 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.448738 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:50Z","lastTransitionTime":"2025-12-09T08:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.553461 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.553515 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.553530 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.553547 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.553559 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:50Z","lastTransitionTime":"2025-12-09T08:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.656086 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.656151 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.656165 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.656186 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.656200 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:50Z","lastTransitionTime":"2025-12-09T08:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.759242 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.759303 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.759313 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.759340 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.759351 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:50Z","lastTransitionTime":"2025-12-09T08:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.864855 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.864922 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.864943 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.864976 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.864993 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:50Z","lastTransitionTime":"2025-12-09T08:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.895790 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7sr4q_c8ebe4be-af09-4f22-9dee-af5f7d34bccf/ovnkube-controller/1.log" Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.899838 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" event={"ID":"c8ebe4be-af09-4f22-9dee-af5f7d34bccf","Type":"ContainerStarted","Data":"dcc312d8c35d19e4c5637ba885eb19cce98bd3a39564a6370d26e487c72a6785"} Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.901275 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.915723 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c502d4-7f9e-4d39-a197-fa70dc4a56d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2d2cb4a
a779f69cd01c7af8dd377a93fa75bc95ded7acd32a891de283722a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de20e796c231cbc6d525978434351e46d932f7d99236a97c078b8147ce5dabd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86k5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:50Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.929202 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbwb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://379f087d607d19a0cb71481b2a7236d2d4769898f612849793c7d0d0f8709e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b
26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"image\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-09T08:44:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbb1ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbb1ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ec37a618991f1e7e06c04c95db8c40ac18b451bcb08ba568ba
d540481a740c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ec37a618991f1e7e06c04c95db8c40ac18b451bcb08ba568bad540481a740c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c10eb59172a342201e5c0fae9b836d296608057f6613791cab920df8aaa5850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c10eb59172a342201e5c0fae9b836d296608057f6613791cab920df8aaa5850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:
44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbwb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:50Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.949176 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d3d4420-fb67-443a-8b97-195c4cf223e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7aff8b9b062294cc403e12f60e4e694e626aa9780ae652380afc5d5ebeb25e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ce8749c52af3d1b2b156b9adce27a80b026abdb2dc22308199f1e822ec9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77efb0cd063e6d566f75078ce7a1f2cfd31508d5024a101ebcb3aa3c609aaae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac36b7f7721d7c5cdcb9e2254974dab062cca2723d4e723582061ed2696e04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:50Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.963990 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:50Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.968152 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.968235 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.968250 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 
08:44:50.968329 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.968343 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:50Z","lastTransitionTime":"2025-12-09T08:44:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.978167 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da17081a769df0b66be89013cce4c70d442cfd51b32fdb623bf94052cbcad5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:50Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:50 crc kubenswrapper[4786]: I1209 08:44:50.993282 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:50Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.009969 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4192277cec4bacdf5b03306055ddad9476d291a086f54f61e8e84690c81bb459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:51Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.025080 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-prw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f37b1b5c-1cdc-4a08-9ea3-03dad00b5797\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d772471b770813d3613512f5748b5bbd05c06733053b007756c12b0bc41b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt6g4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-prw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:51Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.040933 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwjmr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"373c7fdc-f660-4323-a503-3e7a0dedb865\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6af8039885b93a07e1e521a89a1d240f914eb77cba9bb6db24d7ecdb1ba3954d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnjzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de62d60183229189c692a0ff01db0e36332ec
de2d2711846121c899befbbde08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnjzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwjmr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:51Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.057353 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd790c5e-461b-4682-88a8-e915bb3558fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56160ed128040dde583c3142616d9af9180427b9e32520d6330fd166e9c4207b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57fe092832b333e8fa6389429ad3fd571c3e0162f619fdd69efcb0b84e88bac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b3741b79ca6fb3fc12acaa290b9d7accb765f36a88f0b8fb6e6362c6596a58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f6f4abb61df9828cb9bec57c2f2fc4709a6cf12afc72467af5466723a4335f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T08:44:14Z\\\"
,\\\"message\\\":\\\"file observer\\\\nW1209 08:44:13.973493 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 08:44:13.973680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 08:44:13.974644 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-881789052/tls.crt::/tmp/serving-cert-881789052/tls.key\\\\\\\"\\\\nI1209 08:44:14.287293 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 08:44:14.289941 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 08:44:14.289961 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 08:44:14.289980 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 08:44:14.289985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 08:44:14.295151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 08:44:14.295345 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1209 08:44:14.295166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1209 08:44:14.295383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 08:44:14.295466 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 08:44:14.295492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 08:44:14.295497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 08:44:14.295501 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1209 08:44:14.297957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a164888526a3ab84fbd409540bc54403fadd586a0583e323d5ca1b10c81cbfc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc97
37159190cf3281bcb4882cec32341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:51Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.070846 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.070878 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.070889 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.070905 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.070921 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:51Z","lastTransitionTime":"2025-12-09T08:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.072703 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44efb750-799f-44e4-831e-641b81c2f427\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9e51326999eccaabe872d93f35025f76fe1b7fa116ee0e903b008aabc3f1c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99f4b7e7d6c283d8d774a43140f27c
f0418c5922dd1a11fe8d090d4d3de10661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://868a90cf80a3cef69e48a0e930c1fb742df308553d072e9ca12fec310bdce1aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1848f8b358c964159350bf72a27a84316dbc4051b5164004527770877f4e280c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1848f8b358c964159350bf72a27a84316dbc4051b5164004527770877f4e280c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:51Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.087108 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mq8pp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a680f02-0367-4814-9c73-aa5959bf952f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c30204fb4323c7780efec70dabc9074bcda3d7a20c81af161920f5c74075b4b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98fg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mq8pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:51Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.113122 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcc312d8c35d19e4c5637ba885eb19cce98bd3a39564a6370d26e487c72a6785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce596179b4556f7ed491d4349bc32ebdf5d3f159de5bdd52766ea3245c342b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T08:44:34Z\\\",\\\"message\\\":\\\"d openshift-multus/network-metrics-daemon-v58s4\\\\nI1209 08:44:34.176633 6522 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-network-console/networking-console-plugin]} 
name:Service_openshift-network-console/networking-console-plugin_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.246:9443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ab0b1d51-5ec6-479b-8881-93dfa8d30337}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1209 08:44:34.176671 6522 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-network-console/networking-console-plugin]} name:Service_openshift-network-console/networking-console-plugin_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.246:9443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == 
{ab0b1d51-5ec6-479b-8\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-conf
ig/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-se
tup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7sr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:51Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.128151 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:51Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.141615 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50593d0d53a0888b66905e18d3ef9ae44e67a9d290aad760710060fb3ed14393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4ad85cbd79a4967d59f069ebadb27a1cd79229d287c2f1c9955396c71a1506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:51Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.155591 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-27hfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0a865e2-8504-473d-a23f-fc682d053a9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e7465beaa6a3df0b0ae7338d3035b58b92078b8d0b9646f5b59eccb66b4a505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-27hfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:51Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.168357 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v58s4" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f68306-ac39-4d61-8c27-12d69cc49a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhg5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhg5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v58s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:51Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:51 crc 
kubenswrapper[4786]: I1209 08:44:51.173398 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.173500 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.173516 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.173542 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.173556 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:51Z","lastTransitionTime":"2025-12-09T08:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.187610 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.187672 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v58s4" Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.187722 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 08:44:51 crc kubenswrapper[4786]: E1209 08:44:51.187792 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 08:44:51 crc kubenswrapper[4786]: E1209 08:44:51.187883 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 08:44:51 crc kubenswrapper[4786]: E1209 08:44:51.187980 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v58s4" podUID="e6f68306-ac39-4d61-8c27-12d69cc49a4f" Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.276053 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.276107 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.276118 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.276137 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.276149 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:51Z","lastTransitionTime":"2025-12-09T08:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.380003 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.380044 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.380052 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.380068 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.380078 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:51Z","lastTransitionTime":"2025-12-09T08:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.483839 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.483890 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.483900 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.483915 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.483950 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:51Z","lastTransitionTime":"2025-12-09T08:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.587588 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.587646 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.587663 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.587692 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.587712 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:51Z","lastTransitionTime":"2025-12-09T08:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.691184 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.691241 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.691255 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.691274 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.691286 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:51Z","lastTransitionTime":"2025-12-09T08:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.793734 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.793773 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.793784 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.793803 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.793815 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:51Z","lastTransitionTime":"2025-12-09T08:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.897202 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.897283 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.897307 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.897341 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.897366 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:51Z","lastTransitionTime":"2025-12-09T08:44:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.906243 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7sr4q_c8ebe4be-af09-4f22-9dee-af5f7d34bccf/ovnkube-controller/2.log" Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.907102 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7sr4q_c8ebe4be-af09-4f22-9dee-af5f7d34bccf/ovnkube-controller/1.log" Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.910790 4786 generic.go:334] "Generic (PLEG): container finished" podID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" containerID="dcc312d8c35d19e4c5637ba885eb19cce98bd3a39564a6370d26e487c72a6785" exitCode=1 Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.910842 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" event={"ID":"c8ebe4be-af09-4f22-9dee-af5f7d34bccf","Type":"ContainerDied","Data":"dcc312d8c35d19e4c5637ba885eb19cce98bd3a39564a6370d26e487c72a6785"} Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.910912 4786 scope.go:117] "RemoveContainer" containerID="cce596179b4556f7ed491d4349bc32ebdf5d3f159de5bdd52766ea3245c342b4" Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.912463 4786 scope.go:117] "RemoveContainer" containerID="dcc312d8c35d19e4c5637ba885eb19cce98bd3a39564a6370d26e487c72a6785" Dec 09 08:44:51 crc kubenswrapper[4786]: E1209 08:44:51.912728 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-7sr4q_openshift-ovn-kubernetes(c8ebe4be-af09-4f22-9dee-af5f7d34bccf)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" podUID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.931282 4786 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4192277cec4bacdf5b03306055ddad9476d291a086f54f61e8e84690c81bb459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:51Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.944477 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-prw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f37b1b5c-1cdc-4a08-9ea3-03dad00b5797\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d772471b770813d3613512f5748b5bbd05c06733053b007756c12b0bc41b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountP
ath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt6g4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-prw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:51Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.957146 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mq8pp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a680f02-0367-4814-9c73-aa5959bf952f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c30204fb4323c7780efec70dabc9074bcda3d7a20c81af161920f5c74075b4b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98fg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mq8pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:51Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.977221 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcc312d8c35d19e4c5637ba885eb19cce98bd3a39564a6370d26e487c72a6785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://cce596179b4556f7ed491d4349bc32ebdf5d3f159de5bdd52766ea3245c342b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T08:44:34Z\\\",\\\"message\\\":\\\"d openshift-multus/network-metrics-daemon-v58s4\\\\nI1209 08:44:34.176633 6522 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-network-console/networking-console-plugin]} 
name:Service_openshift-network-console/networking-console-plugin_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.246:9443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ab0b1d51-5ec6-479b-8881-93dfa8d30337}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1209 08:44:34.176671 6522 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-network-console/networking-console-plugin]} name:Service_openshift-network-console/networking-console-plugin_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.246:9443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {ab0b1d51-5ec6-479b-8\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dcc312d8c35d19e4c5637ba885eb19cce98bd3a39564a6370d26e487c72a6785\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T08:44:51Z\\\",\\\"message\\\":\\\"rver-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc006d6f1fb \\\\u003cnil\\\\u003e}] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: kube-apiserver-operator,},ClusterIP:10.217.5.109,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.109],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1209 08:44:51.101360 6732 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, 
handle\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408a
f5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7sr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:51Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:51 crc kubenswrapper[4786]: I1209 08:44:51.992823 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwjmr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"373c7fdc-f660-4323-a503-3e7a0dedb865\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6af8039885b93a07e1e521a89a1d240f914eb77cba9bb6db24d7ecdb1ba3954d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnjzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de62d60183229189c692a0ff01db0e36332ec
de2d2711846121c899befbbde08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnjzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwjmr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:51Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.001414 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.001468 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.001479 4786 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.001499 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.001519 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:52Z","lastTransitionTime":"2025-12-09T08:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.013087 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd790c5e-461b-4682-88a8-e915bb3558fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56160ed128040dde583c3142616d9af9180427b9e32520d6330fd166e9c4207b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35
825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57fe092832b333e8fa6389429ad3fd571c3e0162f619fdd69efcb0b84e88bac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b3741b79ca6fb3fc12acaa290b9d7accb765f36a88f0b8fb6e6362c6596a58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f6f4abb61df9828cb9bec57c2f2fc4709a6cf12afc72467af5466723a4335f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T08:44:14Z\\\",\\\"message\\\":\\\"file observer\\\\nW1209 08:44:13.973493 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 08:44:13.973680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 08:44:13.974644 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-881789052/tls.crt::/tmp/serving-cert-881789052/tls.key\\\\\\\"\\\\nI1209 08:44:14.287293 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 08:44:14.289941 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 08:44:14.289961 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 08:44:14.289980 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 08:44:14.289985 1 maxinflight.go:120] 
\\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 08:44:14.295151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 08:44:14.295345 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1209 08:44:14.295166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1209 08:44:14.295383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 08:44:14.295466 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 08:44:14.295492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 08:44:14.295497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 08:44:14.295501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1209 08:44:14.297957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a164888526a3ab84fbd409540bc54403fadd586a0583e323d5ca1b10c81cbfc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:52Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.027095 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44efb750-799f-44e4-831e-641b81c2f427\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9e51326999eccaabe872d93f35025f76fe1b7fa116ee0e903b008aabc3f1c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b
89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99f4b7e7d6c283d8d774a43140f27cf0418c5922dd1a11fe8d090d4d3de10661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://868a90cf80a3cef69e48a0e930c1fb742df308553d072e9ca12fec310bdce1aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-0
9T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1848f8b358c964159350bf72a27a84316dbc4051b5164004527770877f4e280c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1848f8b358c964159350bf72a27a84316dbc4051b5164004527770877f4e280c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:52Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.044847 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-27hfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0a865e2-8504-473d-a23f-fc682d053a9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e7465beaa6a3df0b0ae7338d3035b58b92078b8d0b9646f5b59eccb66b4a505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-27hfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:52Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.058767 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v58s4" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f68306-ac39-4d61-8c27-12d69cc49a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhg5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhg5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v58s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:52Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:52 crc 
kubenswrapper[4786]: I1209 08:44:52.085056 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:52Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.103534 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50593d0d53a0888b66905e18d3ef9ae44e67a9d290aad760710060fb3ed14393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4ad85cbd79a4967d59f069ebadb27a1cd79229d287c2f1c9955396c71a1506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:52Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.104731 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.104767 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.104776 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.104793 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.104804 4786 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:52Z","lastTransitionTime":"2025-12-09T08:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.122357 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:52Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.139500 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da17081a769df0b66be89013cce4c70d442cfd51b32fdb623bf94052cbcad5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-09T08:44:52Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.154139 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:52Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.169816 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c502d4-7f9e-4d39-a197-fa70dc4a56d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2d2cb4aa779f69cd01c7af8dd377a93fa75bc95ded7acd32a891de283722a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de20e796c231cbc6d525978434351e46d932f7d9
9236a97c078b8147ce5dabd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86k5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:52Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.185910 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbwb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://379f087d607d19a0cb71481b2a7236d2d4769898f612849793c7d0d0f8709e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:21Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbb1
ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbb1ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ec37a618991f1e7e06c04c95db8c40ac18b451bcb08ba568bad540481a740c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ec37a618991f1e7e06c04c95db8c40ac18b451bcb08ba568bad540481a740c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c10eb59172a342201e5c0fae9b836d296608057f6613791cab920df8aaa5850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c10eb59172a342201e5c0fae9b836d296608057f6613791cab920df8aaa5850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbwb7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:52Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.187455 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:44:52 crc kubenswrapper[4786]: E1209 08:44:52.187583 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.199185 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d3d4420-fb67-443a-8b97-195c4cf223e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7aff8b9b062294cc403e12f60e4e694e626aa9780ae652380afc5d5ebeb25e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ce8749c52af3d1b2b156b9adce27a80b026abdb2dc22308199f1e822ec9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77efb0cd063e6d566f75078ce7a1f2cfd31508d5024a101ebcb3aa3c609aaae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac36b7f7721d7c5cdcb9e2254974dab062cca2723d4e723582061ed2696e04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:52Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.207160 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.207206 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.207218 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.207238 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.207251 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:52Z","lastTransitionTime":"2025-12-09T08:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.309354 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.309481 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.309500 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.309826 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.310042 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:52Z","lastTransitionTime":"2025-12-09T08:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.412819 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.412864 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.412879 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.412897 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.412909 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:52Z","lastTransitionTime":"2025-12-09T08:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.515724 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.515777 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.515789 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.515807 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.515819 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:52Z","lastTransitionTime":"2025-12-09T08:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.618851 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.618912 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.618935 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.618964 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.618985 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:52Z","lastTransitionTime":"2025-12-09T08:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.722786 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.722851 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.722870 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.722896 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.722916 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:52Z","lastTransitionTime":"2025-12-09T08:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.826133 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.826205 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.826224 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.826251 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.826271 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:52Z","lastTransitionTime":"2025-12-09T08:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.917931 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7sr4q_c8ebe4be-af09-4f22-9dee-af5f7d34bccf/ovnkube-controller/2.log" Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.924522 4786 scope.go:117] "RemoveContainer" containerID="dcc312d8c35d19e4c5637ba885eb19cce98bd3a39564a6370d26e487c72a6785" Dec 09 08:44:52 crc kubenswrapper[4786]: E1209 08:44:52.924783 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-7sr4q_openshift-ovn-kubernetes(c8ebe4be-af09-4f22-9dee-af5f7d34bccf)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" podUID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.929498 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.929550 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.929584 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.929602 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.929616 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:52Z","lastTransitionTime":"2025-12-09T08:44:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.939029 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v58s4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f68306-ac39-4d61-8c27-12d69cc49a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhg5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhg5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v58s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:52Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:52 crc 
kubenswrapper[4786]: I1209 08:44:52.954195 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:52Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.968341 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50593d0d53a0888b66905e18d3ef9ae44e67a9d290aad760710060fb3ed14393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4ad85cbd79a4967d59f069ebadb27a1cd79229d287c2f1c9955396c71a1506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:52Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.982010 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-27hfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0a865e2-8504-473d-a23f-fc682d053a9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e7465beaa6a3df0b0ae7338d3035b58b92078b8d0b9646f5b59eccb66b4a505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-27hfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:52Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:52 crc kubenswrapper[4786]: I1209 08:44:52.996674 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da17081a769df0b66be89013cce4c70d442cfd51b32fdb623bf94052cbcad5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-12-09T08:44:52Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.016694 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:53Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.029170 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c502d4-7f9e-4d39-a197-fa70dc4a56d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2d2cb4aa779f69cd01c7af8dd377a93fa75bc95ded7acd32a891de283722a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de20e796c231cbc6d525978434351e46d932f7d9
9236a97c078b8147ce5dabd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86k5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:53Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.032677 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.032716 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.032727 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:53 crc 
kubenswrapper[4786]: I1209 08:44:53.032750 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.032766 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:53Z","lastTransitionTime":"2025-12-09T08:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.045394 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbwb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://379f087d607d19a0cb71481b2a7236d2d4769898f612849793c7d0d0f8709e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbb1ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbb1ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ec37a618991f1e7e06c04c95db8c40ac18b451bcb08ba568bad540481a740c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ec37a618991f1e7e06c04c95db8c40ac18b451bcb08ba568bad540481a740c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c10eb59172a342201e5c0fae9b836d296608057f6613791cab920df8aaa5850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c
10eb59172a342201e5c0fae9b836d296608057f6613791cab920df8aaa5850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbwb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:53Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.059516 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d3d4420-fb67-443a-8b97-195c4cf223e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7aff8b9b062294cc403e12f60e4e694e626aa9780ae652380afc5d5ebeb25e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ce8749c52af3d1b2b156b9adce27a80b026abdb2dc22308199f1e822ec9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77efb0cd063e6d566f75078ce7a1f2cfd31508d5024a101ebcb3aa3c609aaae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac36b7f7721d7c5cdcb9e2254974dab062cca2723d4e723582061ed2696e04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:53Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.073939 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:53Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.090195 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4192277cec4bacdf5b03306055ddad9476d291a086f54f61e8e84690c81bb459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:53Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.106506 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-prw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f37b1b5c-1cdc-4a08-9ea3-03dad00b5797\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d772471b770813d3613512f5748b5bbd05c06733053b007756c12b0bc41b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt6g4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-prw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:53Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.117283 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mq8pp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a680f02-0367-4814-9c73-aa5959bf952f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c30204fb4323c7780efec70dabc9074bcda3d7a20c81af161920f5c74075b4b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98fg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mq8pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:53Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.135597 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.135948 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.136014 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.136080 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.136135 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:53Z","lastTransitionTime":"2025-12-09T08:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.137909 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcc312d8c35d19e4c5637ba885eb19cce98bd3a39564a6370d26e487c72a6785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dcc312d8c35d19e4c5637ba885eb19cce98bd3a39564a6370d26e487c72a6785\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T08:44:51Z\\\",\\\"message\\\":\\\"rver-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc006d6f1fb \\\\u003cnil\\\\u003e}] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: kube-apiserver-operator,},ClusterIP:10.217.5.109,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.109],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1209 08:44:51.101360 6732 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handle\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7sr4q_openshift-ovn-kubernetes(c8ebe4be-af09-4f22-9dee-af5f7d34bccf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63
813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7sr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:53Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.151089 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwjmr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"373c7fdc-f660-4323-a503-3e7a0dedb865\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6af8039885b93a07e1e521a89a1d240f914eb77cba9bb6db24d7ecdb1ba3954d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnjzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de62d60183229189c692a0ff01db0e36332ec
de2d2711846121c899befbbde08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnjzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwjmr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:53Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.166101 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd790c5e-461b-4682-88a8-e915bb3558fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56160ed128040dde583c3142616d9af9180427b9e32520d6330fd166e9c4207b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57fe092832b333e8fa6389429ad3fd571c3e0162f619fdd69efcb0b84e88bac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b3741b79ca6fb3fc12acaa290b9d7accb765f36a88f0b8fb6e6362c6596a58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f6f4abb61df9828cb9bec57c2f2fc4709a6cf12afc72467af5466723a4335f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T08:44:14Z\\\"
,\\\"message\\\":\\\"file observer\\\\nW1209 08:44:13.973493 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 08:44:13.973680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 08:44:13.974644 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-881789052/tls.crt::/tmp/serving-cert-881789052/tls.key\\\\\\\"\\\\nI1209 08:44:14.287293 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 08:44:14.289941 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 08:44:14.289961 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 08:44:14.289980 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 08:44:14.289985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 08:44:14.295151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 08:44:14.295345 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1209 08:44:14.295166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1209 08:44:14.295383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 08:44:14.295466 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 08:44:14.295492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 08:44:14.295497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 08:44:14.295501 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1209 08:44:14.297957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a164888526a3ab84fbd409540bc54403fadd586a0583e323d5ca1b10c81cbfc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc97
37159190cf3281bcb4882cec32341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:53Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.179512 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44efb750-799f-44e4-831e-641b81c2f427\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9e51326999eccaabe872d93f35025f76fe1b7fa116ee0e903b008aabc3f1c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99f4b7e7d6c283d8d774a43140f27cf0418c5922dd1a11fe8d090d4d3de10661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://868a90cf80a3cef69e48a0e930c1fb742df308553d072e9ca12fec310bdce1aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1848f8b358c964159350bf72a27a84316dbc4051b5164004527770877f4e280c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://1848f8b358c964159350bf72a27a84316dbc4051b5164004527770877f4e280c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:53Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.187862 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.187970 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v58s4" Dec 09 08:44:53 crc kubenswrapper[4786]: E1209 08:44:53.188013 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.187852 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 08:44:53 crc kubenswrapper[4786]: E1209 08:44:53.188266 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v58s4" podUID="e6f68306-ac39-4d61-8c27-12d69cc49a4f" Dec 09 08:44:53 crc kubenswrapper[4786]: E1209 08:44:53.188338 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.239381 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.239485 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.239496 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.239539 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.239554 4786 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:53Z","lastTransitionTime":"2025-12-09T08:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.343362 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.343415 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.343465 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.343486 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.343498 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:53Z","lastTransitionTime":"2025-12-09T08:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.446609 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.446969 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.447077 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.447180 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.447269 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:53Z","lastTransitionTime":"2025-12-09T08:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.550042 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.550122 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.550159 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.550184 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.550206 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:53Z","lastTransitionTime":"2025-12-09T08:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.653582 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.653650 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.653662 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.653679 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.653691 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:53Z","lastTransitionTime":"2025-12-09T08:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.757076 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.757130 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.757143 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.757167 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.757184 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:53Z","lastTransitionTime":"2025-12-09T08:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.860717 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.860769 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.860780 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.860794 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.860805 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:53Z","lastTransitionTime":"2025-12-09T08:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.964701 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.964768 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.964779 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.964802 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:53 crc kubenswrapper[4786]: I1209 08:44:53.964816 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:53Z","lastTransitionTime":"2025-12-09T08:44:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:54 crc kubenswrapper[4786]: I1209 08:44:54.067798 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:54 crc kubenswrapper[4786]: I1209 08:44:54.067841 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:54 crc kubenswrapper[4786]: I1209 08:44:54.067852 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:54 crc kubenswrapper[4786]: I1209 08:44:54.067869 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:54 crc kubenswrapper[4786]: I1209 08:44:54.067881 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:54Z","lastTransitionTime":"2025-12-09T08:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:54 crc kubenswrapper[4786]: I1209 08:44:54.170477 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:54 crc kubenswrapper[4786]: I1209 08:44:54.170534 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:54 crc kubenswrapper[4786]: I1209 08:44:54.170547 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:54 crc kubenswrapper[4786]: I1209 08:44:54.170565 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:54 crc kubenswrapper[4786]: I1209 08:44:54.170578 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:54Z","lastTransitionTime":"2025-12-09T08:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:54 crc kubenswrapper[4786]: I1209 08:44:54.187935 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:44:54 crc kubenswrapper[4786]: E1209 08:44:54.188126 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 08:44:54 crc kubenswrapper[4786]: I1209 08:44:54.273686 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:54 crc kubenswrapper[4786]: I1209 08:44:54.273761 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:54 crc kubenswrapper[4786]: I1209 08:44:54.273774 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:54 crc kubenswrapper[4786]: I1209 08:44:54.273798 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:54 crc kubenswrapper[4786]: I1209 08:44:54.273811 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:54Z","lastTransitionTime":"2025-12-09T08:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:54 crc kubenswrapper[4786]: I1209 08:44:54.377498 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:54 crc kubenswrapper[4786]: I1209 08:44:54.377552 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:54 crc kubenswrapper[4786]: I1209 08:44:54.377563 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:54 crc kubenswrapper[4786]: I1209 08:44:54.377580 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:54 crc kubenswrapper[4786]: I1209 08:44:54.377591 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:54Z","lastTransitionTime":"2025-12-09T08:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:54 crc kubenswrapper[4786]: I1209 08:44:54.481308 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:54 crc kubenswrapper[4786]: I1209 08:44:54.481363 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:54 crc kubenswrapper[4786]: I1209 08:44:54.481386 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:54 crc kubenswrapper[4786]: I1209 08:44:54.481412 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:54 crc kubenswrapper[4786]: I1209 08:44:54.481487 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:54Z","lastTransitionTime":"2025-12-09T08:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:54 crc kubenswrapper[4786]: I1209 08:44:54.585566 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:54 crc kubenswrapper[4786]: I1209 08:44:54.585624 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:54 crc kubenswrapper[4786]: I1209 08:44:54.585635 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:54 crc kubenswrapper[4786]: I1209 08:44:54.585656 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:54 crc kubenswrapper[4786]: I1209 08:44:54.585669 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:54Z","lastTransitionTime":"2025-12-09T08:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:54 crc kubenswrapper[4786]: I1209 08:44:54.689726 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:54 crc kubenswrapper[4786]: I1209 08:44:54.689777 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:54 crc kubenswrapper[4786]: I1209 08:44:54.689789 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:54 crc kubenswrapper[4786]: I1209 08:44:54.689809 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:54 crc kubenswrapper[4786]: I1209 08:44:54.689825 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:54Z","lastTransitionTime":"2025-12-09T08:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:54 crc kubenswrapper[4786]: I1209 08:44:54.792667 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:54 crc kubenswrapper[4786]: I1209 08:44:54.792719 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:54 crc kubenswrapper[4786]: I1209 08:44:54.792732 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:54 crc kubenswrapper[4786]: I1209 08:44:54.792754 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:54 crc kubenswrapper[4786]: I1209 08:44:54.792766 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:54Z","lastTransitionTime":"2025-12-09T08:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:54 crc kubenswrapper[4786]: I1209 08:44:54.895783 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:54 crc kubenswrapper[4786]: I1209 08:44:54.895910 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:54 crc kubenswrapper[4786]: I1209 08:44:54.895925 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:54 crc kubenswrapper[4786]: I1209 08:44:54.895948 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:54 crc kubenswrapper[4786]: I1209 08:44:54.895960 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:54Z","lastTransitionTime":"2025-12-09T08:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:54 crc kubenswrapper[4786]: I1209 08:44:54.998792 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:54 crc kubenswrapper[4786]: I1209 08:44:54.998855 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:54 crc kubenswrapper[4786]: I1209 08:44:54.998873 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:54 crc kubenswrapper[4786]: I1209 08:44:54.998896 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:54 crc kubenswrapper[4786]: I1209 08:44:54.998912 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:54Z","lastTransitionTime":"2025-12-09T08:44:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.102692 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.102761 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.102776 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.102802 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.102817 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:55Z","lastTransitionTime":"2025-12-09T08:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.187788 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.187857 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.187919 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-v58s4" Dec 09 08:44:55 crc kubenswrapper[4786]: E1209 08:44:55.187974 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 08:44:55 crc kubenswrapper[4786]: E1209 08:44:55.188114 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v58s4" podUID="e6f68306-ac39-4d61-8c27-12d69cc49a4f" Dec 09 08:44:55 crc kubenswrapper[4786]: E1209 08:44:55.188267 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.205997 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:55Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.206787 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.206836 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.206847 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.206867 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.206879 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:55Z","lastTransitionTime":"2025-12-09T08:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.218703 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50593d0d53a0888b66905e18d3ef9ae44e67a9d290aad760710060fb3ed14393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://3e4ad85cbd79a4967d59f069ebadb27a1cd79229d287c2f1c9955396c71a1506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:55Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.232539 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-27hfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0a865e2-8504-473d-a23f-fc682d053a9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e7465beaa6a3df0b0ae7338d3035b58b92078b8d0b9646f5b59eccb66b4a505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-27hfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:55Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.244761 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v58s4" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f68306-ac39-4d61-8c27-12d69cc49a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhg5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhg5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v58s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:55Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:55 crc 
kubenswrapper[4786]: I1209 08:44:55.262228 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d3d4420-fb67-443a-8b97-195c4cf223e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7aff8b9b062294cc403e12f60e4e694e626aa9780ae652380afc5d5ebeb25e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ce8749c52af3d1b2b156b9adce27a80b026abdb2dc22308199f1e822ec9867\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77efb0cd063e6d566f75078ce7a1f2cfd31508d5024a101ebcb3aa3c609aaae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac36b7f7721d7c5cdcb9e2254974dab062cca2723d4e723582061ed2696e04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:55Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.279134 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:55Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.294690 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da17081a769df0b66be89013cce4c70d442cfd51b32fdb623bf94052cbcad5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-09T08:44:55Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.308974 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:55Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.310240 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.310277 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.310288 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.310308 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.310319 4786 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:55Z","lastTransitionTime":"2025-12-09T08:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.321905 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c502d4-7f9e-4d39-a197-fa70dc4a56d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2d2cb4aa779f69cd01c7af8dd377a93fa75bc95ded7acd32a891de283722a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de20e796c231cbc6d525978434351e46d932f7d99236a97c078b8147ce5dabd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86k5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-12-09T08:44:55Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.337489 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbwb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://379f087d607d19a0cb71481b2a7236d2d4769898f612849793c7d0d0f8709e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true
,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"
terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":tr
ue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbb1ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbb1ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ec37a618991f1e7e06c04c95db8c40ac18b451bcb08ba568bad540481a740c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,
\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ec37a618991f1e7e06c04c95db8c40ac18b451bcb08ba568bad540481a740c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c10eb59172a342201e5c0fae9b836d296608057f6613791cab920df8aaa5850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c10eb59172a342201e5c0fae9b836d296608057f6613791cab920df8aaa5850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbwb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:55Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.351272 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4192277cec4bacdf5b03306055ddad9476d291a086f54f61e8e84690c81bb459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"starte
dAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:55Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.361981 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-prw2c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f37b1b5c-1cdc-4a08-9ea3-03dad00b5797\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d772471b770813d3613512f5748b5bbd05c06733053b007756c12b0bc41b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt6g4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-prw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:55Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.378553 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd790c5e-461b-4682-88a8-e915bb3558fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56160ed128040dde583c3142616d9af9180427b9e32520d6330fd166e9c4207b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57fe092832b333e8fa6389429ad3fd571c3e0162f619fdd69efcb0b84e88bac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b3741b79ca6fb3fc12acaa290b9d7accb765f36a88f0b8fb6e6362c6596a58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f6f4abb61df9828cb9bec57c2f2fc4709a6cf12afc72467af5466723a4335f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T08:44:14Z\\\",\\\"message\\\":\\\"file observer\\\\nW1209 08:44:13.973493 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 08:44:13.973680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 08:44:13.974644 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-881789052/tls.crt::/tmp/serving-cert-881789052/tls.key\\\\\\\"\\\\nI1209 08:44:14.287293 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 08:44:14.289941 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 08:44:14.289961 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 08:44:14.289980 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 08:44:14.289985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 08:44:14.295151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 08:44:14.295345 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1209 08:44:14.295166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1209 08:44:14.295383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 08:44:14.295466 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 08:44:14.295492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 08:44:14.295497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 08:44:14.295501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1209 08:44:14.297957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a164888526a3ab84fbd409540bc54403fadd586a0583e323d5ca1b10c81cbfc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T0
8:43:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:55Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.394868 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44efb750-799f-44e4-831e-641b81c2f427\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9e51326999eccaabe872d93f35025f76fe1b7fa116ee0e903b008aabc3f1c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99f4b7e7d6c283d8d774a43140f27cf0418c5922dd1a11fe8d090d4d3de10661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://868a90cf80a3cef69e48a0e930c1fb742df308553d072e9ca12fec310bdce1aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1848f8b358c964159350bf72a27a84316dbc4051b5164004527770877f4e280c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://1848f8b358c964159350bf72a27a84316dbc4051b5164004527770877f4e280c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:55Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.406024 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mq8pp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a680f02-0367-4814-9c73-aa5959bf952f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c30204fb
4323c7780efec70dabc9074bcda3d7a20c81af161920f5c74075b4b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98fg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mq8pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:55Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.414340 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.414379 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.414452 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:55 crc 
kubenswrapper[4786]: I1209 08:44:55.414474 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.414488 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:55Z","lastTransitionTime":"2025-12-09T08:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.425248 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcc312d8c35d19e4c5637ba885eb19cce98bd3a39564a6370d26e487c72a6785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dcc312d8c35d19e4c5637ba885eb19cce98bd3a39564a6370d26e487c72a6785\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T08:44:51Z\\\",\\\"message\\\":\\\"rver-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc006d6f1fb \\\\u003cnil\\\\u003e}] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: kube-apiserver-operator,},ClusterIP:10.217.5.109,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.109],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1209 08:44:51.101360 6732 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handle\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7sr4q_openshift-ovn-kubernetes(c8ebe4be-af09-4f22-9dee-af5f7d34bccf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63
813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7sr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:55Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.438573 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwjmr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"373c7fdc-f660-4323-a503-3e7a0dedb865\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6af8039885b93a07e1e521a89a1d240f914eb77cba9bb6db24d7ecdb1ba3954d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnjzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de62d60183229189c692a0ff01db0e36332ec
de2d2711846121c899befbbde08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnjzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwjmr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:55Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.517289 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.517356 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.517366 4786 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.517386 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.517397 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:55Z","lastTransitionTime":"2025-12-09T08:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.620201 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.620982 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.621073 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.621157 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.621232 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:55Z","lastTransitionTime":"2025-12-09T08:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.725233 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.725300 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.725309 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.725332 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.725361 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:55Z","lastTransitionTime":"2025-12-09T08:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.828833 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.829247 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.829375 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.829584 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.829800 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:55Z","lastTransitionTime":"2025-12-09T08:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.932658 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.932725 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.932743 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.932768 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:55 crc kubenswrapper[4786]: I1209 08:44:55.932783 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:55Z","lastTransitionTime":"2025-12-09T08:44:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:56 crc kubenswrapper[4786]: I1209 08:44:56.036716 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:56 crc kubenswrapper[4786]: I1209 08:44:56.036770 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:56 crc kubenswrapper[4786]: I1209 08:44:56.036800 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:56 crc kubenswrapper[4786]: I1209 08:44:56.036816 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:56 crc kubenswrapper[4786]: I1209 08:44:56.036825 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:56Z","lastTransitionTime":"2025-12-09T08:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:56 crc kubenswrapper[4786]: I1209 08:44:56.139640 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:56 crc kubenswrapper[4786]: I1209 08:44:56.139709 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:56 crc kubenswrapper[4786]: I1209 08:44:56.139725 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:56 crc kubenswrapper[4786]: I1209 08:44:56.139750 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:56 crc kubenswrapper[4786]: I1209 08:44:56.139768 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:56Z","lastTransitionTime":"2025-12-09T08:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:56 crc kubenswrapper[4786]: I1209 08:44:56.188050 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:44:56 crc kubenswrapper[4786]: E1209 08:44:56.188256 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 08:44:56 crc kubenswrapper[4786]: I1209 08:44:56.243734 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:56 crc kubenswrapper[4786]: I1209 08:44:56.243799 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:56 crc kubenswrapper[4786]: I1209 08:44:56.243817 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:56 crc kubenswrapper[4786]: I1209 08:44:56.243841 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:56 crc kubenswrapper[4786]: I1209 08:44:56.243856 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:56Z","lastTransitionTime":"2025-12-09T08:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:56 crc kubenswrapper[4786]: I1209 08:44:56.347230 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:56 crc kubenswrapper[4786]: I1209 08:44:56.347286 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:56 crc kubenswrapper[4786]: I1209 08:44:56.347298 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:56 crc kubenswrapper[4786]: I1209 08:44:56.347319 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:56 crc kubenswrapper[4786]: I1209 08:44:56.347332 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:56Z","lastTransitionTime":"2025-12-09T08:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:56 crc kubenswrapper[4786]: I1209 08:44:56.451396 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:56 crc kubenswrapper[4786]: I1209 08:44:56.451482 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:56 crc kubenswrapper[4786]: I1209 08:44:56.451500 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:56 crc kubenswrapper[4786]: I1209 08:44:56.451521 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:56 crc kubenswrapper[4786]: I1209 08:44:56.451535 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:56Z","lastTransitionTime":"2025-12-09T08:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:56 crc kubenswrapper[4786]: I1209 08:44:56.554210 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:56 crc kubenswrapper[4786]: I1209 08:44:56.554256 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:56 crc kubenswrapper[4786]: I1209 08:44:56.554266 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:56 crc kubenswrapper[4786]: I1209 08:44:56.554321 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:56 crc kubenswrapper[4786]: I1209 08:44:56.554331 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:56Z","lastTransitionTime":"2025-12-09T08:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:56 crc kubenswrapper[4786]: I1209 08:44:56.657244 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:56 crc kubenswrapper[4786]: I1209 08:44:56.657945 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:56 crc kubenswrapper[4786]: I1209 08:44:56.659075 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:56 crc kubenswrapper[4786]: I1209 08:44:56.659278 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:56 crc kubenswrapper[4786]: I1209 08:44:56.659370 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:56Z","lastTransitionTime":"2025-12-09T08:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:56 crc kubenswrapper[4786]: I1209 08:44:56.762857 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:56 crc kubenswrapper[4786]: I1209 08:44:56.763190 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:56 crc kubenswrapper[4786]: I1209 08:44:56.763280 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:56 crc kubenswrapper[4786]: I1209 08:44:56.763374 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:56 crc kubenswrapper[4786]: I1209 08:44:56.763479 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:56Z","lastTransitionTime":"2025-12-09T08:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:56 crc kubenswrapper[4786]: I1209 08:44:56.867880 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:56 crc kubenswrapper[4786]: I1209 08:44:56.868238 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:56 crc kubenswrapper[4786]: I1209 08:44:56.868392 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:56 crc kubenswrapper[4786]: I1209 08:44:56.868630 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:56 crc kubenswrapper[4786]: I1209 08:44:56.868733 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:56Z","lastTransitionTime":"2025-12-09T08:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:56 crc kubenswrapper[4786]: I1209 08:44:56.971864 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:56 crc kubenswrapper[4786]: I1209 08:44:56.971915 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:56 crc kubenswrapper[4786]: I1209 08:44:56.971925 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:56 crc kubenswrapper[4786]: I1209 08:44:56.971945 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:56 crc kubenswrapper[4786]: I1209 08:44:56.971958 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:56Z","lastTransitionTime":"2025-12-09T08:44:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:57 crc kubenswrapper[4786]: I1209 08:44:57.074971 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:57 crc kubenswrapper[4786]: I1209 08:44:57.075014 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:57 crc kubenswrapper[4786]: I1209 08:44:57.075025 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:57 crc kubenswrapper[4786]: I1209 08:44:57.075041 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:57 crc kubenswrapper[4786]: I1209 08:44:57.075053 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:57Z","lastTransitionTime":"2025-12-09T08:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:57 crc kubenswrapper[4786]: I1209 08:44:57.177920 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:57 crc kubenswrapper[4786]: I1209 08:44:57.177963 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:57 crc kubenswrapper[4786]: I1209 08:44:57.177973 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:57 crc kubenswrapper[4786]: I1209 08:44:57.177992 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:57 crc kubenswrapper[4786]: I1209 08:44:57.178003 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:57Z","lastTransitionTime":"2025-12-09T08:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:57 crc kubenswrapper[4786]: I1209 08:44:57.187350 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 08:44:57 crc kubenswrapper[4786]: I1209 08:44:57.187444 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v58s4" Dec 09 08:44:57 crc kubenswrapper[4786]: I1209 08:44:57.187497 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 08:44:57 crc kubenswrapper[4786]: E1209 08:44:57.187591 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 08:44:57 crc kubenswrapper[4786]: E1209 08:44:57.187697 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 08:44:57 crc kubenswrapper[4786]: E1209 08:44:57.187820 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v58s4" podUID="e6f68306-ac39-4d61-8c27-12d69cc49a4f" Dec 09 08:44:57 crc kubenswrapper[4786]: I1209 08:44:57.280824 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:57 crc kubenswrapper[4786]: I1209 08:44:57.280875 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:57 crc kubenswrapper[4786]: I1209 08:44:57.280888 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:57 crc kubenswrapper[4786]: I1209 08:44:57.280912 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:57 crc kubenswrapper[4786]: I1209 08:44:57.280928 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:57Z","lastTransitionTime":"2025-12-09T08:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:57 crc kubenswrapper[4786]: I1209 08:44:57.383488 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:57 crc kubenswrapper[4786]: I1209 08:44:57.383535 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:57 crc kubenswrapper[4786]: I1209 08:44:57.383545 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:57 crc kubenswrapper[4786]: I1209 08:44:57.383564 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:57 crc kubenswrapper[4786]: I1209 08:44:57.383576 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:57Z","lastTransitionTime":"2025-12-09T08:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:57 crc kubenswrapper[4786]: I1209 08:44:57.486967 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:57 crc kubenswrapper[4786]: I1209 08:44:57.487032 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:57 crc kubenswrapper[4786]: I1209 08:44:57.487045 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:57 crc kubenswrapper[4786]: I1209 08:44:57.487065 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:57 crc kubenswrapper[4786]: I1209 08:44:57.487077 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:57Z","lastTransitionTime":"2025-12-09T08:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:57 crc kubenswrapper[4786]: I1209 08:44:57.590215 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:57 crc kubenswrapper[4786]: I1209 08:44:57.590274 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:57 crc kubenswrapper[4786]: I1209 08:44:57.590286 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:57 crc kubenswrapper[4786]: I1209 08:44:57.590303 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:57 crc kubenswrapper[4786]: I1209 08:44:57.590313 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:57Z","lastTransitionTime":"2025-12-09T08:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:57 crc kubenswrapper[4786]: I1209 08:44:57.694025 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:57 crc kubenswrapper[4786]: I1209 08:44:57.694719 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:57 crc kubenswrapper[4786]: I1209 08:44:57.694757 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:57 crc kubenswrapper[4786]: I1209 08:44:57.694785 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:57 crc kubenswrapper[4786]: I1209 08:44:57.694803 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:57Z","lastTransitionTime":"2025-12-09T08:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:57 crc kubenswrapper[4786]: I1209 08:44:57.798473 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:57 crc kubenswrapper[4786]: I1209 08:44:57.798543 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:57 crc kubenswrapper[4786]: I1209 08:44:57.798560 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:57 crc kubenswrapper[4786]: I1209 08:44:57.798587 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:57 crc kubenswrapper[4786]: I1209 08:44:57.798603 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:57Z","lastTransitionTime":"2025-12-09T08:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:57 crc kubenswrapper[4786]: I1209 08:44:57.902187 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:57 crc kubenswrapper[4786]: I1209 08:44:57.902261 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:57 crc kubenswrapper[4786]: I1209 08:44:57.902274 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:57 crc kubenswrapper[4786]: I1209 08:44:57.902300 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:57 crc kubenswrapper[4786]: I1209 08:44:57.902316 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:57Z","lastTransitionTime":"2025-12-09T08:44:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:58 crc kubenswrapper[4786]: I1209 08:44:58.005194 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:58 crc kubenswrapper[4786]: I1209 08:44:58.005660 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:58 crc kubenswrapper[4786]: I1209 08:44:58.005721 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:58 crc kubenswrapper[4786]: I1209 08:44:58.005764 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:58 crc kubenswrapper[4786]: I1209 08:44:58.005780 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:58Z","lastTransitionTime":"2025-12-09T08:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:58 crc kubenswrapper[4786]: I1209 08:44:58.111604 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:58 crc kubenswrapper[4786]: I1209 08:44:58.112016 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:58 crc kubenswrapper[4786]: I1209 08:44:58.112037 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:58 crc kubenswrapper[4786]: I1209 08:44:58.112165 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:58 crc kubenswrapper[4786]: I1209 08:44:58.112180 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:58Z","lastTransitionTime":"2025-12-09T08:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:58 crc kubenswrapper[4786]: I1209 08:44:58.187501 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:44:58 crc kubenswrapper[4786]: E1209 08:44:58.187694 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 08:44:58 crc kubenswrapper[4786]: I1209 08:44:58.215657 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:58 crc kubenswrapper[4786]: I1209 08:44:58.215707 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:58 crc kubenswrapper[4786]: I1209 08:44:58.215724 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:58 crc kubenswrapper[4786]: I1209 08:44:58.215744 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:58 crc kubenswrapper[4786]: I1209 08:44:58.215756 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:58Z","lastTransitionTime":"2025-12-09T08:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:58 crc kubenswrapper[4786]: I1209 08:44:58.319627 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:58 crc kubenswrapper[4786]: I1209 08:44:58.319686 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:58 crc kubenswrapper[4786]: I1209 08:44:58.319698 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:58 crc kubenswrapper[4786]: I1209 08:44:58.319724 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:58 crc kubenswrapper[4786]: I1209 08:44:58.319737 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:58Z","lastTransitionTime":"2025-12-09T08:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:58 crc kubenswrapper[4786]: I1209 08:44:58.427156 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:58 crc kubenswrapper[4786]: I1209 08:44:58.427233 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:58 crc kubenswrapper[4786]: I1209 08:44:58.427250 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:58 crc kubenswrapper[4786]: I1209 08:44:58.427272 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:58 crc kubenswrapper[4786]: I1209 08:44:58.427288 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:58Z","lastTransitionTime":"2025-12-09T08:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:58 crc kubenswrapper[4786]: I1209 08:44:58.531102 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:58 crc kubenswrapper[4786]: I1209 08:44:58.531142 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:58 crc kubenswrapper[4786]: I1209 08:44:58.531153 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:58 crc kubenswrapper[4786]: I1209 08:44:58.531169 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:58 crc kubenswrapper[4786]: I1209 08:44:58.531182 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:58Z","lastTransitionTime":"2025-12-09T08:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:58 crc kubenswrapper[4786]: I1209 08:44:58.634348 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:58 crc kubenswrapper[4786]: I1209 08:44:58.634386 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:58 crc kubenswrapper[4786]: I1209 08:44:58.634398 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:58 crc kubenswrapper[4786]: I1209 08:44:58.634417 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:58 crc kubenswrapper[4786]: I1209 08:44:58.634458 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:58Z","lastTransitionTime":"2025-12-09T08:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:58 crc kubenswrapper[4786]: I1209 08:44:58.737623 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:58 crc kubenswrapper[4786]: I1209 08:44:58.737686 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:58 crc kubenswrapper[4786]: I1209 08:44:58.737698 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:58 crc kubenswrapper[4786]: I1209 08:44:58.737721 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:58 crc kubenswrapper[4786]: I1209 08:44:58.737733 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:58Z","lastTransitionTime":"2025-12-09T08:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:58 crc kubenswrapper[4786]: I1209 08:44:58.840987 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:58 crc kubenswrapper[4786]: I1209 08:44:58.841099 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:58 crc kubenswrapper[4786]: I1209 08:44:58.841113 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:58 crc kubenswrapper[4786]: I1209 08:44:58.841138 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:58 crc kubenswrapper[4786]: I1209 08:44:58.841154 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:58Z","lastTransitionTime":"2025-12-09T08:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:58 crc kubenswrapper[4786]: I1209 08:44:58.944378 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:58 crc kubenswrapper[4786]: I1209 08:44:58.944500 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:58 crc kubenswrapper[4786]: I1209 08:44:58.944529 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:58 crc kubenswrapper[4786]: I1209 08:44:58.944567 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:58 crc kubenswrapper[4786]: I1209 08:44:58.944588 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:58Z","lastTransitionTime":"2025-12-09T08:44:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.048759 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.048815 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.048826 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.048873 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.048887 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:59Z","lastTransitionTime":"2025-12-09T08:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.152418 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.152658 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.152697 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.152731 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.152752 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:59Z","lastTransitionTime":"2025-12-09T08:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.187449 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.187522 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v58s4" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.187563 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 08:44:59 crc kubenswrapper[4786]: E1209 08:44:59.187618 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 08:44:59 crc kubenswrapper[4786]: E1209 08:44:59.187721 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 08:44:59 crc kubenswrapper[4786]: E1209 08:44:59.187786 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v58s4" podUID="e6f68306-ac39-4d61-8c27-12d69cc49a4f" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.255581 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.255653 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.255669 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.255733 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.255753 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:59Z","lastTransitionTime":"2025-12-09T08:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.358548 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.358598 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.358611 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.358629 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.358644 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:59Z","lastTransitionTime":"2025-12-09T08:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.375896 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.375926 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.375935 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.375948 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.375956 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:59Z","lastTransitionTime":"2025-12-09T08:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:59 crc kubenswrapper[4786]: E1209 08:44:59.394352 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"20132b6b-eea8-47f7-95fa-f658d05fe362\\\",\\\"systemUUID\\\":\\\"d02a19a2-cd20-41fc-84e9-65362968df1f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:59Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.398340 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.398370 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.398379 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.398391 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.398400 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:59Z","lastTransitionTime":"2025-12-09T08:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:59 crc kubenswrapper[4786]: E1209 08:44:59.413011 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"20132b6b-eea8-47f7-95fa-f658d05fe362\\\",\\\"systemUUID\\\":\\\"d02a19a2-cd20-41fc-84e9-65362968df1f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:59Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.417129 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.417192 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.417207 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.417230 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.417247 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:59Z","lastTransitionTime":"2025-12-09T08:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:59 crc kubenswrapper[4786]: E1209 08:44:59.445585 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"20132b6b-eea8-47f7-95fa-f658d05fe362\\\",\\\"systemUUID\\\":\\\"d02a19a2-cd20-41fc-84e9-65362968df1f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:59Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.449383 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.449512 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.449539 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.449571 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.449589 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:59Z","lastTransitionTime":"2025-12-09T08:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:59 crc kubenswrapper[4786]: E1209 08:44:59.461804 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:44:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"20132b6b-eea8-47f7-95fa-f658d05fe362\\\",\\\"systemUUID\\\":\\\"d02a19a2-cd20-41fc-84e9-65362968df1f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:44:59Z is after 2025-08-24T17:21:41Z" Dec 09 08:44:59 crc kubenswrapper[4786]: E1209 08:44:59.461964 4786 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.469124 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.469218 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.469239 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.469265 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.469374 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:59Z","lastTransitionTime":"2025-12-09T08:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.572641 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.572686 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.572696 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.572712 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.572722 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:59Z","lastTransitionTime":"2025-12-09T08:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.674831 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.674866 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.674875 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.674889 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.674898 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:59Z","lastTransitionTime":"2025-12-09T08:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.778176 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.778235 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.778247 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.778268 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.778282 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:59Z","lastTransitionTime":"2025-12-09T08:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.880946 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.881030 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.881053 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.881085 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.881109 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:59Z","lastTransitionTime":"2025-12-09T08:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.983448 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.983498 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.983514 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.983530 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:44:59 crc kubenswrapper[4786]: I1209 08:44:59.983539 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:44:59Z","lastTransitionTime":"2025-12-09T08:44:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:00 crc kubenswrapper[4786]: I1209 08:45:00.086893 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:00 crc kubenswrapper[4786]: I1209 08:45:00.086939 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:00 crc kubenswrapper[4786]: I1209 08:45:00.086950 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:00 crc kubenswrapper[4786]: I1209 08:45:00.086968 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:00 crc kubenswrapper[4786]: I1209 08:45:00.086978 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:00Z","lastTransitionTime":"2025-12-09T08:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:00 crc kubenswrapper[4786]: I1209 08:45:00.188063 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:45:00 crc kubenswrapper[4786]: E1209 08:45:00.188260 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 08:45:00 crc kubenswrapper[4786]: I1209 08:45:00.190331 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:00 crc kubenswrapper[4786]: I1209 08:45:00.190380 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:00 crc kubenswrapper[4786]: I1209 08:45:00.190390 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:00 crc kubenswrapper[4786]: I1209 08:45:00.190463 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:00 crc kubenswrapper[4786]: I1209 08:45:00.190474 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:00Z","lastTransitionTime":"2025-12-09T08:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:00 crc kubenswrapper[4786]: I1209 08:45:00.293093 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:00 crc kubenswrapper[4786]: I1209 08:45:00.293166 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:00 crc kubenswrapper[4786]: I1209 08:45:00.293185 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:00 crc kubenswrapper[4786]: I1209 08:45:00.293212 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:00 crc kubenswrapper[4786]: I1209 08:45:00.293228 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:00Z","lastTransitionTime":"2025-12-09T08:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:00 crc kubenswrapper[4786]: I1209 08:45:00.395968 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:00 crc kubenswrapper[4786]: I1209 08:45:00.396040 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:00 crc kubenswrapper[4786]: I1209 08:45:00.396064 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:00 crc kubenswrapper[4786]: I1209 08:45:00.396088 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:00 crc kubenswrapper[4786]: I1209 08:45:00.396104 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:00Z","lastTransitionTime":"2025-12-09T08:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:00 crc kubenswrapper[4786]: I1209 08:45:00.503624 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:00 crc kubenswrapper[4786]: I1209 08:45:00.503673 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:00 crc kubenswrapper[4786]: I1209 08:45:00.503683 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:00 crc kubenswrapper[4786]: I1209 08:45:00.503705 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:00 crc kubenswrapper[4786]: I1209 08:45:00.503715 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:00Z","lastTransitionTime":"2025-12-09T08:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:00 crc kubenswrapper[4786]: I1209 08:45:00.606766 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:00 crc kubenswrapper[4786]: I1209 08:45:00.606823 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:00 crc kubenswrapper[4786]: I1209 08:45:00.606837 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:00 crc kubenswrapper[4786]: I1209 08:45:00.606859 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:00 crc kubenswrapper[4786]: I1209 08:45:00.606873 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:00Z","lastTransitionTime":"2025-12-09T08:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:00 crc kubenswrapper[4786]: I1209 08:45:00.709271 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:00 crc kubenswrapper[4786]: I1209 08:45:00.709310 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:00 crc kubenswrapper[4786]: I1209 08:45:00.709318 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:00 crc kubenswrapper[4786]: I1209 08:45:00.709335 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:00 crc kubenswrapper[4786]: I1209 08:45:00.709344 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:00Z","lastTransitionTime":"2025-12-09T08:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:00 crc kubenswrapper[4786]: I1209 08:45:00.812160 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:00 crc kubenswrapper[4786]: I1209 08:45:00.812198 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:00 crc kubenswrapper[4786]: I1209 08:45:00.812208 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:00 crc kubenswrapper[4786]: I1209 08:45:00.812240 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:00 crc kubenswrapper[4786]: I1209 08:45:00.812251 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:00Z","lastTransitionTime":"2025-12-09T08:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:00 crc kubenswrapper[4786]: I1209 08:45:00.915544 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:00 crc kubenswrapper[4786]: I1209 08:45:00.915677 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:00 crc kubenswrapper[4786]: I1209 08:45:00.915694 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:00 crc kubenswrapper[4786]: I1209 08:45:00.915717 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:00 crc kubenswrapper[4786]: I1209 08:45:00.915727 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:00Z","lastTransitionTime":"2025-12-09T08:45:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:01 crc kubenswrapper[4786]: I1209 08:45:01.018596 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:01 crc kubenswrapper[4786]: I1209 08:45:01.018646 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:01 crc kubenswrapper[4786]: I1209 08:45:01.018657 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:01 crc kubenswrapper[4786]: I1209 08:45:01.018674 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:01 crc kubenswrapper[4786]: I1209 08:45:01.018685 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:01Z","lastTransitionTime":"2025-12-09T08:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:01 crc kubenswrapper[4786]: I1209 08:45:01.121352 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:01 crc kubenswrapper[4786]: I1209 08:45:01.121396 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:01 crc kubenswrapper[4786]: I1209 08:45:01.121408 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:01 crc kubenswrapper[4786]: I1209 08:45:01.121459 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:01 crc kubenswrapper[4786]: I1209 08:45:01.121472 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:01Z","lastTransitionTime":"2025-12-09T08:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:01 crc kubenswrapper[4786]: I1209 08:45:01.187283 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 08:45:01 crc kubenswrapper[4786]: I1209 08:45:01.187283 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 08:45:01 crc kubenswrapper[4786]: E1209 08:45:01.187509 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 08:45:01 crc kubenswrapper[4786]: E1209 08:45:01.187552 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 08:45:01 crc kubenswrapper[4786]: I1209 08:45:01.187308 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v58s4" Dec 09 08:45:01 crc kubenswrapper[4786]: E1209 08:45:01.187643 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v58s4" podUID="e6f68306-ac39-4d61-8c27-12d69cc49a4f" Dec 09 08:45:01 crc kubenswrapper[4786]: I1209 08:45:01.225304 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:01 crc kubenswrapper[4786]: I1209 08:45:01.225358 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:01 crc kubenswrapper[4786]: I1209 08:45:01.225376 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:01 crc kubenswrapper[4786]: I1209 08:45:01.225395 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:01 crc kubenswrapper[4786]: I1209 08:45:01.225407 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:01Z","lastTransitionTime":"2025-12-09T08:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:01 crc kubenswrapper[4786]: I1209 08:45:01.328023 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:01 crc kubenswrapper[4786]: I1209 08:45:01.328084 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:01 crc kubenswrapper[4786]: I1209 08:45:01.328099 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:01 crc kubenswrapper[4786]: I1209 08:45:01.328120 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:01 crc kubenswrapper[4786]: I1209 08:45:01.328137 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:01Z","lastTransitionTime":"2025-12-09T08:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:01 crc kubenswrapper[4786]: I1209 08:45:01.431600 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:01 crc kubenswrapper[4786]: I1209 08:45:01.431674 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:01 crc kubenswrapper[4786]: I1209 08:45:01.431698 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:01 crc kubenswrapper[4786]: I1209 08:45:01.431728 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:01 crc kubenswrapper[4786]: I1209 08:45:01.431751 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:01Z","lastTransitionTime":"2025-12-09T08:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:01 crc kubenswrapper[4786]: I1209 08:45:01.534657 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:01 crc kubenswrapper[4786]: I1209 08:45:01.534751 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:01 crc kubenswrapper[4786]: I1209 08:45:01.534791 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:01 crc kubenswrapper[4786]: I1209 08:45:01.534826 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:01 crc kubenswrapper[4786]: I1209 08:45:01.534848 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:01Z","lastTransitionTime":"2025-12-09T08:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:01 crc kubenswrapper[4786]: I1209 08:45:01.638720 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:01 crc kubenswrapper[4786]: I1209 08:45:01.638778 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:01 crc kubenswrapper[4786]: I1209 08:45:01.638791 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:01 crc kubenswrapper[4786]: I1209 08:45:01.638810 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:01 crc kubenswrapper[4786]: I1209 08:45:01.638824 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:01Z","lastTransitionTime":"2025-12-09T08:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:01 crc kubenswrapper[4786]: I1209 08:45:01.741213 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:01 crc kubenswrapper[4786]: I1209 08:45:01.741272 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:01 crc kubenswrapper[4786]: I1209 08:45:01.741283 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:01 crc kubenswrapper[4786]: I1209 08:45:01.741306 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:01 crc kubenswrapper[4786]: I1209 08:45:01.741320 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:01Z","lastTransitionTime":"2025-12-09T08:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:01 crc kubenswrapper[4786]: I1209 08:45:01.844302 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:01 crc kubenswrapper[4786]: I1209 08:45:01.844352 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:01 crc kubenswrapper[4786]: I1209 08:45:01.844361 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:01 crc kubenswrapper[4786]: I1209 08:45:01.844378 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:01 crc kubenswrapper[4786]: I1209 08:45:01.844389 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:01Z","lastTransitionTime":"2025-12-09T08:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:01 crc kubenswrapper[4786]: I1209 08:45:01.947810 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:01 crc kubenswrapper[4786]: I1209 08:45:01.947856 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:01 crc kubenswrapper[4786]: I1209 08:45:01.947867 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:01 crc kubenswrapper[4786]: I1209 08:45:01.947884 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:01 crc kubenswrapper[4786]: I1209 08:45:01.947895 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:01Z","lastTransitionTime":"2025-12-09T08:45:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:02 crc kubenswrapper[4786]: I1209 08:45:02.050736 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:02 crc kubenswrapper[4786]: I1209 08:45:02.050822 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:02 crc kubenswrapper[4786]: I1209 08:45:02.050861 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:02 crc kubenswrapper[4786]: I1209 08:45:02.050910 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:02 crc kubenswrapper[4786]: I1209 08:45:02.050929 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:02Z","lastTransitionTime":"2025-12-09T08:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:02 crc kubenswrapper[4786]: I1209 08:45:02.153789 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:02 crc kubenswrapper[4786]: I1209 08:45:02.153855 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:02 crc kubenswrapper[4786]: I1209 08:45:02.153870 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:02 crc kubenswrapper[4786]: I1209 08:45:02.153893 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:02 crc kubenswrapper[4786]: I1209 08:45:02.153908 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:02Z","lastTransitionTime":"2025-12-09T08:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:02 crc kubenswrapper[4786]: I1209 08:45:02.187939 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:45:02 crc kubenswrapper[4786]: E1209 08:45:02.188223 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 08:45:02 crc kubenswrapper[4786]: I1209 08:45:02.257300 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:02 crc kubenswrapper[4786]: I1209 08:45:02.257341 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:02 crc kubenswrapper[4786]: I1209 08:45:02.257352 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:02 crc kubenswrapper[4786]: I1209 08:45:02.257367 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:02 crc kubenswrapper[4786]: I1209 08:45:02.257378 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:02Z","lastTransitionTime":"2025-12-09T08:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:02 crc kubenswrapper[4786]: I1209 08:45:02.360261 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:02 crc kubenswrapper[4786]: I1209 08:45:02.360321 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:02 crc kubenswrapper[4786]: I1209 08:45:02.360330 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:02 crc kubenswrapper[4786]: I1209 08:45:02.360347 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:02 crc kubenswrapper[4786]: I1209 08:45:02.360361 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:02Z","lastTransitionTime":"2025-12-09T08:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:02 crc kubenswrapper[4786]: I1209 08:45:02.463955 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:02 crc kubenswrapper[4786]: I1209 08:45:02.464030 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:02 crc kubenswrapper[4786]: I1209 08:45:02.464043 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:02 crc kubenswrapper[4786]: I1209 08:45:02.464065 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:02 crc kubenswrapper[4786]: I1209 08:45:02.464085 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:02Z","lastTransitionTime":"2025-12-09T08:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:02 crc kubenswrapper[4786]: I1209 08:45:02.567491 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:02 crc kubenswrapper[4786]: I1209 08:45:02.567563 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:02 crc kubenswrapper[4786]: I1209 08:45:02.567572 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:02 crc kubenswrapper[4786]: I1209 08:45:02.567587 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:02 crc kubenswrapper[4786]: I1209 08:45:02.567599 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:02Z","lastTransitionTime":"2025-12-09T08:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:02 crc kubenswrapper[4786]: I1209 08:45:02.669964 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:02 crc kubenswrapper[4786]: I1209 08:45:02.670006 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:02 crc kubenswrapper[4786]: I1209 08:45:02.670015 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:02 crc kubenswrapper[4786]: I1209 08:45:02.670034 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:02 crc kubenswrapper[4786]: I1209 08:45:02.670044 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:02Z","lastTransitionTime":"2025-12-09T08:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:02 crc kubenswrapper[4786]: I1209 08:45:02.772095 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:02 crc kubenswrapper[4786]: I1209 08:45:02.772140 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:02 crc kubenswrapper[4786]: I1209 08:45:02.772151 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:02 crc kubenswrapper[4786]: I1209 08:45:02.772169 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:02 crc kubenswrapper[4786]: I1209 08:45:02.772180 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:02Z","lastTransitionTime":"2025-12-09T08:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 09 08:45:02 crc kubenswrapper[4786]: I1209 08:45:02.876605 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:45:02 crc kubenswrapper[4786]: I1209 08:45:02.876665 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:45:02 crc kubenswrapper[4786]: I1209 08:45:02.876678 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:45:02 crc kubenswrapper[4786]: I1209 08:45:02.876698 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:45:02 crc kubenswrapper[4786]: I1209 08:45:02.876712 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:02Z","lastTransitionTime":"2025-12-09T08:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:45:02 crc kubenswrapper[4786]: I1209 08:45:02.978993 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:45:02 crc kubenswrapper[4786]: I1209 08:45:02.979047 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:45:02 crc kubenswrapper[4786]: I1209 08:45:02.979060 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:45:02 crc kubenswrapper[4786]: I1209 08:45:02.979078 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:45:02 crc kubenswrapper[4786]: I1209 08:45:02.979093 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:02Z","lastTransitionTime":"2025-12-09T08:45:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:45:03 crc kubenswrapper[4786]: I1209 08:45:03.082158 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:45:03 crc kubenswrapper[4786]: I1209 08:45:03.082192 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:45:03 crc kubenswrapper[4786]: I1209 08:45:03.082203 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:45:03 crc kubenswrapper[4786]: I1209 08:45:03.082216 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:45:03 crc kubenswrapper[4786]: I1209 08:45:03.082225 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:03Z","lastTransitionTime":"2025-12-09T08:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:45:03 crc kubenswrapper[4786]: I1209 08:45:03.184529 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:45:03 crc kubenswrapper[4786]: I1209 08:45:03.184567 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:45:03 crc kubenswrapper[4786]: I1209 08:45:03.184576 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:45:03 crc kubenswrapper[4786]: I1209 08:45:03.184590 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:45:03 crc kubenswrapper[4786]: I1209 08:45:03.184600 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:03Z","lastTransitionTime":"2025-12-09T08:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:45:03 crc kubenswrapper[4786]: I1209 08:45:03.187980 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v58s4"
Dec 09 08:45:03 crc kubenswrapper[4786]: I1209 08:45:03.188012 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 08:45:03 crc kubenswrapper[4786]: E1209 08:45:03.188090 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v58s4" podUID="e6f68306-ac39-4d61-8c27-12d69cc49a4f"
Dec 09 08:45:03 crc kubenswrapper[4786]: I1209 08:45:03.188161 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 08:45:03 crc kubenswrapper[4786]: E1209 08:45:03.188253 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 09 08:45:03 crc kubenswrapper[4786]: E1209 08:45:03.188303 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 09 08:45:03 crc kubenswrapper[4786]: I1209 08:45:03.199512 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Dec 09 08:45:03 crc kubenswrapper[4786]: I1209 08:45:03.287775 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:45:03 crc kubenswrapper[4786]: I1209 08:45:03.287826 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:45:03 crc kubenswrapper[4786]: I1209 08:45:03.287838 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:45:03 crc kubenswrapper[4786]: I1209 08:45:03.287856 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:45:03 crc kubenswrapper[4786]: I1209 08:45:03.287871 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:03Z","lastTransitionTime":"2025-12-09T08:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:45:03 crc kubenswrapper[4786]: I1209 08:45:03.389880 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:45:03 crc kubenswrapper[4786]: I1209 08:45:03.389920 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:45:03 crc kubenswrapper[4786]: I1209 08:45:03.389932 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:45:03 crc kubenswrapper[4786]: I1209 08:45:03.389947 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:45:03 crc kubenswrapper[4786]: I1209 08:45:03.389958 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:03Z","lastTransitionTime":"2025-12-09T08:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:45:03 crc kubenswrapper[4786]: I1209 08:45:03.493102 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:45:03 crc kubenswrapper[4786]: I1209 08:45:03.493166 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:45:03 crc kubenswrapper[4786]: I1209 08:45:03.493179 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:45:03 crc kubenswrapper[4786]: I1209 08:45:03.493198 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:45:03 crc kubenswrapper[4786]: I1209 08:45:03.493213 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:03Z","lastTransitionTime":"2025-12-09T08:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:45:03 crc kubenswrapper[4786]: I1209 08:45:03.595778 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:45:03 crc kubenswrapper[4786]: I1209 08:45:03.596514 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:45:03 crc kubenswrapper[4786]: I1209 08:45:03.596552 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:45:03 crc kubenswrapper[4786]: I1209 08:45:03.596572 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:45:03 crc kubenswrapper[4786]: I1209 08:45:03.596586 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:03Z","lastTransitionTime":"2025-12-09T08:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:45:03 crc kubenswrapper[4786]: I1209 08:45:03.614646 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e6f68306-ac39-4d61-8c27-12d69cc49a4f-metrics-certs\") pod \"network-metrics-daemon-v58s4\" (UID: \"e6f68306-ac39-4d61-8c27-12d69cc49a4f\") " pod="openshift-multus/network-metrics-daemon-v58s4"
Dec 09 08:45:03 crc kubenswrapper[4786]: E1209 08:45:03.614852 4786 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 09 08:45:03 crc kubenswrapper[4786]: E1209 08:45:03.614953 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6f68306-ac39-4d61-8c27-12d69cc49a4f-metrics-certs podName:e6f68306-ac39-4d61-8c27-12d69cc49a4f nodeName:}" failed. No retries permitted until 2025-12-09 08:45:35.614909648 +0000 UTC m=+101.498530864 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e6f68306-ac39-4d61-8c27-12d69cc49a4f-metrics-certs") pod "network-metrics-daemon-v58s4" (UID: "e6f68306-ac39-4d61-8c27-12d69cc49a4f") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 09 08:45:03 crc kubenswrapper[4786]: I1209 08:45:03.698984 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:45:03 crc kubenswrapper[4786]: I1209 08:45:03.699038 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:45:03 crc kubenswrapper[4786]: I1209 08:45:03.699050 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:45:03 crc kubenswrapper[4786]: I1209 08:45:03.699069 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:45:03 crc kubenswrapper[4786]: I1209 08:45:03.699080 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:03Z","lastTransitionTime":"2025-12-09T08:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:45:03 crc kubenswrapper[4786]: I1209 08:45:03.800897 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:45:03 crc kubenswrapper[4786]: I1209 08:45:03.800937 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:45:03 crc kubenswrapper[4786]: I1209 08:45:03.800946 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:45:03 crc kubenswrapper[4786]: I1209 08:45:03.800964 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:45:03 crc kubenswrapper[4786]: I1209 08:45:03.800973 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:03Z","lastTransitionTime":"2025-12-09T08:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:45:03 crc kubenswrapper[4786]: I1209 08:45:03.903977 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:45:03 crc kubenswrapper[4786]: I1209 08:45:03.904089 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:45:03 crc kubenswrapper[4786]: I1209 08:45:03.904123 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:45:03 crc kubenswrapper[4786]: I1209 08:45:03.904157 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:45:03 crc kubenswrapper[4786]: I1209 08:45:03.904184 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:03Z","lastTransitionTime":"2025-12-09T08:45:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:45:04 crc kubenswrapper[4786]: I1209 08:45:04.007550 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:45:04 crc kubenswrapper[4786]: I1209 08:45:04.007611 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:45:04 crc kubenswrapper[4786]: I1209 08:45:04.007626 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:45:04 crc kubenswrapper[4786]: I1209 08:45:04.007646 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:45:04 crc kubenswrapper[4786]: I1209 08:45:04.007663 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:04Z","lastTransitionTime":"2025-12-09T08:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:45:04 crc kubenswrapper[4786]: I1209 08:45:04.110111 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:45:04 crc kubenswrapper[4786]: I1209 08:45:04.110173 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:45:04 crc kubenswrapper[4786]: I1209 08:45:04.110186 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:45:04 crc kubenswrapper[4786]: I1209 08:45:04.110205 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:45:04 crc kubenswrapper[4786]: I1209 08:45:04.110219 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:04Z","lastTransitionTime":"2025-12-09T08:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:45:04 crc kubenswrapper[4786]: I1209 08:45:04.188012 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 08:45:04 crc kubenswrapper[4786]: E1209 08:45:04.188181 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 09 08:45:04 crc kubenswrapper[4786]: I1209 08:45:04.212621 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:45:04 crc kubenswrapper[4786]: I1209 08:45:04.212699 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:45:04 crc kubenswrapper[4786]: I1209 08:45:04.212712 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:45:04 crc kubenswrapper[4786]: I1209 08:45:04.212731 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:45:04 crc kubenswrapper[4786]: I1209 08:45:04.212744 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:04Z","lastTransitionTime":"2025-12-09T08:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:45:04 crc kubenswrapper[4786]: I1209 08:45:04.314762 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:45:04 crc kubenswrapper[4786]: I1209 08:45:04.314799 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:45:04 crc kubenswrapper[4786]: I1209 08:45:04.314811 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:45:04 crc kubenswrapper[4786]: I1209 08:45:04.314828 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:45:04 crc kubenswrapper[4786]: I1209 08:45:04.314839 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:04Z","lastTransitionTime":"2025-12-09T08:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:45:04 crc kubenswrapper[4786]: I1209 08:45:04.417352 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:45:04 crc kubenswrapper[4786]: I1209 08:45:04.417394 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:45:04 crc kubenswrapper[4786]: I1209 08:45:04.417406 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:45:04 crc kubenswrapper[4786]: I1209 08:45:04.417437 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:45:04 crc kubenswrapper[4786]: I1209 08:45:04.417450 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:04Z","lastTransitionTime":"2025-12-09T08:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:45:04 crc kubenswrapper[4786]: I1209 08:45:04.520792 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:45:04 crc kubenswrapper[4786]: I1209 08:45:04.520859 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:45:04 crc kubenswrapper[4786]: I1209 08:45:04.520876 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:45:04 crc kubenswrapper[4786]: I1209 08:45:04.520899 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:45:04 crc kubenswrapper[4786]: I1209 08:45:04.520913 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:04Z","lastTransitionTime":"2025-12-09T08:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:45:04 crc kubenswrapper[4786]: I1209 08:45:04.623310 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:45:04 crc kubenswrapper[4786]: I1209 08:45:04.623360 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:45:04 crc kubenswrapper[4786]: I1209 08:45:04.623373 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:45:04 crc kubenswrapper[4786]: I1209 08:45:04.623398 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:45:04 crc kubenswrapper[4786]: I1209 08:45:04.623411 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:04Z","lastTransitionTime":"2025-12-09T08:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:45:04 crc kubenswrapper[4786]: I1209 08:45:04.726319 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:45:04 crc kubenswrapper[4786]: I1209 08:45:04.726395 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:45:04 crc kubenswrapper[4786]: I1209 08:45:04.726450 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:45:04 crc kubenswrapper[4786]: I1209 08:45:04.726485 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:45:04 crc kubenswrapper[4786]: I1209 08:45:04.726503 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:04Z","lastTransitionTime":"2025-12-09T08:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:45:04 crc kubenswrapper[4786]: I1209 08:45:04.828845 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:45:04 crc kubenswrapper[4786]: I1209 08:45:04.828896 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:45:04 crc kubenswrapper[4786]: I1209 08:45:04.828908 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:45:04 crc kubenswrapper[4786]: I1209 08:45:04.828924 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:45:04 crc kubenswrapper[4786]: I1209 08:45:04.828934 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:04Z","lastTransitionTime":"2025-12-09T08:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:45:04 crc kubenswrapper[4786]: I1209 08:45:04.931289 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:45:04 crc kubenswrapper[4786]: I1209 08:45:04.931340 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:45:04 crc kubenswrapper[4786]: I1209 08:45:04.931349 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:45:04 crc kubenswrapper[4786]: I1209 08:45:04.931371 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:45:04 crc kubenswrapper[4786]: I1209 08:45:04.931384 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:04Z","lastTransitionTime":"2025-12-09T08:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.034390 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.034446 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.034459 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.034475 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.034484 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:05Z","lastTransitionTime":"2025-12-09T08:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.137367 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.137465 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.137485 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.137509 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.137525 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:05Z","lastTransitionTime":"2025-12-09T08:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.187193 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.187241 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.187193 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v58s4"
Dec 09 08:45:05 crc kubenswrapper[4786]: E1209 08:45:05.187521 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 09 08:45:05 crc kubenswrapper[4786]: E1209 08:45:05.189466 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v58s4" podUID="e6f68306-ac39-4d61-8c27-12d69cc49a4f"
Dec 09 08:45:05 crc kubenswrapper[4786]: E1209 08:45:05.190043 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.191174 4786 scope.go:117] "RemoveContainer" containerID="dcc312d8c35d19e4c5637ba885eb19cce98bd3a39564a6370d26e487c72a6785"
Dec 09 08:45:05 crc kubenswrapper[4786]: E1209 08:45:05.191382 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-7sr4q_openshift-ovn-kubernetes(c8ebe4be-af09-4f22-9dee-af5f7d34bccf)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" podUID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf"
Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.208268 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"550af6de-acd3-45b7-be48-63afa30356f4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b1b4fc4a2ac23c92b18ce6bd588fe1c0e5e7fead1592aa7795d2f2ad2071507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0478a8fa00a16c9f91091771b9e219317b593723cf90ae2f8c5f7335bd88c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a0478a8fa00a16c9f91091771b9e219317b593723cf90ae2f8c5f7335bd88c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:05Z is after 2025-08-24T17:21:41Z"
Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.230124 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:05Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.240544 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.240630 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.240648 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.240701 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.240720 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:05Z","lastTransitionTime":"2025-12-09T08:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.247740 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da17081a769df0b66be89013cce4c70d442cfd51b32fdb623bf94052cbcad5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:05Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.267803 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:05Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.284551 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c502d4-7f9e-4d39-a197-fa70dc4a56d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2d2cb4aa779f69cd01c7af8dd377a93fa75bc95ded7acd32a891de283722a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de20e796c231cbc6d525978434351e46d932f7d9
9236a97c078b8147ce5dabd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86k5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:05Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.301203 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbwb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://379f087d607d19a0cb71481b2a7236d2d4769898f612849793c7d0d0f8709e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:21Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbb1
ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbb1ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ec37a618991f1e7e06c04c95db8c40ac18b451bcb08ba568bad540481a740c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ec37a618991f1e7e06c04c95db8c40ac18b451bcb08ba568bad540481a740c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c10eb59172a342201e5c0fae9b836d296608057f6613791cab920df8aaa5850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c10eb59172a342201e5c0fae9b836d296608057f6613791cab920df8aaa5850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbwb7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:05Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.319923 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d3d4420-fb67-443a-8b97-195c4cf223e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7aff8b9b062294cc403e12f60e4e694e626aa9780ae652380afc5d5ebeb25e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ce8749c52af3d1b2b156b9adce27a80b026abdb2dc22308199f1e822ec9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77efb0cd063e6d566f75078ce7a1f2cfd31508d5024a101ebcb3aa3c609aaae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac36b7f7721d7c5cdcb9e2254974dab062cca2723d4e723582061ed2696e04c\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:05Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.332629 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-prw2c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f37b1b5c-1cdc-4a08-9ea3-03dad00b5797\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d772471b770813d3613512f5748b5bbd05c06733053b007756c12b0bc41b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt6g4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-prw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:05Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.343917 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.343971 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.343984 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.344002 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.344014 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:05Z","lastTransitionTime":"2025-12-09T08:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.345280 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4192277cec4bacdf5b03306055ddad9476d291a086f54f61e8e84690c81bb459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:05Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.357520 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44efb750-799f-44e4-831e-641b81c2f427\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9e51326999eccaabe872d93f35025f76fe1b7fa116ee0e903b008aabc3f1c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state
\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99f4b7e7d6c283d8d774a43140f27cf0418c5922dd1a11fe8d090d4d3de10661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://868a90cf80a3cef69e48a0e930c1fb742df308553d072e9ca12fec310bdce1aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.
126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1848f8b358c964159350bf72a27a84316dbc4051b5164004527770877f4e280c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1848f8b358c964159350bf72a27a84316dbc4051b5164004527770877f4e280c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:05Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.367579 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mq8pp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a680f02-0367-4814-9c73-aa5959bf952f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c30204fb4323c7780efec70dabc9074bcda3d7a20c81af161920f5c74075b4b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98fg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mq8pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:05Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.385680 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcc312d8c35d19e4c5637ba885eb19cce98bd3a39564a6370d26e487c72a6785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dcc312d8c35d19e4c5637ba885eb19cce98bd3a39564a6370d26e487c72a6785\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T08:44:51Z\\\",\\\"message\\\":\\\"rver-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc006d6f1fb \\\\u003cnil\\\\u003e}] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: kube-apiserver-operator,},ClusterIP:10.217.5.109,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.109],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1209 08:44:51.101360 6732 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handle\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7sr4q_openshift-ovn-kubernetes(c8ebe4be-af09-4f22-9dee-af5f7d34bccf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63
813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7sr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:05Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.397516 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwjmr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"373c7fdc-f660-4323-a503-3e7a0dedb865\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6af8039885b93a07e1e521a89a1d240f914eb77cba9bb6db24d7ecdb1ba3954d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnjzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de62d60183229189c692a0ff01db0e36332ec
de2d2711846121c899befbbde08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnjzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwjmr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:05Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.414760 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd790c5e-461b-4682-88a8-e915bb3558fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56160ed128040dde583c3142616d9af9180427b9e32520d6330fd166e9c4207b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57fe092832b333e8fa6389429ad3fd571c3e0162f619fdd69efcb0b84e88bac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b3741b79ca6fb3fc12acaa290b9d7accb765f36a88f0b8fb6e6362c6596a58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f6f4abb61df9828cb9bec57c2f2fc4709a6cf12afc72467af5466723a4335f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T08:44:14Z\\\"
,\\\"message\\\":\\\"file observer\\\\nW1209 08:44:13.973493 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 08:44:13.973680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 08:44:13.974644 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-881789052/tls.crt::/tmp/serving-cert-881789052/tls.key\\\\\\\"\\\\nI1209 08:44:14.287293 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 08:44:14.289941 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 08:44:14.289961 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 08:44:14.289980 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 08:44:14.289985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 08:44:14.295151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 08:44:14.295345 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1209 08:44:14.295166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1209 08:44:14.295383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 08:44:14.295466 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 08:44:14.295492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 08:44:14.295497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 08:44:14.295501 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1209 08:44:14.297957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a164888526a3ab84fbd409540bc54403fadd586a0583e323d5ca1b10c81cbfc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc97
37159190cf3281bcb4882cec32341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:05Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.430192 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50593d0d53a0888b66905e18d3ef9ae44e67a9d290aad760710060fb3ed14393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4ad85cbd79a4967d59f069ebadb27a1cd79229d287c2f1c9955396c71a1506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:05Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.445080 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-27hfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0a865e2-8504-473d-a23f-fc682d053a9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e7465beaa6a3df0b0ae7338d3035b58b92078b8d0b9646f5b59eccb66b4a505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-27hfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:05Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.447547 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:05 crc 
kubenswrapper[4786]: I1209 08:45:05.447587 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.447597 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.447610 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.447619 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:05Z","lastTransitionTime":"2025-12-09T08:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.458917 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v58s4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f68306-ac39-4d61-8c27-12d69cc49a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhg5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhg5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v58s4\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:05Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.470503 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:05Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.550668 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.550720 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.550737 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.550760 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.550778 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:05Z","lastTransitionTime":"2025-12-09T08:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.653252 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.653299 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.653313 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.653330 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.653341 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:05Z","lastTransitionTime":"2025-12-09T08:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.757081 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.757120 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.757129 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.757142 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.757151 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:05Z","lastTransitionTime":"2025-12-09T08:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.859562 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.859603 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.859612 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.859626 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.859636 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:05Z","lastTransitionTime":"2025-12-09T08:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.962308 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.962386 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.962401 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.962443 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:05 crc kubenswrapper[4786]: I1209 08:45:05.962458 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:05Z","lastTransitionTime":"2025-12-09T08:45:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:06 crc kubenswrapper[4786]: I1209 08:45:06.065186 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:06 crc kubenswrapper[4786]: I1209 08:45:06.065233 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:06 crc kubenswrapper[4786]: I1209 08:45:06.065248 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:06 crc kubenswrapper[4786]: I1209 08:45:06.065269 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:06 crc kubenswrapper[4786]: I1209 08:45:06.065290 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:06Z","lastTransitionTime":"2025-12-09T08:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:06 crc kubenswrapper[4786]: I1209 08:45:06.169517 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:06 crc kubenswrapper[4786]: I1209 08:45:06.169585 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:06 crc kubenswrapper[4786]: I1209 08:45:06.169600 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:06 crc kubenswrapper[4786]: I1209 08:45:06.169623 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:06 crc kubenswrapper[4786]: I1209 08:45:06.169642 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:06Z","lastTransitionTime":"2025-12-09T08:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:06 crc kubenswrapper[4786]: I1209 08:45:06.187124 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:45:06 crc kubenswrapper[4786]: E1209 08:45:06.187303 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 08:45:06 crc kubenswrapper[4786]: I1209 08:45:06.272993 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:06 crc kubenswrapper[4786]: I1209 08:45:06.273071 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:06 crc kubenswrapper[4786]: I1209 08:45:06.273105 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:06 crc kubenswrapper[4786]: I1209 08:45:06.273133 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:06 crc kubenswrapper[4786]: I1209 08:45:06.273153 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:06Z","lastTransitionTime":"2025-12-09T08:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:06 crc kubenswrapper[4786]: I1209 08:45:06.377607 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:06 crc kubenswrapper[4786]: I1209 08:45:06.377688 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:06 crc kubenswrapper[4786]: I1209 08:45:06.377717 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:06 crc kubenswrapper[4786]: I1209 08:45:06.377750 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:06 crc kubenswrapper[4786]: I1209 08:45:06.377772 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:06Z","lastTransitionTime":"2025-12-09T08:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:06 crc kubenswrapper[4786]: I1209 08:45:06.481625 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:06 crc kubenswrapper[4786]: I1209 08:45:06.481694 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:06 crc kubenswrapper[4786]: I1209 08:45:06.481708 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:06 crc kubenswrapper[4786]: I1209 08:45:06.481733 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:06 crc kubenswrapper[4786]: I1209 08:45:06.481751 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:06Z","lastTransitionTime":"2025-12-09T08:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:06 crc kubenswrapper[4786]: I1209 08:45:06.585093 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:06 crc kubenswrapper[4786]: I1209 08:45:06.585153 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:06 crc kubenswrapper[4786]: I1209 08:45:06.585170 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:06 crc kubenswrapper[4786]: I1209 08:45:06.585196 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:06 crc kubenswrapper[4786]: I1209 08:45:06.585212 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:06Z","lastTransitionTime":"2025-12-09T08:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:06 crc kubenswrapper[4786]: I1209 08:45:06.688509 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:06 crc kubenswrapper[4786]: I1209 08:45:06.688603 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:06 crc kubenswrapper[4786]: I1209 08:45:06.688630 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:06 crc kubenswrapper[4786]: I1209 08:45:06.688662 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:06 crc kubenswrapper[4786]: I1209 08:45:06.688679 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:06Z","lastTransitionTime":"2025-12-09T08:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:06 crc kubenswrapper[4786]: I1209 08:45:06.791852 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:06 crc kubenswrapper[4786]: I1209 08:45:06.791903 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:06 crc kubenswrapper[4786]: I1209 08:45:06.791915 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:06 crc kubenswrapper[4786]: I1209 08:45:06.791936 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:06 crc kubenswrapper[4786]: I1209 08:45:06.791950 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:06Z","lastTransitionTime":"2025-12-09T08:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:06 crc kubenswrapper[4786]: I1209 08:45:06.894583 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:06 crc kubenswrapper[4786]: I1209 08:45:06.894704 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:06 crc kubenswrapper[4786]: I1209 08:45:06.894722 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:06 crc kubenswrapper[4786]: I1209 08:45:06.894740 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:06 crc kubenswrapper[4786]: I1209 08:45:06.894753 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:06Z","lastTransitionTime":"2025-12-09T08:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:06 crc kubenswrapper[4786]: I1209 08:45:06.997015 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:06 crc kubenswrapper[4786]: I1209 08:45:06.997084 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:06 crc kubenswrapper[4786]: I1209 08:45:06.997103 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:06 crc kubenswrapper[4786]: I1209 08:45:06.997128 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:06 crc kubenswrapper[4786]: I1209 08:45:06.997145 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:06Z","lastTransitionTime":"2025-12-09T08:45:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:07 crc kubenswrapper[4786]: I1209 08:45:07.099989 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:07 crc kubenswrapper[4786]: I1209 08:45:07.100024 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:07 crc kubenswrapper[4786]: I1209 08:45:07.100034 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:07 crc kubenswrapper[4786]: I1209 08:45:07.100049 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:07 crc kubenswrapper[4786]: I1209 08:45:07.100057 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:07Z","lastTransitionTime":"2025-12-09T08:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:07 crc kubenswrapper[4786]: I1209 08:45:07.190624 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v58s4" Dec 09 08:45:07 crc kubenswrapper[4786]: I1209 08:45:07.190796 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 08:45:07 crc kubenswrapper[4786]: E1209 08:45:07.190839 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v58s4" podUID="e6f68306-ac39-4d61-8c27-12d69cc49a4f" Dec 09 08:45:07 crc kubenswrapper[4786]: I1209 08:45:07.190908 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 08:45:07 crc kubenswrapper[4786]: E1209 08:45:07.191099 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 08:45:07 crc kubenswrapper[4786]: E1209 08:45:07.191200 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 08:45:07 crc kubenswrapper[4786]: I1209 08:45:07.203632 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:07 crc kubenswrapper[4786]: I1209 08:45:07.203673 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:07 crc kubenswrapper[4786]: I1209 08:45:07.203685 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:07 crc kubenswrapper[4786]: I1209 08:45:07.203706 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:07 crc kubenswrapper[4786]: I1209 08:45:07.203719 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:07Z","lastTransitionTime":"2025-12-09T08:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:07 crc kubenswrapper[4786]: I1209 08:45:07.306764 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:07 crc kubenswrapper[4786]: I1209 08:45:07.306815 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:07 crc kubenswrapper[4786]: I1209 08:45:07.306836 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:07 crc kubenswrapper[4786]: I1209 08:45:07.306862 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:07 crc kubenswrapper[4786]: I1209 08:45:07.306878 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:07Z","lastTransitionTime":"2025-12-09T08:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:07 crc kubenswrapper[4786]: I1209 08:45:07.409675 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:07 crc kubenswrapper[4786]: I1209 08:45:07.409721 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:07 crc kubenswrapper[4786]: I1209 08:45:07.409731 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:07 crc kubenswrapper[4786]: I1209 08:45:07.409745 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:07 crc kubenswrapper[4786]: I1209 08:45:07.409757 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:07Z","lastTransitionTime":"2025-12-09T08:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:07 crc kubenswrapper[4786]: I1209 08:45:07.513486 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:07 crc kubenswrapper[4786]: I1209 08:45:07.513530 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:07 crc kubenswrapper[4786]: I1209 08:45:07.513539 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:07 crc kubenswrapper[4786]: I1209 08:45:07.513554 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:07 crc kubenswrapper[4786]: I1209 08:45:07.513566 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:07Z","lastTransitionTime":"2025-12-09T08:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:07 crc kubenswrapper[4786]: I1209 08:45:07.616527 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:07 crc kubenswrapper[4786]: I1209 08:45:07.616572 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:07 crc kubenswrapper[4786]: I1209 08:45:07.616584 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:07 crc kubenswrapper[4786]: I1209 08:45:07.616602 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:07 crc kubenswrapper[4786]: I1209 08:45:07.616613 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:07Z","lastTransitionTime":"2025-12-09T08:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:07 crc kubenswrapper[4786]: I1209 08:45:07.719811 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:07 crc kubenswrapper[4786]: I1209 08:45:07.719867 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:07 crc kubenswrapper[4786]: I1209 08:45:07.719885 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:07 crc kubenswrapper[4786]: I1209 08:45:07.719910 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:07 crc kubenswrapper[4786]: I1209 08:45:07.719924 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:07Z","lastTransitionTime":"2025-12-09T08:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:07 crc kubenswrapper[4786]: I1209 08:45:07.822416 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:07 crc kubenswrapper[4786]: I1209 08:45:07.822495 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:07 crc kubenswrapper[4786]: I1209 08:45:07.822506 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:07 crc kubenswrapper[4786]: I1209 08:45:07.822524 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:07 crc kubenswrapper[4786]: I1209 08:45:07.822535 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:07Z","lastTransitionTime":"2025-12-09T08:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:07 crc kubenswrapper[4786]: I1209 08:45:07.925596 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:07 crc kubenswrapper[4786]: I1209 08:45:07.925646 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:07 crc kubenswrapper[4786]: I1209 08:45:07.925664 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:07 crc kubenswrapper[4786]: I1209 08:45:07.925686 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:07 crc kubenswrapper[4786]: I1209 08:45:07.925700 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:07Z","lastTransitionTime":"2025-12-09T08:45:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:07 crc kubenswrapper[4786]: I1209 08:45:07.975705 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-27hfj_a0a865e2-8504-473d-a23f-fc682d053a9f/kube-multus/0.log" Dec 09 08:45:07 crc kubenswrapper[4786]: I1209 08:45:07.975747 4786 generic.go:334] "Generic (PLEG): container finished" podID="a0a865e2-8504-473d-a23f-fc682d053a9f" containerID="2e7465beaa6a3df0b0ae7338d3035b58b92078b8d0b9646f5b59eccb66b4a505" exitCode=1 Dec 09 08:45:07 crc kubenswrapper[4786]: I1209 08:45:07.975774 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-27hfj" event={"ID":"a0a865e2-8504-473d-a23f-fc682d053a9f","Type":"ContainerDied","Data":"2e7465beaa6a3df0b0ae7338d3035b58b92078b8d0b9646f5b59eccb66b4a505"} Dec 09 08:45:07 crc kubenswrapper[4786]: I1209 08:45:07.976116 4786 scope.go:117] "RemoveContainer" containerID="2e7465beaa6a3df0b0ae7338d3035b58b92078b8d0b9646f5b59eccb66b4a505" Dec 09 08:45:07 crc kubenswrapper[4786]: I1209 08:45:07.989679 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:07Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.003715 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50593d0d53a0888b66905e18d3ef9ae44e67a9d290aad760710060fb3ed14393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4ad85cbd79a4967d59f069ebadb27a1cd79229d287c2f1c9955396c71a1506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:08Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.016081 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-27hfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0a865e2-8504-473d-a23f-fc682d053a9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:07Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2e7465beaa6a3df0b0ae7338d3035b58b92078b8d0b9646f5b59eccb66b4a505\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e7465beaa6a3df0b0ae7338d3035b58b92078b8d0b9646f5b59eccb66b4a505\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T08:45:07Z\\\",\\\"message\\\":\\\"2025-12-09T08:44:22+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_826ae26f-395b-43bf-ae1a-4535a65bdfb4\\\\n2025-12-09T08:44:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_826ae26f-395b-43bf-ae1a-4535a65bdfb4 to /host/opt/cni/bin/\\\\n2025-12-09T08:44:22Z [verbose] multus-daemon started\\\\n2025-12-09T08:44:22Z [verbose] Readiness Indicator file check\\\\n2025-12-09T08:45:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-27hfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:08Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.028299 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.028357 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.028381 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.028410 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.028449 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:08Z","lastTransitionTime":"2025-12-09T08:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.028967 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v58s4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f68306-ac39-4d61-8c27-12d69cc49a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhg5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhg5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v58s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:08Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:08 crc 
kubenswrapper[4786]: I1209 08:45:08.040687 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d3d4420-fb67-443a-8b97-195c4cf223e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7aff8b9b062294cc403e12f60e4e694e626aa9780ae652380afc5d5ebeb25e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ce8749c52af3d1b2b156b9adce27a80b026abdb2dc22308199f1e822ec9867\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77efb0cd063e6d566f75078ce7a1f2cfd31508d5024a101ebcb3aa3c609aaae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac36b7f7721d7c5cdcb9e2254974dab062cca2723d4e723582061ed2696e04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:08Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.051447 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"550af6de-acd3-45b7-be48-63afa30356f4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b1b4fc4a2ac23c92b18ce6bd588fe1c0e5e7fead1592aa7795d2f2ad2071507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0478a8fa00a16c9f91091771b9e219317b593723cf90ae2f8c5f7335bd88c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a0478a8fa00a16c9f91091771b9e219317b593723cf90ae2f8c5f7335bd88c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:08Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.079403 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:08Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.091218 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da17081a769df0b66be89013cce4c70d442cfd51b32fdb623bf94052cbcad5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-09T08:45:08Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.102914 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:08Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.114135 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c502d4-7f9e-4d39-a197-fa70dc4a56d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2d2cb4aa779f69cd01c7af8dd377a93fa75bc95ded7acd32a891de283722a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de20e796c231cbc6d525978434351e46d932f7d9
9236a97c078b8147ce5dabd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86k5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:08Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.127686 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbwb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://379f087d607d19a0cb71481b2a7236d2d4769898f612849793c7d0d0f8709e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:21Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbb1
ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbb1ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ec37a618991f1e7e06c04c95db8c40ac18b451bcb08ba568bad540481a740c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ec37a618991f1e7e06c04c95db8c40ac18b451bcb08ba568bad540481a740c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c10eb59172a342201e5c0fae9b836d296608057f6613791cab920df8aaa5850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c10eb59172a342201e5c0fae9b836d296608057f6613791cab920df8aaa5850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbwb7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:08Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.130370 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.130402 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.130417 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.130454 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.130467 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:08Z","lastTransitionTime":"2025-12-09T08:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.139788 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4192277cec4bacdf5b03306055ddad9476d291a086f54f61e8e84690c81bb459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:08Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.150112 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-prw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f37b1b5c-1cdc-4a08-9ea3-03dad00b5797\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d772471b770813d3613512f5748b5bbd05c06733053b007756c12b0bc41b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt6g4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-prw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:08Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.170901 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd790c5e-461b-4682-88a8-e915bb3558fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56160ed128040dde583c3142616d9af9180427b9e32520d6330fd166e9c4207b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57fe092832b333e8fa6389429ad3fd571c3e0162f619fdd69efcb0b84e88bac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b3741b79ca6fb3fc12acaa290b9d7accb765f36a88f0b8fb6e6362c6596a58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f6f4abb61df9828cb9bec57c2f2fc4709a6cf12afc72467af5466723a4335f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T08:44:14Z\\\"
,\\\"message\\\":\\\"file observer\\\\nW1209 08:44:13.973493 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 08:44:13.973680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 08:44:13.974644 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-881789052/tls.crt::/tmp/serving-cert-881789052/tls.key\\\\\\\"\\\\nI1209 08:44:14.287293 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 08:44:14.289941 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 08:44:14.289961 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 08:44:14.289980 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 08:44:14.289985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 08:44:14.295151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 08:44:14.295345 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1209 08:44:14.295166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1209 08:44:14.295383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 08:44:14.295466 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 08:44:14.295492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 08:44:14.295497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 08:44:14.295501 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1209 08:44:14.297957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a164888526a3ab84fbd409540bc54403fadd586a0583e323d5ca1b10c81cbfc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc97
37159190cf3281bcb4882cec32341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:08Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.251017 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:45:08 crc kubenswrapper[4786]: E1209 08:45:08.251163 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.252204 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.252249 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.252259 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.252278 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.252290 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:08Z","lastTransitionTime":"2025-12-09T08:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.262534 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44efb750-799f-44e4-831e-641b81c2f427\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9e51326999eccaabe872d93f35025f76fe1b7fa116ee0e903b008aabc3f1c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99f4b7e7d6c283d8d774a43140f27c
f0418c5922dd1a11fe8d090d4d3de10661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://868a90cf80a3cef69e48a0e930c1fb742df308553d072e9ca12fec310bdce1aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1848f8b358c964159350bf72a27a84316dbc4051b5164004527770877f4e280c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1848f8b358c964159350bf72a27a84316dbc4051b5164004527770877f4e280c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:08Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.276352 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mq8pp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a680f02-0367-4814-9c73-aa5959bf952f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c30204fb4323c7780efec70dabc9074bcda3d7a20c81af161920f5c74075b4b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98fg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mq8pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:08Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.301072 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcc312d8c35d19e4c5637ba885eb19cce98bd3a39564a6370d26e487c72a6785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dcc312d8c35d19e4c5637ba885eb19cce98bd3a39564a6370d26e487c72a6785\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T08:44:51Z\\\",\\\"message\\\":\\\"rver-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc006d6f1fb \\\\u003cnil\\\\u003e}] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: kube-apiserver-operator,},ClusterIP:10.217.5.109,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.109],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1209 08:44:51.101360 6732 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handle\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7sr4q_openshift-ovn-kubernetes(c8ebe4be-af09-4f22-9dee-af5f7d34bccf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63
813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7sr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:08Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.314312 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwjmr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"373c7fdc-f660-4323-a503-3e7a0dedb865\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6af8039885b93a07e1e521a89a1d240f914eb77cba9bb6db24d7ecdb1ba3954d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnjzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de62d60183229189c692a0ff01db0e36332ec
de2d2711846121c899befbbde08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnjzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwjmr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:08Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.354579 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.354610 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.354621 4786 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.354640 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.354652 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:08Z","lastTransitionTime":"2025-12-09T08:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.456755 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.456790 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.456799 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.456814 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.456824 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:08Z","lastTransitionTime":"2025-12-09T08:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.559022 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.559071 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.559081 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.559102 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.559114 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:08Z","lastTransitionTime":"2025-12-09T08:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.660823 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.660852 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.660860 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.660872 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.660882 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:08Z","lastTransitionTime":"2025-12-09T08:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.763884 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.763946 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.763968 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.763992 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.764010 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:08Z","lastTransitionTime":"2025-12-09T08:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.866899 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.866953 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.866974 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.867004 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.867025 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:08Z","lastTransitionTime":"2025-12-09T08:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.970538 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.970604 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.970628 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.970661 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.970679 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:08Z","lastTransitionTime":"2025-12-09T08:45:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.983125 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-27hfj_a0a865e2-8504-473d-a23f-fc682d053a9f/kube-multus/0.log" Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.983187 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-27hfj" event={"ID":"a0a865e2-8504-473d-a23f-fc682d053a9f","Type":"ContainerStarted","Data":"93a181a8bffa8c0699688d35605838622cbb0c39dc19d52a163484d5fddc2406"} Dec 09 08:45:08 crc kubenswrapper[4786]: I1209 08:45:08.997988 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4192277cec4bacdf5b03306055ddad9476d291a086f54f61e8e84690c81bb459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\
\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:08Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.010249 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-prw2c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f37b1b5c-1cdc-4a08-9ea3-03dad00b5797\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d772471b770813d3613512f5748b5bbd05c06733053b007756c12b0bc41b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt6g4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-prw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:09Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.019983 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mq8pp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a680f02-0367-4814-9c73-aa5959bf952f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c30204fb4323c7780efec70dabc9074bcda3d7a20c81af161920f5c74075b4b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-d
ev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98fg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mq8pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:09Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.041642 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcc312d8c35d19e4c5637ba885eb19cce98bd3a39564a6370d26e487c72a6785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dcc312d8c35d19e4c5637ba885eb19cce98bd3a39564a6370d26e487c72a6785\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T08:44:51Z\\\",\\\"message\\\":\\\"rver-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc006d6f1fb \\\\u003cnil\\\\u003e}] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: kube-apiserver-operator,},ClusterIP:10.217.5.109,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.109],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1209 08:44:51.101360 6732 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handle\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7sr4q_openshift-ovn-kubernetes(c8ebe4be-af09-4f22-9dee-af5f7d34bccf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63
813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7sr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:09Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.054365 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwjmr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"373c7fdc-f660-4323-a503-3e7a0dedb865\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6af8039885b93a07e1e521a89a1d240f914eb77cba9bb6db24d7ecdb1ba3954d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnjzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de62d60183229189c692a0ff01db0e36332ec
de2d2711846121c899befbbde08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnjzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwjmr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:09Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.072693 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd790c5e-461b-4682-88a8-e915bb3558fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56160ed128040dde583c3142616d9af9180427b9e32520d6330fd166e9c4207b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57fe092832b333e8fa6389429ad3fd571c3e0162f619fdd69efcb0b84e88bac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b3741b79ca6fb3fc12acaa290b9d7accb765f36a88f0b8fb6e6362c6596a58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f6f4abb61df9828cb9bec57c2f2fc4709a6cf12afc72467af5466723a4335f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T08:44:14Z\\\"
,\\\"message\\\":\\\"file observer\\\\nW1209 08:44:13.973493 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 08:44:13.973680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 08:44:13.974644 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-881789052/tls.crt::/tmp/serving-cert-881789052/tls.key\\\\\\\"\\\\nI1209 08:44:14.287293 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 08:44:14.289941 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 08:44:14.289961 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 08:44:14.289980 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 08:44:14.289985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 08:44:14.295151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 08:44:14.295345 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1209 08:44:14.295166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1209 08:44:14.295383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 08:44:14.295466 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 08:44:14.295492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 08:44:14.295497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 08:44:14.295501 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1209 08:44:14.297957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a164888526a3ab84fbd409540bc54403fadd586a0583e323d5ca1b10c81cbfc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc97
37159190cf3281bcb4882cec32341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:09Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.072980 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.073002 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.073012 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.073027 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.073039 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:09Z","lastTransitionTime":"2025-12-09T08:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.083801 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44efb750-799f-44e4-831e-641b81c2f427\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9e51326999eccaabe872d93f35025f76fe1b7fa116ee0e903b008aabc3f1c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99f4b7e7d6c283d8d774a43140f27c
f0418c5922dd1a11fe8d090d4d3de10661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://868a90cf80a3cef69e48a0e930c1fb742df308553d072e9ca12fec310bdce1aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1848f8b358c964159350bf72a27a84316dbc4051b5164004527770877f4e280c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1848f8b358c964159350bf72a27a84316dbc4051b5164004527770877f4e280c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:09Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.095511 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v58s4" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f68306-ac39-4d61-8c27-12d69cc49a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhg5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhg5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v58s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:09Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:09 crc 
kubenswrapper[4786]: I1209 08:45:09.105858 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:09Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.121635 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50593d0d53a0888b66905e18d3ef9ae44e67a9d290aad760710060fb3ed14393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4ad85cbd79a4967d59f069ebadb27a1cd79229d287c2f1c9955396c71a1506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:09Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.132215 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-27hfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0a865e2-8504-473d-a23f-fc682d053a9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93a181a8bffa8c0699688d35605838622cbb0c39dc19d52a163484d5fddc2406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e7465beaa6a3df0b0ae7338d3035b58b92078b8d0b9646f5b59eccb66b4a505\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T08:45:07Z\\\",\\\"message\\\":\\\"2025-12-09T08:44:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_826ae26f-395b-43bf-ae1a-4535a65bdfb4\\\\n2025-12-09T08:44:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_826ae26f-395b-43bf-ae1a-4535a65bdfb4 to /host/opt/cni/bin/\\\\n2025-12-09T08:44:22Z [verbose] multus-daemon started\\\\n2025-12-09T08:44:22Z [verbose] 
Readiness Indicator file check\\\\n2025-12-09T08:45:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:45:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-27hfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:09Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.143408 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da17081a769df0b66be89013cce4c70d442cfd51b32fdb623bf94052cbcad5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512
335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:09Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.154763 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:09Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.165040 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c502d4-7f9e-4d39-a197-fa70dc4a56d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2d2cb4aa779f69cd01c7af8dd377a93fa75bc95ded7acd32a891de283722a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de20e796c231cbc6d525978434351e46d932f7d9
9236a97c078b8147ce5dabd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86k5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:09Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.182728 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.182776 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.182785 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:09 crc 
kubenswrapper[4786]: I1209 08:45:09.182801 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.182812 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:09Z","lastTransitionTime":"2025-12-09T08:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.186309 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbwb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://379f087d607d19a0cb71481b2a7236d2d4769898f612849793c7d0d0f8709e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\
\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbb1ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbb1ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ec37a618991f1e7e06c04c95db8c40ac18b451bcb08ba568bad540481a740c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ec37a618991f1e7e06c04c95db8c40ac18b451bcb08ba568bad540481a740c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c10eb59172a342201e5c0fae9b836d296608057f6613791cab920df8aaa5850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c
10eb59172a342201e5c0fae9b836d296608057f6613791cab920df8aaa5850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbwb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:09Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.187360 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.187386 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v58s4" Dec 09 08:45:09 crc kubenswrapper[4786]: E1209 08:45:09.187511 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.187549 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 08:45:09 crc kubenswrapper[4786]: E1209 08:45:09.187679 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v58s4" podUID="e6f68306-ac39-4d61-8c27-12d69cc49a4f" Dec 09 08:45:09 crc kubenswrapper[4786]: E1209 08:45:09.187745 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.201502 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d3d4420-fb67-443a-8b97-195c4cf223e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7aff8b9b062294cc403e12f60e4e694e626aa9780ae652380afc5d5ebeb25e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"n
ame\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ce8749c52af3d1b2b156b9adce27a80b026abdb2dc22308199f1e822ec9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77efb0cd063e6d566f75078ce7a1f2cfd31508d5024a101ebcb3aa3c609aaae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac36b7f7721d7c5cdcb9e2254974dab062cca2723d4e723582061ed2696e04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/
crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:09Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.212457 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"550af6de-acd3-45b7-be48-63afa30356f4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b1b4fc4a2ac23c92b18ce6bd588fe1c0e5e7fead1592aa7795d2f2ad2071507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0478a8fa00a16c9f91091771b9e219317b593723cf90ae2f8c5f7335bd88c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a0478a8fa00a16c9f91091771b9e219317b593723cf90ae2f8c5f7335bd88c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:09Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.225045 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:09Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.286310 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.286365 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:09 crc 
kubenswrapper[4786]: I1209 08:45:09.286376 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.286401 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.286415 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:09Z","lastTransitionTime":"2025-12-09T08:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.389028 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.389095 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.389114 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.389149 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.389172 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:09Z","lastTransitionTime":"2025-12-09T08:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.492194 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.492264 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.492281 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.492306 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.492323 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:09Z","lastTransitionTime":"2025-12-09T08:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.595041 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.595130 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.595149 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.595171 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.595186 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:09Z","lastTransitionTime":"2025-12-09T08:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.697587 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.697644 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.697659 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.697682 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.697697 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:09Z","lastTransitionTime":"2025-12-09T08:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.698860 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.698905 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.698917 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.698932 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.698942 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:09Z","lastTransitionTime":"2025-12-09T08:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:09 crc kubenswrapper[4786]: E1209 08:45:09.712833 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"20132b6b-eea8-47f7-95fa-f658d05fe362\\\",\\\"systemUUID\\\":\\\"d02a19a2-cd20-41fc-84e9-65362968df1f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:09Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.717687 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.717750 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.717765 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.717786 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.717805 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:09Z","lastTransitionTime":"2025-12-09T08:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:09 crc kubenswrapper[4786]: E1209 08:45:09.734346 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"20132b6b-eea8-47f7-95fa-f658d05fe362\\\",\\\"systemUUID\\\":\\\"d02a19a2-cd20-41fc-84e9-65362968df1f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:09Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.739076 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.739113 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.739125 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.739143 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.739154 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:09Z","lastTransitionTime":"2025-12-09T08:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:09 crc kubenswrapper[4786]: E1209 08:45:09.759219 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"20132b6b-eea8-47f7-95fa-f658d05fe362\\\",\\\"systemUUID\\\":\\\"d02a19a2-cd20-41fc-84e9-65362968df1f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:09Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.764357 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.764412 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.764465 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.764490 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.764508 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:09Z","lastTransitionTime":"2025-12-09T08:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:09 crc kubenswrapper[4786]: E1209 08:45:09.782220 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"20132b6b-eea8-47f7-95fa-f658d05fe362\\\",\\\"systemUUID\\\":\\\"d02a19a2-cd20-41fc-84e9-65362968df1f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:09Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.786112 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.786202 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.786226 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.786257 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.786280 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:09Z","lastTransitionTime":"2025-12-09T08:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:09 crc kubenswrapper[4786]: E1209 08:45:09.809860 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"20132b6b-eea8-47f7-95fa-f658d05fe362\\\",\\\"systemUUID\\\":\\\"d02a19a2-cd20-41fc-84e9-65362968df1f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:09Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:09 crc kubenswrapper[4786]: E1209 08:45:09.810024 4786 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.812637 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.812735 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.813415 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.813492 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.813511 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:09Z","lastTransitionTime":"2025-12-09T08:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.916101 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.916164 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.916183 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.916206 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:09 crc kubenswrapper[4786]: I1209 08:45:09.916223 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:09Z","lastTransitionTime":"2025-12-09T08:45:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:10 crc kubenswrapper[4786]: I1209 08:45:10.018655 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:10 crc kubenswrapper[4786]: I1209 08:45:10.018726 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:10 crc kubenswrapper[4786]: I1209 08:45:10.018743 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:10 crc kubenswrapper[4786]: I1209 08:45:10.018765 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:10 crc kubenswrapper[4786]: I1209 08:45:10.018779 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:10Z","lastTransitionTime":"2025-12-09T08:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:10 crc kubenswrapper[4786]: I1209 08:45:10.122031 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:10 crc kubenswrapper[4786]: I1209 08:45:10.122107 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:10 crc kubenswrapper[4786]: I1209 08:45:10.122124 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:10 crc kubenswrapper[4786]: I1209 08:45:10.122146 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:10 crc kubenswrapper[4786]: I1209 08:45:10.122159 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:10Z","lastTransitionTime":"2025-12-09T08:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:10 crc kubenswrapper[4786]: I1209 08:45:10.187605 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:45:10 crc kubenswrapper[4786]: E1209 08:45:10.187782 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 08:45:10 crc kubenswrapper[4786]: I1209 08:45:10.226688 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:10 crc kubenswrapper[4786]: I1209 08:45:10.226761 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:10 crc kubenswrapper[4786]: I1209 08:45:10.226786 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:10 crc kubenswrapper[4786]: I1209 08:45:10.226823 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:10 crc kubenswrapper[4786]: I1209 08:45:10.226849 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:10Z","lastTransitionTime":"2025-12-09T08:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:10 crc kubenswrapper[4786]: I1209 08:45:10.329925 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:10 crc kubenswrapper[4786]: I1209 08:45:10.330459 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:10 crc kubenswrapper[4786]: I1209 08:45:10.330691 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:10 crc kubenswrapper[4786]: I1209 08:45:10.330934 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:10 crc kubenswrapper[4786]: I1209 08:45:10.331141 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:10Z","lastTransitionTime":"2025-12-09T08:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:10 crc kubenswrapper[4786]: I1209 08:45:10.436209 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:10 crc kubenswrapper[4786]: I1209 08:45:10.436758 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:10 crc kubenswrapper[4786]: I1209 08:45:10.436968 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:10 crc kubenswrapper[4786]: I1209 08:45:10.437212 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:10 crc kubenswrapper[4786]: I1209 08:45:10.437333 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:10Z","lastTransitionTime":"2025-12-09T08:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:10 crc kubenswrapper[4786]: I1209 08:45:10.541087 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:10 crc kubenswrapper[4786]: I1209 08:45:10.541476 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:10 crc kubenswrapper[4786]: I1209 08:45:10.541662 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:10 crc kubenswrapper[4786]: I1209 08:45:10.541813 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:10 crc kubenswrapper[4786]: I1209 08:45:10.541936 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:10Z","lastTransitionTime":"2025-12-09T08:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:10 crc kubenswrapper[4786]: I1209 08:45:10.645799 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:10 crc kubenswrapper[4786]: I1209 08:45:10.646222 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:10 crc kubenswrapper[4786]: I1209 08:45:10.646362 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:10 crc kubenswrapper[4786]: I1209 08:45:10.646624 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:10 crc kubenswrapper[4786]: I1209 08:45:10.646840 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:10Z","lastTransitionTime":"2025-12-09T08:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:10 crc kubenswrapper[4786]: I1209 08:45:10.750387 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:10 crc kubenswrapper[4786]: I1209 08:45:10.750489 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:10 crc kubenswrapper[4786]: I1209 08:45:10.750509 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:10 crc kubenswrapper[4786]: I1209 08:45:10.750534 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:10 crc kubenswrapper[4786]: I1209 08:45:10.750551 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:10Z","lastTransitionTime":"2025-12-09T08:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:10 crc kubenswrapper[4786]: I1209 08:45:10.857264 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:10 crc kubenswrapper[4786]: I1209 08:45:10.857346 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:10 crc kubenswrapper[4786]: I1209 08:45:10.857368 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:10 crc kubenswrapper[4786]: I1209 08:45:10.857393 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:10 crc kubenswrapper[4786]: I1209 08:45:10.857413 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:10Z","lastTransitionTime":"2025-12-09T08:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:10 crc kubenswrapper[4786]: I1209 08:45:10.960979 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:10 crc kubenswrapper[4786]: I1209 08:45:10.961237 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:10 crc kubenswrapper[4786]: I1209 08:45:10.961323 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:10 crc kubenswrapper[4786]: I1209 08:45:10.961416 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:10 crc kubenswrapper[4786]: I1209 08:45:10.961535 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:10Z","lastTransitionTime":"2025-12-09T08:45:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:11 crc kubenswrapper[4786]: I1209 08:45:11.064761 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:11 crc kubenswrapper[4786]: I1209 08:45:11.065490 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:11 crc kubenswrapper[4786]: I1209 08:45:11.065603 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:11 crc kubenswrapper[4786]: I1209 08:45:11.065685 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:11 crc kubenswrapper[4786]: I1209 08:45:11.065762 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:11Z","lastTransitionTime":"2025-12-09T08:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:11 crc kubenswrapper[4786]: I1209 08:45:11.169904 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:11 crc kubenswrapper[4786]: I1209 08:45:11.169965 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:11 crc kubenswrapper[4786]: I1209 08:45:11.169978 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:11 crc kubenswrapper[4786]: I1209 08:45:11.170000 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:11 crc kubenswrapper[4786]: I1209 08:45:11.170016 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:11Z","lastTransitionTime":"2025-12-09T08:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:11 crc kubenswrapper[4786]: I1209 08:45:11.187546 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v58s4" Dec 09 08:45:11 crc kubenswrapper[4786]: I1209 08:45:11.187574 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 08:45:11 crc kubenswrapper[4786]: I1209 08:45:11.187679 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 08:45:11 crc kubenswrapper[4786]: E1209 08:45:11.187714 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v58s4" podUID="e6f68306-ac39-4d61-8c27-12d69cc49a4f" Dec 09 08:45:11 crc kubenswrapper[4786]: E1209 08:45:11.187795 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 08:45:11 crc kubenswrapper[4786]: E1209 08:45:11.187867 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 08:45:11 crc kubenswrapper[4786]: I1209 08:45:11.273487 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:11 crc kubenswrapper[4786]: I1209 08:45:11.273580 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:11 crc kubenswrapper[4786]: I1209 08:45:11.273613 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:11 crc kubenswrapper[4786]: I1209 08:45:11.273647 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:11 crc kubenswrapper[4786]: I1209 08:45:11.273671 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:11Z","lastTransitionTime":"2025-12-09T08:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:11 crc kubenswrapper[4786]: I1209 08:45:11.376897 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:11 crc kubenswrapper[4786]: I1209 08:45:11.376970 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:11 crc kubenswrapper[4786]: I1209 08:45:11.376994 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:11 crc kubenswrapper[4786]: I1209 08:45:11.377026 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:11 crc kubenswrapper[4786]: I1209 08:45:11.377052 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:11Z","lastTransitionTime":"2025-12-09T08:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:11 crc kubenswrapper[4786]: I1209 08:45:11.479656 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:11 crc kubenswrapper[4786]: I1209 08:45:11.479734 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:11 crc kubenswrapper[4786]: I1209 08:45:11.479768 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:11 crc kubenswrapper[4786]: I1209 08:45:11.479801 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:11 crc kubenswrapper[4786]: I1209 08:45:11.479822 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:11Z","lastTransitionTime":"2025-12-09T08:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:11 crc kubenswrapper[4786]: I1209 08:45:11.582738 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:11 crc kubenswrapper[4786]: I1209 08:45:11.582803 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:11 crc kubenswrapper[4786]: I1209 08:45:11.582828 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:11 crc kubenswrapper[4786]: I1209 08:45:11.582858 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:11 crc kubenswrapper[4786]: I1209 08:45:11.582878 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:11Z","lastTransitionTime":"2025-12-09T08:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:11 crc kubenswrapper[4786]: I1209 08:45:11.686023 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:11 crc kubenswrapper[4786]: I1209 08:45:11.686105 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:11 crc kubenswrapper[4786]: I1209 08:45:11.686124 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:11 crc kubenswrapper[4786]: I1209 08:45:11.686149 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:11 crc kubenswrapper[4786]: I1209 08:45:11.686168 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:11Z","lastTransitionTime":"2025-12-09T08:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:11 crc kubenswrapper[4786]: I1209 08:45:11.788804 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:11 crc kubenswrapper[4786]: I1209 08:45:11.788863 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:11 crc kubenswrapper[4786]: I1209 08:45:11.788881 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:11 crc kubenswrapper[4786]: I1209 08:45:11.788903 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:11 crc kubenswrapper[4786]: I1209 08:45:11.788920 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:11Z","lastTransitionTime":"2025-12-09T08:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:11 crc kubenswrapper[4786]: I1209 08:45:11.892032 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:11 crc kubenswrapper[4786]: I1209 08:45:11.892104 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:11 crc kubenswrapper[4786]: I1209 08:45:11.892121 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:11 crc kubenswrapper[4786]: I1209 08:45:11.892147 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:11 crc kubenswrapper[4786]: I1209 08:45:11.892166 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:11Z","lastTransitionTime":"2025-12-09T08:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:11 crc kubenswrapper[4786]: I1209 08:45:11.994602 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:11 crc kubenswrapper[4786]: I1209 08:45:11.994753 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:11 crc kubenswrapper[4786]: I1209 08:45:11.994776 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:11 crc kubenswrapper[4786]: I1209 08:45:11.994858 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:11 crc kubenswrapper[4786]: I1209 08:45:11.994882 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:11Z","lastTransitionTime":"2025-12-09T08:45:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:12 crc kubenswrapper[4786]: I1209 08:45:12.099012 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:12 crc kubenswrapper[4786]: I1209 08:45:12.099078 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:12 crc kubenswrapper[4786]: I1209 08:45:12.099098 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:12 crc kubenswrapper[4786]: I1209 08:45:12.099124 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:12 crc kubenswrapper[4786]: I1209 08:45:12.099142 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:12Z","lastTransitionTime":"2025-12-09T08:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:12 crc kubenswrapper[4786]: I1209 08:45:12.188066 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:45:12 crc kubenswrapper[4786]: E1209 08:45:12.188282 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 08:45:12 crc kubenswrapper[4786]: I1209 08:45:12.202320 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:12 crc kubenswrapper[4786]: I1209 08:45:12.202384 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:12 crc kubenswrapper[4786]: I1209 08:45:12.202392 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:12 crc kubenswrapper[4786]: I1209 08:45:12.202413 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:12 crc kubenswrapper[4786]: I1209 08:45:12.202441 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:12Z","lastTransitionTime":"2025-12-09T08:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:12 crc kubenswrapper[4786]: I1209 08:45:12.305644 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:12 crc kubenswrapper[4786]: I1209 08:45:12.305698 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:12 crc kubenswrapper[4786]: I1209 08:45:12.305716 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:12 crc kubenswrapper[4786]: I1209 08:45:12.305747 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:12 crc kubenswrapper[4786]: I1209 08:45:12.305789 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:12Z","lastTransitionTime":"2025-12-09T08:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:12 crc kubenswrapper[4786]: I1209 08:45:12.408970 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:12 crc kubenswrapper[4786]: I1209 08:45:12.409017 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:12 crc kubenswrapper[4786]: I1209 08:45:12.409028 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:12 crc kubenswrapper[4786]: I1209 08:45:12.409045 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:12 crc kubenswrapper[4786]: I1209 08:45:12.409060 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:12Z","lastTransitionTime":"2025-12-09T08:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:12 crc kubenswrapper[4786]: I1209 08:45:12.512832 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:12 crc kubenswrapper[4786]: I1209 08:45:12.512903 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:12 crc kubenswrapper[4786]: I1209 08:45:12.512928 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:12 crc kubenswrapper[4786]: I1209 08:45:12.512961 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:12 crc kubenswrapper[4786]: I1209 08:45:12.512991 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:12Z","lastTransitionTime":"2025-12-09T08:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:12 crc kubenswrapper[4786]: I1209 08:45:12.620823 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:12 crc kubenswrapper[4786]: I1209 08:45:12.620923 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:12 crc kubenswrapper[4786]: I1209 08:45:12.620951 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:12 crc kubenswrapper[4786]: I1209 08:45:12.620986 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:12 crc kubenswrapper[4786]: I1209 08:45:12.621023 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:12Z","lastTransitionTime":"2025-12-09T08:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:12 crc kubenswrapper[4786]: I1209 08:45:12.723588 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:12 crc kubenswrapper[4786]: I1209 08:45:12.723669 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:12 crc kubenswrapper[4786]: I1209 08:45:12.723687 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:12 crc kubenswrapper[4786]: I1209 08:45:12.723713 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:12 crc kubenswrapper[4786]: I1209 08:45:12.723730 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:12Z","lastTransitionTime":"2025-12-09T08:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:12 crc kubenswrapper[4786]: I1209 08:45:12.826283 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:12 crc kubenswrapper[4786]: I1209 08:45:12.826336 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:12 crc kubenswrapper[4786]: I1209 08:45:12.826349 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:12 crc kubenswrapper[4786]: I1209 08:45:12.826366 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:12 crc kubenswrapper[4786]: I1209 08:45:12.826379 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:12Z","lastTransitionTime":"2025-12-09T08:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:12 crc kubenswrapper[4786]: I1209 08:45:12.929022 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:12 crc kubenswrapper[4786]: I1209 08:45:12.929065 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:12 crc kubenswrapper[4786]: I1209 08:45:12.929074 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:12 crc kubenswrapper[4786]: I1209 08:45:12.929092 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:12 crc kubenswrapper[4786]: I1209 08:45:12.929102 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:12Z","lastTransitionTime":"2025-12-09T08:45:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:13 crc kubenswrapper[4786]: I1209 08:45:13.031325 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:13 crc kubenswrapper[4786]: I1209 08:45:13.031362 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:13 crc kubenswrapper[4786]: I1209 08:45:13.031371 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:13 crc kubenswrapper[4786]: I1209 08:45:13.031384 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:13 crc kubenswrapper[4786]: I1209 08:45:13.031394 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:13Z","lastTransitionTime":"2025-12-09T08:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:13 crc kubenswrapper[4786]: I1209 08:45:13.135738 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:13 crc kubenswrapper[4786]: I1209 08:45:13.135780 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:13 crc kubenswrapper[4786]: I1209 08:45:13.135790 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:13 crc kubenswrapper[4786]: I1209 08:45:13.135813 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:13 crc kubenswrapper[4786]: I1209 08:45:13.135823 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:13Z","lastTransitionTime":"2025-12-09T08:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:13 crc kubenswrapper[4786]: I1209 08:45:13.188272 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 08:45:13 crc kubenswrapper[4786]: I1209 08:45:13.188382 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v58s4" Dec 09 08:45:13 crc kubenswrapper[4786]: I1209 08:45:13.188502 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 08:45:13 crc kubenswrapper[4786]: E1209 08:45:13.188777 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v58s4" podUID="e6f68306-ac39-4d61-8c27-12d69cc49a4f" Dec 09 08:45:13 crc kubenswrapper[4786]: E1209 08:45:13.188906 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 08:45:13 crc kubenswrapper[4786]: E1209 08:45:13.189176 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 08:45:13 crc kubenswrapper[4786]: I1209 08:45:13.238018 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:13 crc kubenswrapper[4786]: I1209 08:45:13.238056 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:13 crc kubenswrapper[4786]: I1209 08:45:13.238073 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:13 crc kubenswrapper[4786]: I1209 08:45:13.238095 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:13 crc kubenswrapper[4786]: I1209 08:45:13.238108 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:13Z","lastTransitionTime":"2025-12-09T08:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:13 crc kubenswrapper[4786]: I1209 08:45:13.340173 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:13 crc kubenswrapper[4786]: I1209 08:45:13.340216 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:13 crc kubenswrapper[4786]: I1209 08:45:13.340228 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:13 crc kubenswrapper[4786]: I1209 08:45:13.340244 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:13 crc kubenswrapper[4786]: I1209 08:45:13.340256 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:13Z","lastTransitionTime":"2025-12-09T08:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:13 crc kubenswrapper[4786]: I1209 08:45:13.444001 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:13 crc kubenswrapper[4786]: I1209 08:45:13.444356 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:13 crc kubenswrapper[4786]: I1209 08:45:13.444470 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:13 crc kubenswrapper[4786]: I1209 08:45:13.444573 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:13 crc kubenswrapper[4786]: I1209 08:45:13.444667 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:13Z","lastTransitionTime":"2025-12-09T08:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:13 crc kubenswrapper[4786]: I1209 08:45:13.547202 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:13 crc kubenswrapper[4786]: I1209 08:45:13.547509 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:13 crc kubenswrapper[4786]: I1209 08:45:13.547637 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:13 crc kubenswrapper[4786]: I1209 08:45:13.547781 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:13 crc kubenswrapper[4786]: I1209 08:45:13.547909 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:13Z","lastTransitionTime":"2025-12-09T08:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:13 crc kubenswrapper[4786]: I1209 08:45:13.651234 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:13 crc kubenswrapper[4786]: I1209 08:45:13.651326 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:13 crc kubenswrapper[4786]: I1209 08:45:13.651345 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:13 crc kubenswrapper[4786]: I1209 08:45:13.651374 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:13 crc kubenswrapper[4786]: I1209 08:45:13.651393 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:13Z","lastTransitionTime":"2025-12-09T08:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:13 crc kubenswrapper[4786]: I1209 08:45:13.754957 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:13 crc kubenswrapper[4786]: I1209 08:45:13.755329 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:13 crc kubenswrapper[4786]: I1209 08:45:13.755545 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:13 crc kubenswrapper[4786]: I1209 08:45:13.755760 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:13 crc kubenswrapper[4786]: I1209 08:45:13.755949 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:13Z","lastTransitionTime":"2025-12-09T08:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:13 crc kubenswrapper[4786]: I1209 08:45:13.859714 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:13 crc kubenswrapper[4786]: I1209 08:45:13.859810 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:13 crc kubenswrapper[4786]: I1209 08:45:13.859829 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:13 crc kubenswrapper[4786]: I1209 08:45:13.859852 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:13 crc kubenswrapper[4786]: I1209 08:45:13.859868 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:13Z","lastTransitionTime":"2025-12-09T08:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:13 crc kubenswrapper[4786]: I1209 08:45:13.962818 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:13 crc kubenswrapper[4786]: I1209 08:45:13.962890 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:13 crc kubenswrapper[4786]: I1209 08:45:13.962908 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:13 crc kubenswrapper[4786]: I1209 08:45:13.962938 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:13 crc kubenswrapper[4786]: I1209 08:45:13.962957 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:13Z","lastTransitionTime":"2025-12-09T08:45:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:14 crc kubenswrapper[4786]: I1209 08:45:14.066279 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:14 crc kubenswrapper[4786]: I1209 08:45:14.066836 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:14 crc kubenswrapper[4786]: I1209 08:45:14.067218 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:14 crc kubenswrapper[4786]: I1209 08:45:14.067406 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:14 crc kubenswrapper[4786]: I1209 08:45:14.067595 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:14Z","lastTransitionTime":"2025-12-09T08:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:14 crc kubenswrapper[4786]: I1209 08:45:14.171241 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:14 crc kubenswrapper[4786]: I1209 08:45:14.171303 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:14 crc kubenswrapper[4786]: I1209 08:45:14.171320 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:14 crc kubenswrapper[4786]: I1209 08:45:14.171344 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:14 crc kubenswrapper[4786]: I1209 08:45:14.171360 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:14Z","lastTransitionTime":"2025-12-09T08:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:14 crc kubenswrapper[4786]: I1209 08:45:14.187938 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:45:14 crc kubenswrapper[4786]: E1209 08:45:14.188147 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 08:45:14 crc kubenswrapper[4786]: I1209 08:45:14.273928 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:14 crc kubenswrapper[4786]: I1209 08:45:14.273965 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:14 crc kubenswrapper[4786]: I1209 08:45:14.273977 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:14 crc kubenswrapper[4786]: I1209 08:45:14.273992 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:14 crc kubenswrapper[4786]: I1209 08:45:14.274002 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:14Z","lastTransitionTime":"2025-12-09T08:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:14 crc kubenswrapper[4786]: I1209 08:45:14.376316 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:14 crc kubenswrapper[4786]: I1209 08:45:14.376701 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:14 crc kubenswrapper[4786]: I1209 08:45:14.376836 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:14 crc kubenswrapper[4786]: I1209 08:45:14.376962 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:14 crc kubenswrapper[4786]: I1209 08:45:14.377068 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:14Z","lastTransitionTime":"2025-12-09T08:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:14 crc kubenswrapper[4786]: I1209 08:45:14.480135 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:14 crc kubenswrapper[4786]: I1209 08:45:14.480615 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:14 crc kubenswrapper[4786]: I1209 08:45:14.480647 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:14 crc kubenswrapper[4786]: I1209 08:45:14.480682 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:14 crc kubenswrapper[4786]: I1209 08:45:14.480697 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:14Z","lastTransitionTime":"2025-12-09T08:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:14 crc kubenswrapper[4786]: I1209 08:45:14.583863 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:14 crc kubenswrapper[4786]: I1209 08:45:14.583948 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:14 crc kubenswrapper[4786]: I1209 08:45:14.583974 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:14 crc kubenswrapper[4786]: I1209 08:45:14.584006 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:14 crc kubenswrapper[4786]: I1209 08:45:14.584031 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:14Z","lastTransitionTime":"2025-12-09T08:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:14 crc kubenswrapper[4786]: I1209 08:45:14.688139 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:14 crc kubenswrapper[4786]: I1209 08:45:14.688200 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:14 crc kubenswrapper[4786]: I1209 08:45:14.688217 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:14 crc kubenswrapper[4786]: I1209 08:45:14.688241 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:14 crc kubenswrapper[4786]: I1209 08:45:14.688257 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:14Z","lastTransitionTime":"2025-12-09T08:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:14 crc kubenswrapper[4786]: I1209 08:45:14.792391 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:14 crc kubenswrapper[4786]: I1209 08:45:14.792514 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:14 crc kubenswrapper[4786]: I1209 08:45:14.792544 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:14 crc kubenswrapper[4786]: I1209 08:45:14.792574 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:14 crc kubenswrapper[4786]: I1209 08:45:14.792597 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:14Z","lastTransitionTime":"2025-12-09T08:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:14 crc kubenswrapper[4786]: I1209 08:45:14.896761 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:14 crc kubenswrapper[4786]: I1209 08:45:14.896824 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:14 crc kubenswrapper[4786]: I1209 08:45:14.896838 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:14 crc kubenswrapper[4786]: I1209 08:45:14.896866 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:14 crc kubenswrapper[4786]: I1209 08:45:14.896889 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:14Z","lastTransitionTime":"2025-12-09T08:45:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.000845 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.000932 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.000951 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.000977 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.000997 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:15Z","lastTransitionTime":"2025-12-09T08:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.105139 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.105186 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.105195 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.105211 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.105220 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:15Z","lastTransitionTime":"2025-12-09T08:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.188062 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.188584 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v58s4" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.188861 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 08:45:15 crc kubenswrapper[4786]: E1209 08:45:15.190254 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v58s4" podUID="e6f68306-ac39-4d61-8c27-12d69cc49a4f" Dec 09 08:45:15 crc kubenswrapper[4786]: E1209 08:45:15.190945 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 08:45:15 crc kubenswrapper[4786]: E1209 08:45:15.191224 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.214800 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.214843 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.214854 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.214872 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.214885 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:15Z","lastTransitionTime":"2025-12-09T08:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.218169 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.220228 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd790c5e-461b-4682-88a8-e915bb3558fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56160ed128040dde583c3142616d9af9180427b9e32520d6330fd166e9c4207b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57fe092832b333e8fa6389429ad3fd571c3e0162f619fdd69efcb0b84e88bac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b3741b79ca6fb3fc12acaa290b9d7accb765f36a88f0b8fb6e6362c6596a58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f6f4abb61df9828cb9bec57c2f2fc4709a6cf12afc72467af5466723a4335f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1e
a3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T08:44:14Z\\\",\\\"message\\\":\\\"file observer\\\\nW1209 08:44:13.973493 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 08:44:13.973680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 08:44:13.974644 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-881789052/tls.crt::/tmp/serving-cert-881789052/tls.key\\\\\\\"\\\\nI1209 08:44:14.287293 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 08:44:14.289941 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 08:44:14.289961 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 08:44:14.289980 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 08:44:14.289985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 08:44:14.295151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 08:44:14.295345 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1209 08:44:14.295166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1209 08:44:14.295383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 
08:44:14.295466 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 08:44:14.295492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 08:44:14.295497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 08:44:14.295501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1209 08:44:14.297957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a164888526a3ab84fbd409540bc54403fadd586a0583e323d5ca1b10c81cbfc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89
c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:15Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.236505 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44efb750-799f-44e4-831e-641b81c2f427\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9e51326999eccaabe872d93f35025f76fe1b7fa116ee0e903b008aabc3f1c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99f4b7e7d6c283d8d774a43140f27cf0418c5922dd1a11fe8d090d4d3de10661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://868a90cf80a3cef69e48a0e930c1fb742df308553d072e9ca12fec310bdce1aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1848f8b358c964159350bf72a27a84316dbc4051b5164004527770877f4e280c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://1848f8b358c964159350bf72a27a84316dbc4051b5164004527770877f4e280c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:15Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.252157 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mq8pp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a680f02-0367-4814-9c73-aa5959bf952f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c30204fb
4323c7780efec70dabc9074bcda3d7a20c81af161920f5c74075b4b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98fg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mq8pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:15Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.283855 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://dcc312d8c35d19e4c5637ba885eb19cce98bd3a39564a6370d26e487c72a6785\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dcc312d8c35d19e4c5637ba885eb19cce98bd3a39564a6370d26e487c72a6785\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T08:44:51Z\\\",\\\"message\\\":\\\"rver-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc006d6f1fb \\\\u003cnil\\\\u003e}] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: kube-apiserver-operator,},ClusterIP:10.217.5.109,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.109],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1209 08:44:51.101360 6732 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handle\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7sr4q_openshift-ovn-kubernetes(c8ebe4be-af09-4f22-9dee-af5f7d34bccf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63
813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7sr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:15Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.301247 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwjmr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"373c7fdc-f660-4323-a503-3e7a0dedb865\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6af8039885b93a07e1e521a89a1d240f914eb77cba9bb6db24d7ecdb1ba3954d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnjzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de62d60183229189c692a0ff01db0e36332ec
de2d2711846121c899befbbde08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnjzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwjmr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:15Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.318492 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:15Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.320228 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.320293 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.320308 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.320331 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.320348 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:15Z","lastTransitionTime":"2025-12-09T08:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.335691 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50593d0d53a0888b66905e18d3ef9ae44e67a9d290aad760710060fb3ed14393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://3e4ad85cbd79a4967d59f069ebadb27a1cd79229d287c2f1c9955396c71a1506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:15Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.350691 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-27hfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0a865e2-8504-473d-a23f-fc682d053a9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93a181a8bffa8c0699688d35605838622cbb0c39dc19d52a163484d5fddc2406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e7465beaa6a3df0b0ae7338d3035b58b92078b8d0b9646f5b59eccb66b4a505\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T08:45:07Z\\\",\\\"message\\\":\\\"2025-12-09T08:44:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_826ae26f-395b-43bf-ae1a-4535a65bdfb4\\\\n2025-12-09T08:44:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_826ae26f-395b-43bf-ae1a-4535a65bdfb4 to /host/opt/cni/bin/\\\\n2025-12-09T08:44:22Z [verbose] multus-daemon started\\\\n2025-12-09T08:44:22Z [verbose] 
Readiness Indicator file check\\\\n2025-12-09T08:45:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:45:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-27hfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:15Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.374207 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v58s4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f68306-ac39-4d61-8c27-12d69cc49a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhg5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhg5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v58s4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:15Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.387981 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d3d4420-fb67-443a-8b97-195c4cf223e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7aff8b9b062294cc403e12f60e4e694e626aa9780ae652380afc5d5ebeb25e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ce8749c52af3d1b2b156b9adce27a80b026abdb2dc22308199f1e822ec9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77efb0cd063e6d566f75078ce7a1f2cfd31508d5024a101ebcb3aa3c609aaae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac36b7f7721d7c5cdcb9e2254974dab062cca2723d4e723582061ed2696e04c\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:15Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.412913 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"550af6de-acd3-45b7-be48-63afa30356f4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b1b4fc4a2ac23c92b18ce6bd588fe1c0e5e7fead1592aa7795d2f2ad2071507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0478a8fa00a16c9f91091771b9e219317b593723cf90ae2f8c5f7335bd88c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a0478a8fa00a16c9f91091771b9e219317b593723cf90ae2f8c5f7335bd88c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:15Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.423358 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.423405 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.423419 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.423471 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.423486 4786 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:15Z","lastTransitionTime":"2025-12-09T08:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.432288 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:15Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.446674 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da17081a769df0b66be89013cce4c70d442cfd51b32fdb623bf94052cbcad5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-09T08:45:15Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.458904 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:15Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.475377 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c502d4-7f9e-4d39-a197-fa70dc4a56d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2d2cb4aa779f69cd01c7af8dd377a93fa75bc95ded7acd32a891de283722a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de20e796c231cbc6d525978434351e46d932f7d9
9236a97c078b8147ce5dabd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86k5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:15Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.511255 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbwb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://379f087d607d19a0cb71481b2a7236d2d4769898f612849793c7d0d0f8709e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:21Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbb1
ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbb1ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ec37a618991f1e7e06c04c95db8c40ac18b451bcb08ba568bad540481a740c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ec37a618991f1e7e06c04c95db8c40ac18b451bcb08ba568bad540481a740c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c10eb59172a342201e5c0fae9b836d296608057f6613791cab920df8aaa5850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c10eb59172a342201e5c0fae9b836d296608057f6613791cab920df8aaa5850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbwb7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:15Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.526683 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.526726 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.526739 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.526758 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.526770 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:15Z","lastTransitionTime":"2025-12-09T08:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.531473 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4192277cec4bacdf5b03306055ddad9476d291a086f54f61e8e84690c81bb459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:15Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.542716 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-prw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f37b1b5c-1cdc-4a08-9ea3-03dad00b5797\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d772471b770813d3613512f5748b5bbd05c06733053b007756c12b0bc41b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt6g4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-prw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:15Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.629728 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.629785 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.629800 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.629821 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.629835 4786 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:15Z","lastTransitionTime":"2025-12-09T08:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.732361 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.732408 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.732476 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.732502 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.732516 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:15Z","lastTransitionTime":"2025-12-09T08:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.835290 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.835360 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.835383 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.835414 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.835485 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:15Z","lastTransitionTime":"2025-12-09T08:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.939317 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.939378 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.939402 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.939470 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:15 crc kubenswrapper[4786]: I1209 08:45:15.939493 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:15Z","lastTransitionTime":"2025-12-09T08:45:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:16 crc kubenswrapper[4786]: I1209 08:45:16.042599 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:16 crc kubenswrapper[4786]: I1209 08:45:16.042646 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:16 crc kubenswrapper[4786]: I1209 08:45:16.042659 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:16 crc kubenswrapper[4786]: I1209 08:45:16.042677 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:16 crc kubenswrapper[4786]: I1209 08:45:16.042688 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:16Z","lastTransitionTime":"2025-12-09T08:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:16 crc kubenswrapper[4786]: I1209 08:45:16.145698 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:16 crc kubenswrapper[4786]: I1209 08:45:16.145744 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:16 crc kubenswrapper[4786]: I1209 08:45:16.145756 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:16 crc kubenswrapper[4786]: I1209 08:45:16.145773 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:16 crc kubenswrapper[4786]: I1209 08:45:16.145787 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:16Z","lastTransitionTime":"2025-12-09T08:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:16 crc kubenswrapper[4786]: I1209 08:45:16.192036 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:45:16 crc kubenswrapper[4786]: E1209 08:45:16.192221 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 08:45:16 crc kubenswrapper[4786]: I1209 08:45:16.248619 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:16 crc kubenswrapper[4786]: I1209 08:45:16.248658 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:16 crc kubenswrapper[4786]: I1209 08:45:16.248671 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:16 crc kubenswrapper[4786]: I1209 08:45:16.248691 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:16 crc kubenswrapper[4786]: I1209 08:45:16.248704 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:16Z","lastTransitionTime":"2025-12-09T08:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:16 crc kubenswrapper[4786]: I1209 08:45:16.350816 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:16 crc kubenswrapper[4786]: I1209 08:45:16.350860 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:16 crc kubenswrapper[4786]: I1209 08:45:16.350868 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:16 crc kubenswrapper[4786]: I1209 08:45:16.350883 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:16 crc kubenswrapper[4786]: I1209 08:45:16.350896 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:16Z","lastTransitionTime":"2025-12-09T08:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:16 crc kubenswrapper[4786]: I1209 08:45:16.453639 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:16 crc kubenswrapper[4786]: I1209 08:45:16.453680 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:16 crc kubenswrapper[4786]: I1209 08:45:16.453690 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:16 crc kubenswrapper[4786]: I1209 08:45:16.453705 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:16 crc kubenswrapper[4786]: I1209 08:45:16.453714 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:16Z","lastTransitionTime":"2025-12-09T08:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:16 crc kubenswrapper[4786]: I1209 08:45:16.556695 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:16 crc kubenswrapper[4786]: I1209 08:45:16.556749 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:16 crc kubenswrapper[4786]: I1209 08:45:16.556764 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:16 crc kubenswrapper[4786]: I1209 08:45:16.556784 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:16 crc kubenswrapper[4786]: I1209 08:45:16.556799 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:16Z","lastTransitionTime":"2025-12-09T08:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:16 crc kubenswrapper[4786]: I1209 08:45:16.659591 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:16 crc kubenswrapper[4786]: I1209 08:45:16.659628 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:16 crc kubenswrapper[4786]: I1209 08:45:16.659637 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:16 crc kubenswrapper[4786]: I1209 08:45:16.659679 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:16 crc kubenswrapper[4786]: I1209 08:45:16.659690 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:16Z","lastTransitionTime":"2025-12-09T08:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:16 crc kubenswrapper[4786]: I1209 08:45:16.762367 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:16 crc kubenswrapper[4786]: I1209 08:45:16.762405 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:16 crc kubenswrapper[4786]: I1209 08:45:16.762414 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:16 crc kubenswrapper[4786]: I1209 08:45:16.762445 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:16 crc kubenswrapper[4786]: I1209 08:45:16.762453 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:16Z","lastTransitionTime":"2025-12-09T08:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:16 crc kubenswrapper[4786]: I1209 08:45:16.865176 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:16 crc kubenswrapper[4786]: I1209 08:45:16.865223 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:16 crc kubenswrapper[4786]: I1209 08:45:16.865235 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:16 crc kubenswrapper[4786]: I1209 08:45:16.865257 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:16 crc kubenswrapper[4786]: I1209 08:45:16.865271 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:16Z","lastTransitionTime":"2025-12-09T08:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:16 crc kubenswrapper[4786]: I1209 08:45:16.970201 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:16 crc kubenswrapper[4786]: I1209 08:45:16.970284 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:16 crc kubenswrapper[4786]: I1209 08:45:16.970322 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:16 crc kubenswrapper[4786]: I1209 08:45:16.970356 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:16 crc kubenswrapper[4786]: I1209 08:45:16.970383 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:16Z","lastTransitionTime":"2025-12-09T08:45:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:17 crc kubenswrapper[4786]: I1209 08:45:17.074034 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:17 crc kubenswrapper[4786]: I1209 08:45:17.074119 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:17 crc kubenswrapper[4786]: I1209 08:45:17.074178 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:17 crc kubenswrapper[4786]: I1209 08:45:17.074208 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:17 crc kubenswrapper[4786]: I1209 08:45:17.074233 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:17Z","lastTransitionTime":"2025-12-09T08:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:17 crc kubenswrapper[4786]: I1209 08:45:17.176348 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:17 crc kubenswrapper[4786]: I1209 08:45:17.176383 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:17 crc kubenswrapper[4786]: I1209 08:45:17.176393 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:17 crc kubenswrapper[4786]: I1209 08:45:17.176408 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:17 crc kubenswrapper[4786]: I1209 08:45:17.176422 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:17Z","lastTransitionTime":"2025-12-09T08:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:17 crc kubenswrapper[4786]: I1209 08:45:17.188180 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v58s4" Dec 09 08:45:17 crc kubenswrapper[4786]: I1209 08:45:17.188297 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 08:45:17 crc kubenswrapper[4786]: I1209 08:45:17.188211 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 08:45:17 crc kubenswrapper[4786]: E1209 08:45:17.188578 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v58s4" podUID="e6f68306-ac39-4d61-8c27-12d69cc49a4f" Dec 09 08:45:17 crc kubenswrapper[4786]: E1209 08:45:17.188902 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 08:45:17 crc kubenswrapper[4786]: E1209 08:45:17.188982 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 08:45:17 crc kubenswrapper[4786]: I1209 08:45:17.190221 4786 scope.go:117] "RemoveContainer" containerID="dcc312d8c35d19e4c5637ba885eb19cce98bd3a39564a6370d26e487c72a6785" Dec 09 08:45:17 crc kubenswrapper[4786]: I1209 08:45:17.279182 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:17 crc kubenswrapper[4786]: I1209 08:45:17.279475 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:17 crc kubenswrapper[4786]: I1209 08:45:17.279485 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:17 crc kubenswrapper[4786]: I1209 08:45:17.279499 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:17 crc kubenswrapper[4786]: I1209 08:45:17.279511 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:17Z","lastTransitionTime":"2025-12-09T08:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:17 crc kubenswrapper[4786]: I1209 08:45:17.382239 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:17 crc kubenswrapper[4786]: I1209 08:45:17.382312 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:17 crc kubenswrapper[4786]: I1209 08:45:17.382334 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:17 crc kubenswrapper[4786]: I1209 08:45:17.382363 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:17 crc kubenswrapper[4786]: I1209 08:45:17.382385 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:17Z","lastTransitionTime":"2025-12-09T08:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:17 crc kubenswrapper[4786]: I1209 08:45:17.486301 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:17 crc kubenswrapper[4786]: I1209 08:45:17.486339 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:17 crc kubenswrapper[4786]: I1209 08:45:17.486349 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:17 crc kubenswrapper[4786]: I1209 08:45:17.486367 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:17 crc kubenswrapper[4786]: I1209 08:45:17.486377 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:17Z","lastTransitionTime":"2025-12-09T08:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:17 crc kubenswrapper[4786]: I1209 08:45:17.589627 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:17 crc kubenswrapper[4786]: I1209 08:45:17.589677 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:17 crc kubenswrapper[4786]: I1209 08:45:17.589689 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:17 crc kubenswrapper[4786]: I1209 08:45:17.589712 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:17 crc kubenswrapper[4786]: I1209 08:45:17.589727 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:17Z","lastTransitionTime":"2025-12-09T08:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:17 crc kubenswrapper[4786]: I1209 08:45:17.700828 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:17 crc kubenswrapper[4786]: I1209 08:45:17.700897 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:17 crc kubenswrapper[4786]: I1209 08:45:17.700919 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:17 crc kubenswrapper[4786]: I1209 08:45:17.700946 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:17 crc kubenswrapper[4786]: I1209 08:45:17.700965 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:17Z","lastTransitionTime":"2025-12-09T08:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:17 crc kubenswrapper[4786]: I1209 08:45:17.804367 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:17 crc kubenswrapper[4786]: I1209 08:45:17.804480 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:17 crc kubenswrapper[4786]: I1209 08:45:17.804509 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:17 crc kubenswrapper[4786]: I1209 08:45:17.804541 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:17 crc kubenswrapper[4786]: I1209 08:45:17.804564 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:17Z","lastTransitionTime":"2025-12-09T08:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:17 crc kubenswrapper[4786]: I1209 08:45:17.926836 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:17 crc kubenswrapper[4786]: I1209 08:45:17.926873 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:17 crc kubenswrapper[4786]: I1209 08:45:17.926885 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:17 crc kubenswrapper[4786]: I1209 08:45:17.926901 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:17 crc kubenswrapper[4786]: I1209 08:45:17.926912 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:17Z","lastTransitionTime":"2025-12-09T08:45:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.029854 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.029888 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.029898 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.029912 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.029921 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:18Z","lastTransitionTime":"2025-12-09T08:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.034021 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7sr4q_c8ebe4be-af09-4f22-9dee-af5f7d34bccf/ovnkube-controller/2.log" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.037386 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" event={"ID":"c8ebe4be-af09-4f22-9dee-af5f7d34bccf","Type":"ContainerStarted","Data":"550395f05c51e2fe4115f3e461577ebaab689ef68502025a76ca1bbeef9d6c2e"} Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.037855 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.050930 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c502d4-7f9e-4d39-a197-fa70dc4a56d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2d2cb4a
a779f69cd01c7af8dd377a93fa75bc95ded7acd32a891de283722a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de20e796c231cbc6d525978434351e46d932f7d99236a97c078b8147ce5dabd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86k5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:18Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.066209 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbwb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://379f087d607d19a0cb71481b2a7236d2d4769898f612849793c7d0d0f8709e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b
26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"image\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-09T08:44:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbb1ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbb1ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ec37a618991f1e7e06c04c95db8c40ac18b451bcb08ba568ba
d540481a740c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ec37a618991f1e7e06c04c95db8c40ac18b451bcb08ba568bad540481a740c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c10eb59172a342201e5c0fae9b836d296608057f6613791cab920df8aaa5850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c10eb59172a342201e5c0fae9b836d296608057f6613791cab920df8aaa5850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:
44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbwb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:18Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.089537 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d3d4420-fb67-443a-8b97-195c4cf223e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7aff8b9b062294cc403e12f60e4e694e626aa9780ae652380afc5d5ebeb25e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ce8749c52af3d1b2b156b9adce27a80b026abdb2dc22308199f1e822ec9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77efb0cd063e6d566f75078ce7a1f2cfd31508d5024a101ebcb3aa3c609aaae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac36b7f7721d7c5cdcb9e2254974dab062cca2723d4e723582061ed2696e04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:18Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.103613 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"550af6de-acd3-45b7-be48-63afa30356f4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b1b4fc4a2ac23c92b18ce6bd588fe1c0e5e7fead1592aa7795d2f2ad2071507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0478a8fa00a16c9f91091771b9e219317b593723cf90ae2f8c5f7335bd88c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a0478a8fa00a16c9f91091771b9e219317b593723cf90ae2f8c5f7335bd88c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:18Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.132505 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.132552 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.132563 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.132578 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.132587 4786 setters.go:603] "Node 
became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:18Z","lastTransitionTime":"2025-12-09T08:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.136545 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:18Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.146941 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da17081a769df0b66be89013cce4c70d442cfd51b32fdb623bf94052cbcad5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-09T08:45:18Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.157872 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:18Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.173911 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4192277cec4bacdf5b03306055ddad9476d291a086f54f61e8e84690c81bb459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:18Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.188072 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:45:18 crc kubenswrapper[4786]: E1209 08:45:18.188269 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.190268 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-prw2c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f37b1b5c-1cdc-4a08-9ea3-03dad00b5797\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d772471b770813d3613512f5748b5bbd05c06733053b007756c12b0bc41b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt6g4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-prw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:18Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.203475 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwjmr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373c7fdc-f660-4323-a503-3e7a0dedb865\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6af8039885b93a07e1e521a89a1d240f914eb77cba9bb6db24d7ecdb1ba3954d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnjzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de62d60183229189c692a0ff01db0e36332ecde2d2711846121c899befbbde08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnjzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwjmr\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:18Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.218245 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd790c5e-461b-4682-88a8-e915bb3558fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56160ed128040dde583c3142616d9af9180427b9e32520d6330fd166e9c4207b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\
\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57fe092832b333e8fa6389429ad3fd571c3e0162f619fdd69efcb0b84e88bac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b3741b79ca6fb3fc12acaa290b9d7accb765f36a88f0b8fb6e6362c6596a58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f6f4abb61df9828cb9bec57c2f2fc4709a6cf12afc72467af5466723a4335f3\\\",\\\"image\
\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T08:44:14Z\\\",\\\"message\\\":\\\"file observer\\\\nW1209 08:44:13.973493 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 08:44:13.973680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 08:44:13.974644 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-881789052/tls.crt::/tmp/serving-cert-881789052/tls.key\\\\\\\"\\\\nI1209 08:44:14.287293 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 08:44:14.289941 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 08:44:14.289961 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 08:44:14.289980 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 08:44:14.289985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 08:44:14.295151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 08:44:14.295345 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1209 08:44:14.295166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1209 
08:44:14.295383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 08:44:14.295466 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 08:44:14.295492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 08:44:14.295497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 08:44:14.295501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1209 08:44:14.297957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a164888526a3ab84fbd409540bc54403fadd586a0583e323d5ca1b10c81cbfc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281
bcb4882cec32341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:18Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.228458 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44efb750-799f-44e4-831e-641b81c2f427\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9e51326999eccaabe872d93f35025f76fe1b7fa116ee0e903b008aabc3f1c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99f4b7e7d6c283d8d774a43140f27cf0418c5922dd1a11fe8d090d4d3de10661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://868a90cf80a3cef69e48a0e930c1fb742df308553d072e9ca12fec310bdce1aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1848f8b358c964159350bf72a27a84316dbc4051b5164004527770877f4e280c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://1848f8b358c964159350bf72a27a84316dbc4051b5164004527770877f4e280c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:18Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.234630 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.234673 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.234684 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.234699 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.234710 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:18Z","lastTransitionTime":"2025-12-09T08:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.247106 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e5ed48f-0d6b-40f5-b3f7-03a423521b9a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae9f3bf4166653ae69b3f01c8cb49b5a8eab5f682cb50f4e8c62973615693822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49c5c23fc6ac0350de2fc1e2cdd01a326ff50bf4bda4a326e99abf9415ff82c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://941d38be9a1314a3460bc00d016f09a53b7634ce19d321301b9bbc8943d94b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74b17422905bc82f40afb7f46066985ffa94d33cba16d2fd3328bd6aa0df365b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29040d6b65393bf5f762155aba99cda093a423961bd73adb30abccbcbd2fd105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663914cd797e1431ab14d696de18d8098d1ac35f9162f36dc5e6a743ba949a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663914cd797e1431ab14d696de18d8098d1ac35f9162f36dc5e6a743ba949a44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ac2f54090b4b0620217d2c5b73a031a6901b4df7ed6291bf82b3b35c82078f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28ac2f54090b4b0620217d2c5b73a031a6901b4df7ed6291bf82b3b35c82078f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://42191f7c1da1bf0b03474758a4875d12c6db6aa54fb455fcb92e00957d1923e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42191f7c1da1bf0b03474758a4875d12c6db6aa54fb455fcb92e00957d1923e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:18Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.258089 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mq8pp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a680f02-0367-4814-9c73-aa5959bf952f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c30204fb4323c7780efec70dabc9074bcda3d7a20c81af161920f5c74075b4b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98fg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mq8pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:18Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.274144 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://550395f05c51e2fe4115f3e461577ebaab689ef68502025a76ca1bbeef9d6c2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dcc312d8c35d19e4c5637ba885eb19cce98bd3a39564a6370d26e487c72a6785\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T08:44:51Z\\\",\\\"message\\\":\\\"rver-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc006d6f1fb \\\\u003cnil\\\\u003e}] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: kube-apiserver-operator,},ClusterIP:10.217.5.109,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.109],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1209 08:44:51.101360 6732 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, 
handle\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:45:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7sr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:18Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.285072 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:18Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.296847 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50593d0d53a0888b66905e18d3ef9ae44e67a9d290aad760710060fb3ed14393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4ad85cbd79a4967d59f069ebadb27a1cd79229d287c2f1c9955396c71a1506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:18Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.307673 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-27hfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0a865e2-8504-473d-a23f-fc682d053a9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93a181a8bffa8c0699688d35605838622cbb0c39dc19d52a163484d5fddc2406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e7465beaa6a3df0b0ae7338d3035b58b92078b8d0b9646f5b59eccb66b4a505\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T08:45:07Z\\\",\\\"message\\\":\\\"2025-12-09T08:44:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_826ae26f-395b-43bf-ae1a-4535a65bdfb4\\\\n2025-12-09T08:44:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_826ae26f-395b-43bf-ae1a-4535a65bdfb4 to /host/opt/cni/bin/\\\\n2025-12-09T08:44:22Z [verbose] multus-daemon started\\\\n2025-12-09T08:44:22Z [verbose] 
Readiness Indicator file check\\\\n2025-12-09T08:45:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:45:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-27hfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:18Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.317703 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v58s4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f68306-ac39-4d61-8c27-12d69cc49a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhg5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhg5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v58s4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:18Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.336838 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.336878 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.336887 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.336903 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.336913 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:18Z","lastTransitionTime":"2025-12-09T08:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.439983 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.440032 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.440041 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.440057 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.440072 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:18Z","lastTransitionTime":"2025-12-09T08:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.543003 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.543058 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.543070 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.543096 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.543113 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:18Z","lastTransitionTime":"2025-12-09T08:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.645965 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.646007 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.646018 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.646036 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.646068 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:18Z","lastTransitionTime":"2025-12-09T08:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.748665 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.748735 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.748745 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.748759 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.748768 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:18Z","lastTransitionTime":"2025-12-09T08:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.855902 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.855954 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.855968 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.855984 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.855993 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:18Z","lastTransitionTime":"2025-12-09T08:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.958618 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.958673 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.958698 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.958720 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:18 crc kubenswrapper[4786]: I1209 08:45:18.958735 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:18Z","lastTransitionTime":"2025-12-09T08:45:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:19 crc kubenswrapper[4786]: I1209 08:45:19.061369 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:19 crc kubenswrapper[4786]: I1209 08:45:19.061450 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:19 crc kubenswrapper[4786]: I1209 08:45:19.061471 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:19 crc kubenswrapper[4786]: I1209 08:45:19.061497 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:19 crc kubenswrapper[4786]: I1209 08:45:19.061512 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:19Z","lastTransitionTime":"2025-12-09T08:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:19 crc kubenswrapper[4786]: I1209 08:45:19.164698 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:19 crc kubenswrapper[4786]: I1209 08:45:19.164743 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:19 crc kubenswrapper[4786]: I1209 08:45:19.164758 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:19 crc kubenswrapper[4786]: I1209 08:45:19.164779 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:19 crc kubenswrapper[4786]: I1209 08:45:19.164794 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:19Z","lastTransitionTime":"2025-12-09T08:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:19 crc kubenswrapper[4786]: I1209 08:45:19.187588 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v58s4" Dec 09 08:45:19 crc kubenswrapper[4786]: I1209 08:45:19.187685 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 08:45:19 crc kubenswrapper[4786]: I1209 08:45:19.187591 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 08:45:19 crc kubenswrapper[4786]: E1209 08:45:19.187810 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v58s4" podUID="e6f68306-ac39-4d61-8c27-12d69cc49a4f" Dec 09 08:45:19 crc kubenswrapper[4786]: E1209 08:45:19.187902 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 08:45:19 crc kubenswrapper[4786]: E1209 08:45:19.188023 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 08:45:19 crc kubenswrapper[4786]: I1209 08:45:19.267853 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:19 crc kubenswrapper[4786]: I1209 08:45:19.267921 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:19 crc kubenswrapper[4786]: I1209 08:45:19.267942 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:19 crc kubenswrapper[4786]: I1209 08:45:19.267969 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:19 crc kubenswrapper[4786]: I1209 08:45:19.267990 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:19Z","lastTransitionTime":"2025-12-09T08:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:19 crc kubenswrapper[4786]: I1209 08:45:19.371121 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:19 crc kubenswrapper[4786]: I1209 08:45:19.371528 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:19 crc kubenswrapper[4786]: I1209 08:45:19.371543 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:19 crc kubenswrapper[4786]: I1209 08:45:19.371561 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:19 crc kubenswrapper[4786]: I1209 08:45:19.371572 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:19Z","lastTransitionTime":"2025-12-09T08:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:19 crc kubenswrapper[4786]: I1209 08:45:19.474094 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:19 crc kubenswrapper[4786]: I1209 08:45:19.474159 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:19 crc kubenswrapper[4786]: I1209 08:45:19.474181 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:19 crc kubenswrapper[4786]: I1209 08:45:19.474205 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:19 crc kubenswrapper[4786]: I1209 08:45:19.474220 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:19Z","lastTransitionTime":"2025-12-09T08:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:19 crc kubenswrapper[4786]: I1209 08:45:19.576752 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:19 crc kubenswrapper[4786]: I1209 08:45:19.576800 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:19 crc kubenswrapper[4786]: I1209 08:45:19.576809 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:19 crc kubenswrapper[4786]: I1209 08:45:19.576825 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:19 crc kubenswrapper[4786]: I1209 08:45:19.576836 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:19Z","lastTransitionTime":"2025-12-09T08:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:19 crc kubenswrapper[4786]: I1209 08:45:19.679519 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:19 crc kubenswrapper[4786]: I1209 08:45:19.679585 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:19 crc kubenswrapper[4786]: I1209 08:45:19.679609 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:19 crc kubenswrapper[4786]: I1209 08:45:19.679644 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:19 crc kubenswrapper[4786]: I1209 08:45:19.679669 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:19Z","lastTransitionTime":"2025-12-09T08:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:19 crc kubenswrapper[4786]: I1209 08:45:19.783233 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:19 crc kubenswrapper[4786]: I1209 08:45:19.783394 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:19 crc kubenswrapper[4786]: I1209 08:45:19.783463 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:19 crc kubenswrapper[4786]: I1209 08:45:19.783509 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:19 crc kubenswrapper[4786]: I1209 08:45:19.783532 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:19Z","lastTransitionTime":"2025-12-09T08:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:19 crc kubenswrapper[4786]: I1209 08:45:19.886318 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:19 crc kubenswrapper[4786]: I1209 08:45:19.886664 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:19 crc kubenswrapper[4786]: I1209 08:45:19.886730 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:19 crc kubenswrapper[4786]: I1209 08:45:19.886793 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:19 crc kubenswrapper[4786]: I1209 08:45:19.886849 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:19Z","lastTransitionTime":"2025-12-09T08:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:19 crc kubenswrapper[4786]: I1209 08:45:19.990201 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:19 crc kubenswrapper[4786]: I1209 08:45:19.990281 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:19 crc kubenswrapper[4786]: I1209 08:45:19.990303 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:19 crc kubenswrapper[4786]: I1209 08:45:19.990337 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:19 crc kubenswrapper[4786]: I1209 08:45:19.990359 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:19Z","lastTransitionTime":"2025-12-09T08:45:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.049656 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7sr4q_c8ebe4be-af09-4f22-9dee-af5f7d34bccf/ovnkube-controller/3.log" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.050747 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7sr4q_c8ebe4be-af09-4f22-9dee-af5f7d34bccf/ovnkube-controller/2.log" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.055076 4786 generic.go:334] "Generic (PLEG): container finished" podID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" containerID="550395f05c51e2fe4115f3e461577ebaab689ef68502025a76ca1bbeef9d6c2e" exitCode=1 Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.055142 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" event={"ID":"c8ebe4be-af09-4f22-9dee-af5f7d34bccf","Type":"ContainerDied","Data":"550395f05c51e2fe4115f3e461577ebaab689ef68502025a76ca1bbeef9d6c2e"} Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.055242 4786 scope.go:117] "RemoveContainer" containerID="dcc312d8c35d19e4c5637ba885eb19cce98bd3a39564a6370d26e487c72a6785" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.056709 4786 scope.go:117] "RemoveContainer" containerID="550395f05c51e2fe4115f3e461577ebaab689ef68502025a76ca1bbeef9d6c2e" Dec 09 08:45:20 crc kubenswrapper[4786]: E1209 08:45:20.057027 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-7sr4q_openshift-ovn-kubernetes(c8ebe4be-af09-4f22-9dee-af5f7d34bccf)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" podUID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.076075 4786 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44efb750-799f-44e4-831e-641b81c2f427\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9e51326999eccaabe872d93f35025f76fe1b7fa116ee0e903b008aabc3f1c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99f4b7e7d6c283d8d774a43140f27cf0418c5922dd1a11fe8d090d4d3de10661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a938
0066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://868a90cf80a3cef69e48a0e930c1fb742df308553d072e9ca12fec310bdce1aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1848f8b358c964159350bf72a27a84316dbc4051b5164004527770877f4e280c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1848f8b358c964159350bf72a27a84316dbc4051b5164004527770877f4e280c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:20Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.094023 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.094096 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.094116 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.094141 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.094159 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:20Z","lastTransitionTime":"2025-12-09T08:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no 
CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.109749 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e5ed48f-0d6b-40f5-b3f7-03a423521b9a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae9f3bf4166653ae69b3f01c8cb49b5a8eab5f682cb50f4e8c62973615693822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-ce
rts\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49c5c23fc6ac0350de2fc1e2cdd01a326ff50bf4bda4a326e99abf9415ff82c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://941d38be9a1314a3460bc00d016f09a53b7634ce19d321301b9bbc8943d94b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74b17422905bc82f40afb7f46066985ffa94d33cba16d2fd3328bd6aa0df365b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",
\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29040d6b65393bf5f762155aba99cda093a423961bd73adb30abccbcbd2fd105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663914cd797e1431ab14d696de18d8098d1ac35f9162f36dc5e6a743ba949a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663914cd797e1431ab14d696de18d8098d1ac35f9162f36dc5e6a743ba949a44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ac2f54090b4b0620217d2c5b73a031a6901b4df7ed6291bf82b3b35c82078f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28ac2f54090b4b0620217d2c5b73a031a6901b4df7ed6291bf82b3b35c82078f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://42191f7c1da1bf0b03474758a4875d12c6db6aa54fb455fcb92e00957d1923e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42191f7c1da1bf0b03474758a4875d12c6db6aa54fb455fcb92e00957d1923e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:58Z\\\",\\
\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:20Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.125551 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mq8pp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a680f02-0367-4814-9c73-aa5959bf952f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c30204fb4323c7780efec70dabc9074bcda3d7a20c81af161920f5c74075b4b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98fg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mq8pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:20Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.148739 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://550395f05c51e2fe4115f3e461577ebaab689ef68502025a76ca1bbeef9d6c2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dcc312d8c35d19e4c5637ba885eb19cce98bd3a39564a6370d26e487c72a6785\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T08:44:51Z\\\",\\\"message\\\":\\\"rver-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc006d6f1fb \\\\u003cnil\\\\u003e}] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: kube-apiserver-operator,},ClusterIP:10.217.5.109,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.109],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1209 08:44:51.101360 6732 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handle\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://550395f05c51e2fe4115f3e461577ebaab689ef68502025a76ca1bbeef9d6c2e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T08:45:19Z\\\",\\\"message\\\":\\\"t/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI1209 08:45:18.767580 7145 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 08:45:18.768547 7145 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 08:45:18.768424 7145 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 08:45:18.771070 7145 
shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1209 08:45:18.771097 7145 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1209 08:45:18.772648 7145 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1209 08:45:18.772755 7145 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 08:45:18.772823 7145 factory.go:656] Stopping watch factory\\\\nI1209 08:45:18.772852 7145 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 08:45:18.772836 7145 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1209 08:45:18.772908 7145 ovnkube.go:599] Stopped ovnkube\\\\nI1209 08:45:18.772990 7145 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1209 08:45:18.773192 7145 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:45:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"
mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7sr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:20Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.162952 4786 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwjmr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373c7fdc-f660-4323-a503-3e7a0dedb865\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6af8039885b93a07e1e521a89a1d240f914eb77cba9bb6db24d7ecdb1ba3954d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnjzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de62d60183229189c692a0ff01db0e36332ecde2d2711846121c899befbbde08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnjzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwjmr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:20Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.164653 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.164706 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 
08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.164718 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.164736 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.164784 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:20Z","lastTransitionTime":"2025-12-09T08:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.181945 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd790c5e-461b-4682-88a8-e915bb3558fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56160ed128040dde583c3142616d9af9180427b9e32520d6330fd166e9c4207b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57fe092832b333e8fa6389429ad3fd571c3e0162f619fdd69efcb0b84e88bac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b3741b79ca6fb3fc12acaa290b9d7accb765f36a88f0b8fb6e6362c6596a58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f6f4abb61df9828cb9bec57c2f2fc4709a6cf12afc72467af5466723a4335f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T08:44:14Z\\\"
,\\\"message\\\":\\\"file observer\\\\nW1209 08:44:13.973493 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 08:44:13.973680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 08:44:13.974644 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-881789052/tls.crt::/tmp/serving-cert-881789052/tls.key\\\\\\\"\\\\nI1209 08:44:14.287293 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 08:44:14.289941 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 08:44:14.289961 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 08:44:14.289980 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 08:44:14.289985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 08:44:14.295151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 08:44:14.295345 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1209 08:44:14.295166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1209 08:44:14.295383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 08:44:14.295466 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 08:44:14.295492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 08:44:14.295497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 08:44:14.295501 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1209 08:44:14.297957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a164888526a3ab84fbd409540bc54403fadd586a0583e323d5ca1b10c81cbfc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc97
37159190cf3281bcb4882cec32341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:20Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.187332 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:45:20 crc kubenswrapper[4786]: E1209 08:45:20.187849 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 08:45:20 crc kubenswrapper[4786]: E1209 08:45:20.188144 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"20132b6b-eea8-47f7-95fa-f658d05fe362\\\",\\\"systemUUID\\\":\\\"d02a19a2-cd20-41fc-84e9-65362968df1f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:20Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.193052 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.193080 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.193090 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.193104 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.193114 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:20Z","lastTransitionTime":"2025-12-09T08:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.204794 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50593d0d53a0888b66905e18d3ef9ae44e67a9d290aad760710060fb3ed14393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4ad85cbd79a4967d59f069ebadb27a1cd79229d287c2f1c9955396c71a1506\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:20Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:20 crc kubenswrapper[4786]: E1209 08:45:20.206804 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"20132b6b-eea8-47f7-95fa-f658d05fe362\\\",\\\"systemUUID\\\":\\\"d02a19a2-cd20-41fc-84e9-65362968df1f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:20Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.210847 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.210897 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.210906 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.210926 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.210937 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:20Z","lastTransitionTime":"2025-12-09T08:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.220099 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-27hfj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0a865e2-8504-473d-a23f-fc682d053a9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93a181a8bffa8c0699688d35605838622cbb0c39dc19d52a163484d5fddc2406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e7465beaa6a3df0b0ae7338d3035b58b92078b8d0b9646f5b59eccb66b4a505\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T08:45:07Z\\\",\\\"message\\\":\\\"2025-12-09T08:44:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_826ae26f-395b-43bf-ae1a-4535a65bdfb4\\\\n2025-12-09T08:44:22+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_826ae26f-395b-43bf-ae1a-4535a65bdfb4 to /host/opt/cni/bin/\\\\n2025-12-09T08:44:22Z [verbose] multus-daemon started\\\\n2025-12-09T08:44:22Z [verbose] Readiness Indicator file check\\\\n2025-12-09T08:45:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:45:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-27hfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:20Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:20 crc kubenswrapper[4786]: E1209 08:45:20.227618 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"20132b6b-eea8-47f7-95fa-f658d05fe362\\\",\\\"systemUUID\\\":\\\"d02a19a2-cd20-41fc-84e9-65362968df1f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:20Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.231348 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v58s4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f68306-ac39-4d61-8c27-12d69cc49a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhg5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhg5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v58s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:20Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:20 crc 
kubenswrapper[4786]: I1209 08:45:20.231860 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.231913 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.231933 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.231956 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.231973 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:20Z","lastTransitionTime":"2025-12-09T08:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.246574 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:20Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:20 crc kubenswrapper[4786]: E1209 08:45:20.249268 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"20132b6b-eea8-47f7-95fa-f658d05fe362\\\",\\\"systemUUID\\\":\\\"d02a19a2-cd20-41fc-84e9-65362968df1f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:20Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.253621 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.253660 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.253676 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.253695 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.253708 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:20Z","lastTransitionTime":"2025-12-09T08:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.258672 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"550af6de-acd3-45b7-be48-63afa30356f4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b1b4fc4a2ac23c92b18ce6bd588fe1c0e5e7fead1592aa7795d2f2ad2071507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0478a8fa00a16c9f91091771b9e219317b593723cf90ae2f8c5f7335bd88c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a0478a8fa00a16c9f91091771b9e219317b593723cf90ae2f8c5f7335bd88c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:20Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:20 crc kubenswrapper[4786]: E1209 08:45:20.265617 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:20Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"20132b6b-eea8-47f7-95fa-f658d05fe362\\\",\\\"systemUUID\\\":\\\"d02a19a2-cd20-41fc-84e9-65362968df1f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:20Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:20 crc kubenswrapper[4786]: E1209 08:45:20.265956 4786 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.267634 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.267664 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.267675 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.267690 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.267703 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:20Z","lastTransitionTime":"2025-12-09T08:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.274599 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:20Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.289186 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da17081a769df0b66be89013cce4c70d442cfd51b32fdb623bf94052cbcad5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-09T08:45:20Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.304194 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:20Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.316694 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c502d4-7f9e-4d39-a197-fa70dc4a56d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2d2cb4aa779f69cd01c7af8dd377a93fa75bc95ded7acd32a891de283722a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de20e796c231cbc6d525978434351e46d932f7d9
9236a97c078b8147ce5dabd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86k5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:20Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.332852 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbwb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://379f087d607d19a0cb71481b2a7236d2d4769898f612849793c7d0d0f8709e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:21Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbb1
ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbb1ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ec37a618991f1e7e06c04c95db8c40ac18b451bcb08ba568bad540481a740c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ec37a618991f1e7e06c04c95db8c40ac18b451bcb08ba568bad540481a740c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c10eb59172a342201e5c0fae9b836d296608057f6613791cab920df8aaa5850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c10eb59172a342201e5c0fae9b836d296608057f6613791cab920df8aaa5850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbwb7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:20Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.344719 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d3d4420-fb67-443a-8b97-195c4cf223e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7aff8b9b062294cc403e12f60e4e694e626aa9780ae652380afc5d5ebeb25e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ce8749c52af3d1b2b156b9adce27a80b026abdb2dc22308199f1e822ec9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77efb0cd063e6d566f75078ce7a1f2cfd31508d5024a101ebcb3aa3c609aaae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac36b7f7721d7c5cdcb9e2254974dab062cca2723d4e723582061ed2696e04c\\\",\\\"image\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:20Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.354608 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-prw2c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f37b1b5c-1cdc-4a08-9ea3-03dad00b5797\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d772471b770813d3613512f5748b5bbd05c06733053b007756c12b0bc41b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt6g4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-prw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:20Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.365293 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4192277cec4bacdf5b03306055ddad9476d291a086f54f61e8e84690c81bb459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2
025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:20Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.370098 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.370133 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.370146 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.370165 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.370178 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:20Z","lastTransitionTime":"2025-12-09T08:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.473205 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.473259 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.473272 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.473289 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.473301 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:20Z","lastTransitionTime":"2025-12-09T08:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.576881 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.576953 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.576977 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.577012 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.577034 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:20Z","lastTransitionTime":"2025-12-09T08:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.680632 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.680719 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.680747 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.680838 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.680868 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:20Z","lastTransitionTime":"2025-12-09T08:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.783631 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.783711 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.783740 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.783777 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.783818 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:20Z","lastTransitionTime":"2025-12-09T08:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.886878 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.886919 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.886932 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.886953 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.886966 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:20Z","lastTransitionTime":"2025-12-09T08:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.990505 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.990576 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.990591 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.990616 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:20 crc kubenswrapper[4786]: I1209 08:45:20.990629 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:20Z","lastTransitionTime":"2025-12-09T08:45:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:21 crc kubenswrapper[4786]: I1209 08:45:21.060504 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7sr4q_c8ebe4be-af09-4f22-9dee-af5f7d34bccf/ovnkube-controller/3.log" Dec 09 08:45:21 crc kubenswrapper[4786]: I1209 08:45:21.089328 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:45:21 crc kubenswrapper[4786]: I1209 08:45:21.089509 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:45:21 crc kubenswrapper[4786]: I1209 08:45:21.089583 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:45:21 crc kubenswrapper[4786]: E1209 08:45:21.089732 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:25.089653933 +0000 UTC m=+150.973275369 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:45:21 crc kubenswrapper[4786]: E1209 08:45:21.089779 4786 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 08:45:21 crc kubenswrapper[4786]: E1209 08:45:21.089889 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-09 08:46:25.089866088 +0000 UTC m=+150.973487314 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 09 08:45:21 crc kubenswrapper[4786]: E1209 08:45:21.089741 4786 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 08:45:21 crc kubenswrapper[4786]: E1209 08:45:21.090127 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-12-09 08:46:25.089976561 +0000 UTC m=+150.973597797 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 09 08:45:21 crc kubenswrapper[4786]: I1209 08:45:21.093476 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:21 crc kubenswrapper[4786]: I1209 08:45:21.093543 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:21 crc kubenswrapper[4786]: I1209 08:45:21.093564 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:21 crc kubenswrapper[4786]: I1209 08:45:21.093590 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:21 crc kubenswrapper[4786]: I1209 08:45:21.093610 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:21Z","lastTransitionTime":"2025-12-09T08:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:21 crc kubenswrapper[4786]: I1209 08:45:21.188069 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 08:45:21 crc kubenswrapper[4786]: I1209 08:45:21.188069 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 08:45:21 crc kubenswrapper[4786]: I1209 08:45:21.188175 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v58s4" Dec 09 08:45:21 crc kubenswrapper[4786]: E1209 08:45:21.188338 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 08:45:21 crc kubenswrapper[4786]: E1209 08:45:21.188395 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 08:45:21 crc kubenswrapper[4786]: E1209 08:45:21.188499 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v58s4" podUID="e6f68306-ac39-4d61-8c27-12d69cc49a4f" Dec 09 08:45:21 crc kubenswrapper[4786]: I1209 08:45:21.190378 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 08:45:21 crc kubenswrapper[4786]: I1209 08:45:21.190454 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 08:45:21 crc kubenswrapper[4786]: E1209 08:45:21.190601 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 08:45:21 crc kubenswrapper[4786]: E1209 08:45:21.190611 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 09 08:45:21 crc kubenswrapper[4786]: E1209 08:45:21.190630 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 08:45:21 crc kubenswrapper[4786]: E1209 08:45:21.190640 4786 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 09 08:45:21 crc kubenswrapper[4786]: E1209 08:45:21.190644 4786 projected.go:194] 
Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 08:45:21 crc kubenswrapper[4786]: E1209 08:45:21.190655 4786 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 08:45:21 crc kubenswrapper[4786]: E1209 08:45:21.190719 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-09 08:46:25.190701603 +0000 UTC m=+151.074322839 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 08:45:21 crc kubenswrapper[4786]: E1209 08:45:21.190983 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-09 08:46:25.190733444 +0000 UTC m=+151.074354680 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 09 08:45:21 crc kubenswrapper[4786]: I1209 08:45:21.196092 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:21 crc kubenswrapper[4786]: I1209 08:45:21.196139 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:21 crc kubenswrapper[4786]: I1209 08:45:21.196165 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:21 crc kubenswrapper[4786]: I1209 08:45:21.196195 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:21 crc kubenswrapper[4786]: I1209 08:45:21.196210 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:21Z","lastTransitionTime":"2025-12-09T08:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:21 crc kubenswrapper[4786]: I1209 08:45:21.300418 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:21 crc kubenswrapper[4786]: I1209 08:45:21.300541 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:21 crc kubenswrapper[4786]: I1209 08:45:21.300556 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:21 crc kubenswrapper[4786]: I1209 08:45:21.300576 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:21 crc kubenswrapper[4786]: I1209 08:45:21.300590 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:21Z","lastTransitionTime":"2025-12-09T08:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:21 crc kubenswrapper[4786]: I1209 08:45:21.403191 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:21 crc kubenswrapper[4786]: I1209 08:45:21.403246 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:21 crc kubenswrapper[4786]: I1209 08:45:21.403259 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:21 crc kubenswrapper[4786]: I1209 08:45:21.403277 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:21 crc kubenswrapper[4786]: I1209 08:45:21.403290 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:21Z","lastTransitionTime":"2025-12-09T08:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:21 crc kubenswrapper[4786]: I1209 08:45:21.505561 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:21 crc kubenswrapper[4786]: I1209 08:45:21.505611 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:21 crc kubenswrapper[4786]: I1209 08:45:21.505626 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:21 crc kubenswrapper[4786]: I1209 08:45:21.505644 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:21 crc kubenswrapper[4786]: I1209 08:45:21.505655 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:21Z","lastTransitionTime":"2025-12-09T08:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:21 crc kubenswrapper[4786]: I1209 08:45:21.608530 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:21 crc kubenswrapper[4786]: I1209 08:45:21.608579 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:21 crc kubenswrapper[4786]: I1209 08:45:21.608592 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:21 crc kubenswrapper[4786]: I1209 08:45:21.608615 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:21 crc kubenswrapper[4786]: I1209 08:45:21.608629 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:21Z","lastTransitionTime":"2025-12-09T08:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:21 crc kubenswrapper[4786]: I1209 08:45:21.711449 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:21 crc kubenswrapper[4786]: I1209 08:45:21.711492 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:21 crc kubenswrapper[4786]: I1209 08:45:21.711501 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:21 crc kubenswrapper[4786]: I1209 08:45:21.711515 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:21 crc kubenswrapper[4786]: I1209 08:45:21.711525 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:21Z","lastTransitionTime":"2025-12-09T08:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:21 crc kubenswrapper[4786]: I1209 08:45:21.814972 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:21 crc kubenswrapper[4786]: I1209 08:45:21.815022 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:21 crc kubenswrapper[4786]: I1209 08:45:21.815035 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:21 crc kubenswrapper[4786]: I1209 08:45:21.815052 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:21 crc kubenswrapper[4786]: I1209 08:45:21.815065 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:21Z","lastTransitionTime":"2025-12-09T08:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:21 crc kubenswrapper[4786]: I1209 08:45:21.917562 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:21 crc kubenswrapper[4786]: I1209 08:45:21.917612 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:21 crc kubenswrapper[4786]: I1209 08:45:21.917628 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:21 crc kubenswrapper[4786]: I1209 08:45:21.917645 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:21 crc kubenswrapper[4786]: I1209 08:45:21.917659 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:21Z","lastTransitionTime":"2025-12-09T08:45:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:22 crc kubenswrapper[4786]: I1209 08:45:22.019946 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:22 crc kubenswrapper[4786]: I1209 08:45:22.020005 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:22 crc kubenswrapper[4786]: I1209 08:45:22.020016 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:22 crc kubenswrapper[4786]: I1209 08:45:22.020039 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:22 crc kubenswrapper[4786]: I1209 08:45:22.020063 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:22Z","lastTransitionTime":"2025-12-09T08:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:22 crc kubenswrapper[4786]: I1209 08:45:22.124533 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:22 crc kubenswrapper[4786]: I1209 08:45:22.124609 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:22 crc kubenswrapper[4786]: I1209 08:45:22.124627 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:22 crc kubenswrapper[4786]: I1209 08:45:22.124655 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:22 crc kubenswrapper[4786]: I1209 08:45:22.124677 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:22Z","lastTransitionTime":"2025-12-09T08:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:22 crc kubenswrapper[4786]: I1209 08:45:22.188208 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:45:22 crc kubenswrapper[4786]: E1209 08:45:22.188482 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 08:45:22 crc kubenswrapper[4786]: I1209 08:45:22.227966 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:22 crc kubenswrapper[4786]: I1209 08:45:22.228109 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:22 crc kubenswrapper[4786]: I1209 08:45:22.228193 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:22 crc kubenswrapper[4786]: I1209 08:45:22.228278 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:22 crc kubenswrapper[4786]: I1209 08:45:22.228310 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:22Z","lastTransitionTime":"2025-12-09T08:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:22 crc kubenswrapper[4786]: I1209 08:45:22.331139 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:22 crc kubenswrapper[4786]: I1209 08:45:22.331202 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:22 crc kubenswrapper[4786]: I1209 08:45:22.331213 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:22 crc kubenswrapper[4786]: I1209 08:45:22.331232 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:22 crc kubenswrapper[4786]: I1209 08:45:22.331245 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:22Z","lastTransitionTime":"2025-12-09T08:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:22 crc kubenswrapper[4786]: I1209 08:45:22.435521 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:22 crc kubenswrapper[4786]: I1209 08:45:22.435603 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:22 crc kubenswrapper[4786]: I1209 08:45:22.435627 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:22 crc kubenswrapper[4786]: I1209 08:45:22.435658 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:22 crc kubenswrapper[4786]: I1209 08:45:22.435682 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:22Z","lastTransitionTime":"2025-12-09T08:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:22 crc kubenswrapper[4786]: I1209 08:45:22.539648 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:22 crc kubenswrapper[4786]: I1209 08:45:22.539736 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:22 crc kubenswrapper[4786]: I1209 08:45:22.539746 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:22 crc kubenswrapper[4786]: I1209 08:45:22.539767 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:22 crc kubenswrapper[4786]: I1209 08:45:22.539781 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:22Z","lastTransitionTime":"2025-12-09T08:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:22 crc kubenswrapper[4786]: I1209 08:45:22.642966 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:22 crc kubenswrapper[4786]: I1209 08:45:22.643013 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:22 crc kubenswrapper[4786]: I1209 08:45:22.643027 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:22 crc kubenswrapper[4786]: I1209 08:45:22.643046 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:22 crc kubenswrapper[4786]: I1209 08:45:22.643062 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:22Z","lastTransitionTime":"2025-12-09T08:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:22 crc kubenswrapper[4786]: I1209 08:45:22.746497 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:22 crc kubenswrapper[4786]: I1209 08:45:22.746596 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:22 crc kubenswrapper[4786]: I1209 08:45:22.746636 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:22 crc kubenswrapper[4786]: I1209 08:45:22.746675 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:22 crc kubenswrapper[4786]: I1209 08:45:22.746706 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:22Z","lastTransitionTime":"2025-12-09T08:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:22 crc kubenswrapper[4786]: I1209 08:45:22.850380 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:22 crc kubenswrapper[4786]: I1209 08:45:22.850538 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:22 crc kubenswrapper[4786]: I1209 08:45:22.850559 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:22 crc kubenswrapper[4786]: I1209 08:45:22.850587 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:22 crc kubenswrapper[4786]: I1209 08:45:22.850607 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:22Z","lastTransitionTime":"2025-12-09T08:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:22 crc kubenswrapper[4786]: I1209 08:45:22.953501 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:22 crc kubenswrapper[4786]: I1209 08:45:22.953599 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:22 crc kubenswrapper[4786]: I1209 08:45:22.953663 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:22 crc kubenswrapper[4786]: I1209 08:45:22.953688 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:22 crc kubenswrapper[4786]: I1209 08:45:22.953700 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:22Z","lastTransitionTime":"2025-12-09T08:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:23 crc kubenswrapper[4786]: I1209 08:45:23.057397 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:23 crc kubenswrapper[4786]: I1209 08:45:23.057502 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:23 crc kubenswrapper[4786]: I1209 08:45:23.057518 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:23 crc kubenswrapper[4786]: I1209 08:45:23.057540 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:23 crc kubenswrapper[4786]: I1209 08:45:23.057553 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:23Z","lastTransitionTime":"2025-12-09T08:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:23 crc kubenswrapper[4786]: I1209 08:45:23.161165 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:23 crc kubenswrapper[4786]: I1209 08:45:23.161211 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:23 crc kubenswrapper[4786]: I1209 08:45:23.161222 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:23 crc kubenswrapper[4786]: I1209 08:45:23.161241 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:23 crc kubenswrapper[4786]: I1209 08:45:23.161253 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:23Z","lastTransitionTime":"2025-12-09T08:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:23 crc kubenswrapper[4786]: I1209 08:45:23.187328 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v58s4" Dec 09 08:45:23 crc kubenswrapper[4786]: I1209 08:45:23.187364 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 08:45:23 crc kubenswrapper[4786]: I1209 08:45:23.187533 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 08:45:23 crc kubenswrapper[4786]: E1209 08:45:23.187619 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 08:45:23 crc kubenswrapper[4786]: E1209 08:45:23.187776 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 08:45:23 crc kubenswrapper[4786]: E1209 08:45:23.187858 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v58s4" podUID="e6f68306-ac39-4d61-8c27-12d69cc49a4f" Dec 09 08:45:23 crc kubenswrapper[4786]: I1209 08:45:23.264799 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:23 crc kubenswrapper[4786]: I1209 08:45:23.264872 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:23 crc kubenswrapper[4786]: I1209 08:45:23.264887 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:23 crc kubenswrapper[4786]: I1209 08:45:23.264914 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:23 crc kubenswrapper[4786]: I1209 08:45:23.264933 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:23Z","lastTransitionTime":"2025-12-09T08:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:23 crc kubenswrapper[4786]: I1209 08:45:23.368471 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:23 crc kubenswrapper[4786]: I1209 08:45:23.368920 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:23 crc kubenswrapper[4786]: I1209 08:45:23.369005 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:23 crc kubenswrapper[4786]: I1209 08:45:23.369087 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:23 crc kubenswrapper[4786]: I1209 08:45:23.369148 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:23Z","lastTransitionTime":"2025-12-09T08:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:23 crc kubenswrapper[4786]: I1209 08:45:23.473058 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:23 crc kubenswrapper[4786]: I1209 08:45:23.473465 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:23 crc kubenswrapper[4786]: I1209 08:45:23.473567 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:23 crc kubenswrapper[4786]: I1209 08:45:23.473668 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:23 crc kubenswrapper[4786]: I1209 08:45:23.474032 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:23Z","lastTransitionTime":"2025-12-09T08:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:23 crc kubenswrapper[4786]: I1209 08:45:23.577032 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:23 crc kubenswrapper[4786]: I1209 08:45:23.577966 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:23 crc kubenswrapper[4786]: I1209 08:45:23.578114 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:23 crc kubenswrapper[4786]: I1209 08:45:23.578257 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:23 crc kubenswrapper[4786]: I1209 08:45:23.578395 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:23Z","lastTransitionTime":"2025-12-09T08:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:23 crc kubenswrapper[4786]: I1209 08:45:23.682250 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:23 crc kubenswrapper[4786]: I1209 08:45:23.682315 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:23 crc kubenswrapper[4786]: I1209 08:45:23.682332 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:23 crc kubenswrapper[4786]: I1209 08:45:23.682354 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:23 crc kubenswrapper[4786]: I1209 08:45:23.682370 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:23Z","lastTransitionTime":"2025-12-09T08:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:23 crc kubenswrapper[4786]: I1209 08:45:23.785016 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:23 crc kubenswrapper[4786]: I1209 08:45:23.785102 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:23 crc kubenswrapper[4786]: I1209 08:45:23.785114 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:23 crc kubenswrapper[4786]: I1209 08:45:23.785140 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:23 crc kubenswrapper[4786]: I1209 08:45:23.785154 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:23Z","lastTransitionTime":"2025-12-09T08:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:23 crc kubenswrapper[4786]: I1209 08:45:23.887650 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:23 crc kubenswrapper[4786]: I1209 08:45:23.887725 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:23 crc kubenswrapper[4786]: I1209 08:45:23.887744 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:23 crc kubenswrapper[4786]: I1209 08:45:23.887770 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:23 crc kubenswrapper[4786]: I1209 08:45:23.887788 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:23Z","lastTransitionTime":"2025-12-09T08:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:23 crc kubenswrapper[4786]: I1209 08:45:23.992016 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:23 crc kubenswrapper[4786]: I1209 08:45:23.992071 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:23 crc kubenswrapper[4786]: I1209 08:45:23.992085 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:23 crc kubenswrapper[4786]: I1209 08:45:23.992107 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:23 crc kubenswrapper[4786]: I1209 08:45:23.992127 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:23Z","lastTransitionTime":"2025-12-09T08:45:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:24 crc kubenswrapper[4786]: I1209 08:45:24.095519 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:24 crc kubenswrapper[4786]: I1209 08:45:24.095580 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:24 crc kubenswrapper[4786]: I1209 08:45:24.095600 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:24 crc kubenswrapper[4786]: I1209 08:45:24.095629 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:24 crc kubenswrapper[4786]: I1209 08:45:24.095649 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:24Z","lastTransitionTime":"2025-12-09T08:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:24 crc kubenswrapper[4786]: I1209 08:45:24.187285 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:45:24 crc kubenswrapper[4786]: E1209 08:45:24.187509 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 08:45:24 crc kubenswrapper[4786]: I1209 08:45:24.198333 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:24 crc kubenswrapper[4786]: I1209 08:45:24.198416 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:24 crc kubenswrapper[4786]: I1209 08:45:24.198461 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:24 crc kubenswrapper[4786]: I1209 08:45:24.198486 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:24 crc kubenswrapper[4786]: I1209 08:45:24.198502 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:24Z","lastTransitionTime":"2025-12-09T08:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:24 crc kubenswrapper[4786]: I1209 08:45:24.301168 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:24 crc kubenswrapper[4786]: I1209 08:45:24.301229 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:24 crc kubenswrapper[4786]: I1209 08:45:24.301240 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:24 crc kubenswrapper[4786]: I1209 08:45:24.301259 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:24 crc kubenswrapper[4786]: I1209 08:45:24.301270 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:24Z","lastTransitionTime":"2025-12-09T08:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:24 crc kubenswrapper[4786]: I1209 08:45:24.405274 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:24 crc kubenswrapper[4786]: I1209 08:45:24.405321 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:24 crc kubenswrapper[4786]: I1209 08:45:24.405331 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:24 crc kubenswrapper[4786]: I1209 08:45:24.405356 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:24 crc kubenswrapper[4786]: I1209 08:45:24.405368 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:24Z","lastTransitionTime":"2025-12-09T08:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:24 crc kubenswrapper[4786]: I1209 08:45:24.510355 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:24 crc kubenswrapper[4786]: I1209 08:45:24.510408 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:24 crc kubenswrapper[4786]: I1209 08:45:24.510435 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:24 crc kubenswrapper[4786]: I1209 08:45:24.510460 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:24 crc kubenswrapper[4786]: I1209 08:45:24.510476 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:24Z","lastTransitionTime":"2025-12-09T08:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:24 crc kubenswrapper[4786]: I1209 08:45:24.614265 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:24 crc kubenswrapper[4786]: I1209 08:45:24.614320 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:24 crc kubenswrapper[4786]: I1209 08:45:24.614336 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:24 crc kubenswrapper[4786]: I1209 08:45:24.614364 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:24 crc kubenswrapper[4786]: I1209 08:45:24.614379 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:24Z","lastTransitionTime":"2025-12-09T08:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:24 crc kubenswrapper[4786]: I1209 08:45:24.717072 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:24 crc kubenswrapper[4786]: I1209 08:45:24.717124 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:24 crc kubenswrapper[4786]: I1209 08:45:24.717137 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:24 crc kubenswrapper[4786]: I1209 08:45:24.717158 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:24 crc kubenswrapper[4786]: I1209 08:45:24.717174 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:24Z","lastTransitionTime":"2025-12-09T08:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:24 crc kubenswrapper[4786]: I1209 08:45:24.820156 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:24 crc kubenswrapper[4786]: I1209 08:45:24.820209 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:24 crc kubenswrapper[4786]: I1209 08:45:24.820218 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:24 crc kubenswrapper[4786]: I1209 08:45:24.820238 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:24 crc kubenswrapper[4786]: I1209 08:45:24.820249 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:24Z","lastTransitionTime":"2025-12-09T08:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:24 crc kubenswrapper[4786]: I1209 08:45:24.923176 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:24 crc kubenswrapper[4786]: I1209 08:45:24.923220 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:24 crc kubenswrapper[4786]: I1209 08:45:24.923232 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:24 crc kubenswrapper[4786]: I1209 08:45:24.923252 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:24 crc kubenswrapper[4786]: I1209 08:45:24.923267 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:24Z","lastTransitionTime":"2025-12-09T08:45:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.026922 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.026989 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.027006 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.027032 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.027047 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:25Z","lastTransitionTime":"2025-12-09T08:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.130674 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.130754 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.130774 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.130805 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.130822 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:25Z","lastTransitionTime":"2025-12-09T08:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.187127 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.187142 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.187174 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-v58s4" Dec 09 08:45:25 crc kubenswrapper[4786]: E1209 08:45:25.188025 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 08:45:25 crc kubenswrapper[4786]: E1209 08:45:25.188178 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 08:45:25 crc kubenswrapper[4786]: E1209 08:45:25.188316 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v58s4" podUID="e6f68306-ac39-4d61-8c27-12d69cc49a4f" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.212032 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd790c5e-461b-4682-88a8-e915bb3558fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56160ed128040dde583c3142616d9af9180427b9e32520d6330fd166e9c4207b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath
\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57fe092832b333e8fa6389429ad3fd571c3e0162f619fdd69efcb0b84e88bac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b3741b79ca6fb3fc12acaa290b9d7accb765f36a88f0b8fb6e6362c6596a58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f6f4abb61df9828cb9bec57c2f2fc4709a6cf12afc72467af5466723a4335f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T08:44:14Z\\\",\\\"message\\\":\\\"file observer\\\\nW1209 08:44:13.973493 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 08:44:13.973680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 08:44:13.974644 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-881789052/tls.crt::/tmp/serving-cert-881789052/tls.key\\\\\\\"\\\\nI1209 08:44:14.287293 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 08:44:14.289941 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 08:44:14.289961 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 08:44:14.289980 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 08:44:14.289985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 08:44:14.295151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 08:44:14.295345 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1209 08:44:14.295166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1209 08:44:14.295383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 08:44:14.295466 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 08:44:14.295492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 08:44:14.295497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 08:44:14.295501 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1209 08:44:14.297957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a164888526a3ab84fbd409540bc54403fadd586a0583e323d5ca1b10c81cbfc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:25Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.234384 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.234450 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.234460 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.234478 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.234487 4786 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:25Z","lastTransitionTime":"2025-12-09T08:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.236772 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"44efb750-799f-44e4-831e-641b81c2f427\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9e51326999eccaabe872d93f35025f76fe1b7fa116ee0e903b008aabc3f1c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99f4b7e7d6c283d8d774a43140f27cf0418c5922dd1a11fe8d090d4d3de10661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://868a90cf80a3cef69e48a0e930c1fb742df308553d072e9ca12fec310bdce1aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1848f8b358c964159350bf72a27a84316dbc4051b5164004527770877f4e280c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1848f8b358c964159350bf72a27a84316dbc4051b5164004527770877f4e280c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:25Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.264980 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e5ed48f-0d6b-40f5-b3f7-03a423521b9a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae9f3bf4166653ae69b3f01c8cb49b5a8eab5f682cb50f4e8c62973615693822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49c5c23fc6ac0350de2fc1e2cdd01a326ff50bf4bda4a326e99abf9415ff82c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://941d38be9a1314a3460bc00d016f09a53b7634ce19d321301b9bbc8943d94b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74b17422905bc82f40afb7f46066985ffa94d33cba16d2fd3328bd6aa0df365b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29040d6b65393bf5f762155aba99cda093a423961bd73adb30abccbcbd2fd105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663914cd797e1431ab14d696de18d8098d1ac35f9162f36dc5e6a743ba949a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663914cd797e1431ab14d696de18d8098d1ac35f9162f36dc5e6a743ba949a44\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ac2f54090b4b0620217d2c5b73a031a6901b4df7ed6291bf82b3b35c82078f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28ac2f54090b4b0620217d2c5b73a031a6901b4df7ed6291bf82b3b35c82078f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://42191f7c1da1bf0b03474758a4875d12c6db6aa54fb455fcb92e00957d1923e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42191f7c1da1bf0b03474758a4875d12c6db6aa54fb455fcb92e00957d1923e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:25Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.281900 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mq8pp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a680f02-0367-4814-9c73-aa5959bf952f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c30204fb4323c7780efec70dabc9074bcda3d7a20c81af161920f5c74075b4b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1
b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98fg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mq8pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:25Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.312713 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://550395f05c51e2fe4115f3e461577ebaab689ef68502025a76ca1bbeef9d6c2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dcc312d8c35d19e4c5637ba885eb19cce98bd3a39564a6370d26e487c72a6785\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T08:44:51Z\\\",\\\"message\\\":\\\"rver-operator-serving-cert service.alpha.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168 service.beta.openshift.io/serving-cert-signed-by:openshift-service-serving-signer@1740288168] [{config.openshift.io/v1 ClusterVersion version 9101b518-476b-4eea-8fa6-69b0534e5caa 0xc006d6f1fb \\\\u003cnil\\\\u003e}] [] 
[]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{app: kube-apiserver-operator,},ClusterIP:10.217.5.109,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.5.109],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1209 08:44:51.101360 6732 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handle\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://550395f05c51e2fe4115f3e461577ebaab689ef68502025a76ca1bbeef9d6c2e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T08:45:19Z\\\",\\\"message\\\":\\\"t/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI1209 08:45:18.767580 7145 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 08:45:18.768547 7145 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 08:45:18.768424 7145 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 08:45:18.771070 7145 
shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1209 08:45:18.771097 7145 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1209 08:45:18.772648 7145 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1209 08:45:18.772755 7145 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 08:45:18.772823 7145 factory.go:656] Stopping watch factory\\\\nI1209 08:45:18.772852 7145 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 08:45:18.772836 7145 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1209 08:45:18.772908 7145 ovnkube.go:599] Stopped ovnkube\\\\nI1209 08:45:18.772990 7145 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1209 08:45:18.773192 7145 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:45:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"
mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7sr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:25Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.327194 4786 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwjmr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"373c7fdc-f660-4323-a503-3e7a0dedb865\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6af8039885b93a07e1e521a89a1d240f914eb77cba9bb6db24d7ecdb1ba3954d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnjzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de62d60183229189c692a0ff01db0e36332ecde2d2711846121c899befbbde08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnjzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwjmr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:25Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.337563 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.337628 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 
08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.337646 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.337672 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.337689 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:25Z","lastTransitionTime":"2025-12-09T08:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.342806 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:25Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.359160 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50593d0d53a0888b66905e18d3ef9ae44e67a9d290aad760710060fb3ed14393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4ad85cbd79a4967d59f069ebadb27a1cd79229d287c2f1c9955396c71a1506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:25Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.374537 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-27hfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0a865e2-8504-473d-a23f-fc682d053a9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93a181a8bffa8c0699688d35605838622cbb0c39dc19d52a163484d5fddc2406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e7465beaa6a3df0b0ae7338d3035b58b92078b8d0b9646f5b59eccb66b4a505\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T08:45:07Z\\\",\\\"message\\\":\\\"2025-12-09T08:44:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_826ae26f-395b-43bf-ae1a-4535a65bdfb4\\\\n2025-12-09T08:44:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_826ae26f-395b-43bf-ae1a-4535a65bdfb4 to /host/opt/cni/bin/\\\\n2025-12-09T08:44:22Z [verbose] multus-daemon started\\\\n2025-12-09T08:44:22Z [verbose] 
Readiness Indicator file check\\\\n2025-12-09T08:45:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:45:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-27hfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:25Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.388416 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v58s4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f68306-ac39-4d61-8c27-12d69cc49a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhg5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhg5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v58s4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:25Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.407100 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d3d4420-fb67-443a-8b97-195c4cf223e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7aff8b9b062294cc403e12f60e4e694e626aa9780ae652380afc5d5ebeb25e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ce8749c52af3d1b2b156b9adce27a80b026abdb2dc22308199f1e822ec9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77efb0cd063e6d566f75078ce7a1f2cfd31508d5024a101ebcb3aa3c609aaae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac36b7f7721d7c5cdcb9e2254974dab062cca2723d4e723582061ed2696e04c\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:25Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.420830 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"550af6de-acd3-45b7-be48-63afa30356f4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b1b4fc4a2ac23c92b18ce6bd588fe1c0e5e7fead1592aa7795d2f2ad2071507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0478a8fa00a16c9f91091771b9e219317b593723cf90ae2f8c5f7335bd88c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a0478a8fa00a16c9f91091771b9e219317b593723cf90ae2f8c5f7335bd88c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:25Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.435156 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:25Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.444547 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.444603 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:25 crc 
kubenswrapper[4786]: I1209 08:45:25.444615 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.444635 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.444648 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:25Z","lastTransitionTime":"2025-12-09T08:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.452131 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da17081a769df0b66be89013cce4c70d442cfd51b32fdb623bf94052cbcad5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-09T08:45:25Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.468874 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:25Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.485693 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c502d4-7f9e-4d39-a197-fa70dc4a56d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2d2cb4aa779f69cd01c7af8dd377a93fa75bc95ded7acd32a891de283722a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de20e796c231cbc6d525978434351e46d932f7d9
9236a97c078b8147ce5dabd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86k5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:25Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.503183 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbwb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://379f087d607d19a0cb71481b2a7236d2d4769898f612849793c7d0d0f8709e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:21Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbb1
ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbb1ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ec37a618991f1e7e06c04c95db8c40ac18b451bcb08ba568bad540481a740c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ec37a618991f1e7e06c04c95db8c40ac18b451bcb08ba568bad540481a740c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c10eb59172a342201e5c0fae9b836d296608057f6613791cab920df8aaa5850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c10eb59172a342201e5c0fae9b836d296608057f6613791cab920df8aaa5850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbwb7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:25Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.520025 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4192277cec4bacdf5b03306055ddad9476d291a086f54f61e8e84690c81bb459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\
\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:25Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.535323 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-prw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f37b1b5c-1cdc-4a08-9ea3-03dad00b5797\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d772471b770813d3613512f5748b5bbd05c06733053b007756c12b0bc41b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc
086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt6g4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-prw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:25Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.548622 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.548689 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.548703 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.548728 4786 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeNotReady" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.548742 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:25Z","lastTransitionTime":"2025-12-09T08:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.651310 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.651358 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.651372 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.651391 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.651405 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:25Z","lastTransitionTime":"2025-12-09T08:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.754467 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.754536 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.754547 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.754568 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.754582 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:25Z","lastTransitionTime":"2025-12-09T08:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.858401 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.858512 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.858534 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.858563 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.858578 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:25Z","lastTransitionTime":"2025-12-09T08:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.961983 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.962064 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.962091 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.962130 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:25 crc kubenswrapper[4786]: I1209 08:45:25.962156 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:25Z","lastTransitionTime":"2025-12-09T08:45:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:26 crc kubenswrapper[4786]: I1209 08:45:26.065471 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:26 crc kubenswrapper[4786]: I1209 08:45:26.065511 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:26 crc kubenswrapper[4786]: I1209 08:45:26.065521 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:26 crc kubenswrapper[4786]: I1209 08:45:26.065537 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:26 crc kubenswrapper[4786]: I1209 08:45:26.065546 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:26Z","lastTransitionTime":"2025-12-09T08:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:26 crc kubenswrapper[4786]: I1209 08:45:26.169301 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:26 crc kubenswrapper[4786]: I1209 08:45:26.169392 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:26 crc kubenswrapper[4786]: I1209 08:45:26.169416 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:26 crc kubenswrapper[4786]: I1209 08:45:26.169485 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:26 crc kubenswrapper[4786]: I1209 08:45:26.169508 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:26Z","lastTransitionTime":"2025-12-09T08:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:26 crc kubenswrapper[4786]: I1209 08:45:26.187849 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:45:26 crc kubenswrapper[4786]: E1209 08:45:26.188191 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 08:45:26 crc kubenswrapper[4786]: I1209 08:45:26.272657 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:26 crc kubenswrapper[4786]: I1209 08:45:26.272739 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:26 crc kubenswrapper[4786]: I1209 08:45:26.272763 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:26 crc kubenswrapper[4786]: I1209 08:45:26.272803 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:26 crc kubenswrapper[4786]: I1209 08:45:26.272826 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:26Z","lastTransitionTime":"2025-12-09T08:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:26 crc kubenswrapper[4786]: I1209 08:45:26.376404 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:26 crc kubenswrapper[4786]: I1209 08:45:26.376477 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:26 crc kubenswrapper[4786]: I1209 08:45:26.376489 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:26 crc kubenswrapper[4786]: I1209 08:45:26.376506 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:26 crc kubenswrapper[4786]: I1209 08:45:26.376518 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:26Z","lastTransitionTime":"2025-12-09T08:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:26 crc kubenswrapper[4786]: I1209 08:45:26.479993 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:26 crc kubenswrapper[4786]: I1209 08:45:26.480055 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:26 crc kubenswrapper[4786]: I1209 08:45:26.480069 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:26 crc kubenswrapper[4786]: I1209 08:45:26.480092 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:26 crc kubenswrapper[4786]: I1209 08:45:26.480113 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:26Z","lastTransitionTime":"2025-12-09T08:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:26 crc kubenswrapper[4786]: I1209 08:45:26.583638 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:26 crc kubenswrapper[4786]: I1209 08:45:26.583710 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:26 crc kubenswrapper[4786]: I1209 08:45:26.583731 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:26 crc kubenswrapper[4786]: I1209 08:45:26.583758 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:26 crc kubenswrapper[4786]: I1209 08:45:26.583783 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:26Z","lastTransitionTime":"2025-12-09T08:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:26 crc kubenswrapper[4786]: I1209 08:45:26.688078 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:26 crc kubenswrapper[4786]: I1209 08:45:26.688147 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:26 crc kubenswrapper[4786]: I1209 08:45:26.688177 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:26 crc kubenswrapper[4786]: I1209 08:45:26.688209 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:26 crc kubenswrapper[4786]: I1209 08:45:26.688227 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:26Z","lastTransitionTime":"2025-12-09T08:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:26 crc kubenswrapper[4786]: I1209 08:45:26.791744 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:26 crc kubenswrapper[4786]: I1209 08:45:26.792066 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:26 crc kubenswrapper[4786]: I1209 08:45:26.792161 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:26 crc kubenswrapper[4786]: I1209 08:45:26.792254 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:26 crc kubenswrapper[4786]: I1209 08:45:26.792333 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:26Z","lastTransitionTime":"2025-12-09T08:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:26 crc kubenswrapper[4786]: I1209 08:45:26.895570 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:26 crc kubenswrapper[4786]: I1209 08:45:26.895946 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:26 crc kubenswrapper[4786]: I1209 08:45:26.896104 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:26 crc kubenswrapper[4786]: I1209 08:45:26.896275 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:26 crc kubenswrapper[4786]: I1209 08:45:26.896542 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:26Z","lastTransitionTime":"2025-12-09T08:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:26 crc kubenswrapper[4786]: I1209 08:45:26.999585 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:26 crc kubenswrapper[4786]: I1209 08:45:26.999656 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:26 crc kubenswrapper[4786]: I1209 08:45:26.999674 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:27 crc kubenswrapper[4786]: I1209 08:45:26.999698 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:27 crc kubenswrapper[4786]: I1209 08:45:26.999716 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:26Z","lastTransitionTime":"2025-12-09T08:45:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:27 crc kubenswrapper[4786]: I1209 08:45:27.102020 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:27 crc kubenswrapper[4786]: I1209 08:45:27.102089 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:27 crc kubenswrapper[4786]: I1209 08:45:27.102106 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:27 crc kubenswrapper[4786]: I1209 08:45:27.102130 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:27 crc kubenswrapper[4786]: I1209 08:45:27.102148 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:27Z","lastTransitionTime":"2025-12-09T08:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:27 crc kubenswrapper[4786]: I1209 08:45:27.187640 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 08:45:27 crc kubenswrapper[4786]: I1209 08:45:27.187647 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 08:45:27 crc kubenswrapper[4786]: E1209 08:45:27.187867 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 08:45:27 crc kubenswrapper[4786]: E1209 08:45:27.187984 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 08:45:27 crc kubenswrapper[4786]: I1209 08:45:27.188215 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v58s4" Dec 09 08:45:27 crc kubenswrapper[4786]: E1209 08:45:27.188487 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v58s4" podUID="e6f68306-ac39-4d61-8c27-12d69cc49a4f" Dec 09 08:45:27 crc kubenswrapper[4786]: I1209 08:45:27.204306 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:27 crc kubenswrapper[4786]: I1209 08:45:27.204358 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:27 crc kubenswrapper[4786]: I1209 08:45:27.204375 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:27 crc kubenswrapper[4786]: I1209 08:45:27.204396 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:27 crc kubenswrapper[4786]: I1209 08:45:27.204414 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:27Z","lastTransitionTime":"2025-12-09T08:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:27 crc kubenswrapper[4786]: I1209 08:45:27.308399 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:27 crc kubenswrapper[4786]: I1209 08:45:27.308493 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:27 crc kubenswrapper[4786]: I1209 08:45:27.308516 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:27 crc kubenswrapper[4786]: I1209 08:45:27.308544 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:27 crc kubenswrapper[4786]: I1209 08:45:27.308563 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:27Z","lastTransitionTime":"2025-12-09T08:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:27 crc kubenswrapper[4786]: I1209 08:45:27.412210 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:27 crc kubenswrapper[4786]: I1209 08:45:27.412278 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:27 crc kubenswrapper[4786]: I1209 08:45:27.412301 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:27 crc kubenswrapper[4786]: I1209 08:45:27.412335 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:27 crc kubenswrapper[4786]: I1209 08:45:27.412359 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:27Z","lastTransitionTime":"2025-12-09T08:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:27 crc kubenswrapper[4786]: I1209 08:45:27.515481 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:27 crc kubenswrapper[4786]: I1209 08:45:27.515892 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:27 crc kubenswrapper[4786]: I1209 08:45:27.516009 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:27 crc kubenswrapper[4786]: I1209 08:45:27.516118 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:27 crc kubenswrapper[4786]: I1209 08:45:27.516221 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:27Z","lastTransitionTime":"2025-12-09T08:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:27 crc kubenswrapper[4786]: I1209 08:45:27.619308 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:27 crc kubenswrapper[4786]: I1209 08:45:27.619715 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:27 crc kubenswrapper[4786]: I1209 08:45:27.619818 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:27 crc kubenswrapper[4786]: I1209 08:45:27.619929 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:27 crc kubenswrapper[4786]: I1209 08:45:27.620034 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:27Z","lastTransitionTime":"2025-12-09T08:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:27 crc kubenswrapper[4786]: I1209 08:45:27.723307 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:27 crc kubenswrapper[4786]: I1209 08:45:27.723451 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:27 crc kubenswrapper[4786]: I1209 08:45:27.723469 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:27 crc kubenswrapper[4786]: I1209 08:45:27.723504 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:27 crc kubenswrapper[4786]: I1209 08:45:27.723528 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:27Z","lastTransitionTime":"2025-12-09T08:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:27 crc kubenswrapper[4786]: I1209 08:45:27.827551 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:27 crc kubenswrapper[4786]: I1209 08:45:27.827589 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:27 crc kubenswrapper[4786]: I1209 08:45:27.827599 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:27 crc kubenswrapper[4786]: I1209 08:45:27.827614 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:27 crc kubenswrapper[4786]: I1209 08:45:27.827624 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:27Z","lastTransitionTime":"2025-12-09T08:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:27 crc kubenswrapper[4786]: I1209 08:45:27.929726 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:27 crc kubenswrapper[4786]: I1209 08:45:27.929773 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:27 crc kubenswrapper[4786]: I1209 08:45:27.929787 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:27 crc kubenswrapper[4786]: I1209 08:45:27.929807 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:27 crc kubenswrapper[4786]: I1209 08:45:27.929820 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:27Z","lastTransitionTime":"2025-12-09T08:45:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:28 crc kubenswrapper[4786]: I1209 08:45:28.032835 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:28 crc kubenswrapper[4786]: I1209 08:45:28.032882 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:28 crc kubenswrapper[4786]: I1209 08:45:28.032897 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:28 crc kubenswrapper[4786]: I1209 08:45:28.032918 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:28 crc kubenswrapper[4786]: I1209 08:45:28.032933 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:28Z","lastTransitionTime":"2025-12-09T08:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:28 crc kubenswrapper[4786]: I1209 08:45:28.135906 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:28 crc kubenswrapper[4786]: I1209 08:45:28.136394 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:28 crc kubenswrapper[4786]: I1209 08:45:28.136570 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:28 crc kubenswrapper[4786]: I1209 08:45:28.136717 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:28 crc kubenswrapper[4786]: I1209 08:45:28.136844 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:28Z","lastTransitionTime":"2025-12-09T08:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:28 crc kubenswrapper[4786]: I1209 08:45:28.188158 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:45:28 crc kubenswrapper[4786]: E1209 08:45:28.188752 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 08:45:28 crc kubenswrapper[4786]: I1209 08:45:28.240607 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:28 crc kubenswrapper[4786]: I1209 08:45:28.240651 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:28 crc kubenswrapper[4786]: I1209 08:45:28.240661 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:28 crc kubenswrapper[4786]: I1209 08:45:28.240677 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:28 crc kubenswrapper[4786]: I1209 08:45:28.240693 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:28Z","lastTransitionTime":"2025-12-09T08:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:28 crc kubenswrapper[4786]: I1209 08:45:28.344225 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:28 crc kubenswrapper[4786]: I1209 08:45:28.344290 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:28 crc kubenswrapper[4786]: I1209 08:45:28.344306 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:28 crc kubenswrapper[4786]: I1209 08:45:28.344329 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:28 crc kubenswrapper[4786]: I1209 08:45:28.344344 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:28Z","lastTransitionTime":"2025-12-09T08:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:28 crc kubenswrapper[4786]: I1209 08:45:28.447062 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:28 crc kubenswrapper[4786]: I1209 08:45:28.447125 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:28 crc kubenswrapper[4786]: I1209 08:45:28.447144 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:28 crc kubenswrapper[4786]: I1209 08:45:28.447170 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:28 crc kubenswrapper[4786]: I1209 08:45:28.447189 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:28Z","lastTransitionTime":"2025-12-09T08:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:28 crc kubenswrapper[4786]: I1209 08:45:28.550773 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:28 crc kubenswrapper[4786]: I1209 08:45:28.550824 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:28 crc kubenswrapper[4786]: I1209 08:45:28.550846 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:28 crc kubenswrapper[4786]: I1209 08:45:28.550863 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:28 crc kubenswrapper[4786]: I1209 08:45:28.550873 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:28Z","lastTransitionTime":"2025-12-09T08:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:28 crc kubenswrapper[4786]: I1209 08:45:28.654212 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:28 crc kubenswrapper[4786]: I1209 08:45:28.654261 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:28 crc kubenswrapper[4786]: I1209 08:45:28.654271 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:28 crc kubenswrapper[4786]: I1209 08:45:28.654286 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:28 crc kubenswrapper[4786]: I1209 08:45:28.654296 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:28Z","lastTransitionTime":"2025-12-09T08:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:28 crc kubenswrapper[4786]: I1209 08:45:28.757882 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:28 crc kubenswrapper[4786]: I1209 08:45:28.757949 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:28 crc kubenswrapper[4786]: I1209 08:45:28.757959 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:28 crc kubenswrapper[4786]: I1209 08:45:28.757979 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:28 crc kubenswrapper[4786]: I1209 08:45:28.757992 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:28Z","lastTransitionTime":"2025-12-09T08:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:28 crc kubenswrapper[4786]: I1209 08:45:28.861156 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:28 crc kubenswrapper[4786]: I1209 08:45:28.861217 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:28 crc kubenswrapper[4786]: I1209 08:45:28.861232 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:28 crc kubenswrapper[4786]: I1209 08:45:28.861254 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:28 crc kubenswrapper[4786]: I1209 08:45:28.861267 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:28Z","lastTransitionTime":"2025-12-09T08:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:28 crc kubenswrapper[4786]: I1209 08:45:28.963719 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:28 crc kubenswrapper[4786]: I1209 08:45:28.963792 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:28 crc kubenswrapper[4786]: I1209 08:45:28.963810 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:28 crc kubenswrapper[4786]: I1209 08:45:28.963836 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:28 crc kubenswrapper[4786]: I1209 08:45:28.963854 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:28Z","lastTransitionTime":"2025-12-09T08:45:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:29 crc kubenswrapper[4786]: I1209 08:45:29.070519 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:29 crc kubenswrapper[4786]: I1209 08:45:29.070569 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:29 crc kubenswrapper[4786]: I1209 08:45:29.070579 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:29 crc kubenswrapper[4786]: I1209 08:45:29.070613 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:29 crc kubenswrapper[4786]: I1209 08:45:29.070625 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:29Z","lastTransitionTime":"2025-12-09T08:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:29 crc kubenswrapper[4786]: I1209 08:45:29.173898 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:29 crc kubenswrapper[4786]: I1209 08:45:29.173954 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:29 crc kubenswrapper[4786]: I1209 08:45:29.173966 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:29 crc kubenswrapper[4786]: I1209 08:45:29.173984 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:29 crc kubenswrapper[4786]: I1209 08:45:29.173998 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:29Z","lastTransitionTime":"2025-12-09T08:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:29 crc kubenswrapper[4786]: I1209 08:45:29.187583 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 08:45:29 crc kubenswrapper[4786]: I1209 08:45:29.187632 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v58s4" Dec 09 08:45:29 crc kubenswrapper[4786]: I1209 08:45:29.187632 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 08:45:29 crc kubenswrapper[4786]: E1209 08:45:29.187758 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 08:45:29 crc kubenswrapper[4786]: E1209 08:45:29.187959 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 08:45:29 crc kubenswrapper[4786]: E1209 08:45:29.188022 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v58s4" podUID="e6f68306-ac39-4d61-8c27-12d69cc49a4f" Dec 09 08:45:29 crc kubenswrapper[4786]: I1209 08:45:29.277286 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:29 crc kubenswrapper[4786]: I1209 08:45:29.277356 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:29 crc kubenswrapper[4786]: I1209 08:45:29.277382 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:29 crc kubenswrapper[4786]: I1209 08:45:29.277413 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:29 crc kubenswrapper[4786]: I1209 08:45:29.277471 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:29Z","lastTransitionTime":"2025-12-09T08:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:29 crc kubenswrapper[4786]: I1209 08:45:29.380698 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:29 crc kubenswrapper[4786]: I1209 08:45:29.380779 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:29 crc kubenswrapper[4786]: I1209 08:45:29.380798 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:29 crc kubenswrapper[4786]: I1209 08:45:29.380821 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:29 crc kubenswrapper[4786]: I1209 08:45:29.380839 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:29Z","lastTransitionTime":"2025-12-09T08:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:29 crc kubenswrapper[4786]: I1209 08:45:29.484130 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:29 crc kubenswrapper[4786]: I1209 08:45:29.484206 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:29 crc kubenswrapper[4786]: I1209 08:45:29.484223 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:29 crc kubenswrapper[4786]: I1209 08:45:29.484249 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:29 crc kubenswrapper[4786]: I1209 08:45:29.484266 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:29Z","lastTransitionTime":"2025-12-09T08:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:29 crc kubenswrapper[4786]: I1209 08:45:29.587219 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:29 crc kubenswrapper[4786]: I1209 08:45:29.587283 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:29 crc kubenswrapper[4786]: I1209 08:45:29.587297 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:29 crc kubenswrapper[4786]: I1209 08:45:29.587313 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:29 crc kubenswrapper[4786]: I1209 08:45:29.587324 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:29Z","lastTransitionTime":"2025-12-09T08:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:29 crc kubenswrapper[4786]: I1209 08:45:29.690249 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:29 crc kubenswrapper[4786]: I1209 08:45:29.690314 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:29 crc kubenswrapper[4786]: I1209 08:45:29.690327 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:29 crc kubenswrapper[4786]: I1209 08:45:29.690346 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:29 crc kubenswrapper[4786]: I1209 08:45:29.690391 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:29Z","lastTransitionTime":"2025-12-09T08:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:29 crc kubenswrapper[4786]: I1209 08:45:29.792878 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:29 crc kubenswrapper[4786]: I1209 08:45:29.792936 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:29 crc kubenswrapper[4786]: I1209 08:45:29.792950 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:29 crc kubenswrapper[4786]: I1209 08:45:29.792968 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:29 crc kubenswrapper[4786]: I1209 08:45:29.792980 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:29Z","lastTransitionTime":"2025-12-09T08:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:29 crc kubenswrapper[4786]: I1209 08:45:29.896057 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:29 crc kubenswrapper[4786]: I1209 08:45:29.896295 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:29 crc kubenswrapper[4786]: I1209 08:45:29.896303 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:29 crc kubenswrapper[4786]: I1209 08:45:29.896316 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:29 crc kubenswrapper[4786]: I1209 08:45:29.896326 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:29Z","lastTransitionTime":"2025-12-09T08:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:29 crc kubenswrapper[4786]: I1209 08:45:29.998554 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:29 crc kubenswrapper[4786]: I1209 08:45:29.998600 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:29 crc kubenswrapper[4786]: I1209 08:45:29.998615 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:29 crc kubenswrapper[4786]: I1209 08:45:29.998636 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:29 crc kubenswrapper[4786]: I1209 08:45:29.998654 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:29Z","lastTransitionTime":"2025-12-09T08:45:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.100774 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.100838 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.100851 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.100872 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.100884 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:30Z","lastTransitionTime":"2025-12-09T08:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.188233 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:45:30 crc kubenswrapper[4786]: E1209 08:45:30.188803 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.203291 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.203337 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.203349 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.203369 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.203381 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:30Z","lastTransitionTime":"2025-12-09T08:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.291189 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.291229 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.291243 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.291264 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.291280 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:30Z","lastTransitionTime":"2025-12-09T08:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:30 crc kubenswrapper[4786]: E1209 08:45:30.305889 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"20132b6b-eea8-47f7-95fa-f658d05fe362\\\",\\\"systemUUID\\\":\\\"d02a19a2-cd20-41fc-84e9-65362968df1f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:30Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.310653 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.310686 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.310699 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.310717 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.310728 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:30Z","lastTransitionTime":"2025-12-09T08:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:30 crc kubenswrapper[4786]: E1209 08:45:30.324419 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"20132b6b-eea8-47f7-95fa-f658d05fe362\\\",\\\"systemUUID\\\":\\\"d02a19a2-cd20-41fc-84e9-65362968df1f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:30Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.329552 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.329615 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.329627 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.329648 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.329667 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:30Z","lastTransitionTime":"2025-12-09T08:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:30 crc kubenswrapper[4786]: E1209 08:45:30.344905 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"20132b6b-eea8-47f7-95fa-f658d05fe362\\\",\\\"systemUUID\\\":\\\"d02a19a2-cd20-41fc-84e9-65362968df1f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:30Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.349731 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.349776 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.349788 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.349807 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.349820 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:30Z","lastTransitionTime":"2025-12-09T08:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:30 crc kubenswrapper[4786]: E1209 08:45:30.369698 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"20132b6b-eea8-47f7-95fa-f658d05fe362\\\",\\\"systemUUID\\\":\\\"d02a19a2-cd20-41fc-84e9-65362968df1f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:30Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.374483 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.374540 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.374557 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.374579 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.374596 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:30Z","lastTransitionTime":"2025-12-09T08:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:30 crc kubenswrapper[4786]: E1209 08:45:30.392265 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:30Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"20132b6b-eea8-47f7-95fa-f658d05fe362\\\",\\\"systemUUID\\\":\\\"d02a19a2-cd20-41fc-84e9-65362968df1f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:30Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:30 crc kubenswrapper[4786]: E1209 08:45:30.392472 4786 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.394705 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.394761 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.394778 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.394803 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.394823 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:30Z","lastTransitionTime":"2025-12-09T08:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.497538 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.497613 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.497668 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.497695 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.497712 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:30Z","lastTransitionTime":"2025-12-09T08:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.602084 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.602180 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.602205 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.602239 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.602273 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:30Z","lastTransitionTime":"2025-12-09T08:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.706324 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.706382 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.706401 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.706493 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.706514 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:30Z","lastTransitionTime":"2025-12-09T08:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.809728 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.809785 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.809797 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.809814 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.809829 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:30Z","lastTransitionTime":"2025-12-09T08:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.913268 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.913364 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.913391 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.913457 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:30 crc kubenswrapper[4786]: I1209 08:45:30.913484 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:30Z","lastTransitionTime":"2025-12-09T08:45:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:31 crc kubenswrapper[4786]: I1209 08:45:31.017205 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:31 crc kubenswrapper[4786]: I1209 08:45:31.017279 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:31 crc kubenswrapper[4786]: I1209 08:45:31.017302 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:31 crc kubenswrapper[4786]: I1209 08:45:31.017333 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:31 crc kubenswrapper[4786]: I1209 08:45:31.017354 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:31Z","lastTransitionTime":"2025-12-09T08:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:31 crc kubenswrapper[4786]: I1209 08:45:31.119744 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:31 crc kubenswrapper[4786]: I1209 08:45:31.119810 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:31 crc kubenswrapper[4786]: I1209 08:45:31.119828 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:31 crc kubenswrapper[4786]: I1209 08:45:31.119854 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:31 crc kubenswrapper[4786]: I1209 08:45:31.119871 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:31Z","lastTransitionTime":"2025-12-09T08:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:31 crc kubenswrapper[4786]: I1209 08:45:31.188051 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 08:45:31 crc kubenswrapper[4786]: I1209 08:45:31.188201 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v58s4" Dec 09 08:45:31 crc kubenswrapper[4786]: I1209 08:45:31.188227 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 08:45:31 crc kubenswrapper[4786]: E1209 08:45:31.188444 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 08:45:31 crc kubenswrapper[4786]: E1209 08:45:31.188589 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v58s4" podUID="e6f68306-ac39-4d61-8c27-12d69cc49a4f" Dec 09 08:45:31 crc kubenswrapper[4786]: E1209 08:45:31.188792 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 08:45:31 crc kubenswrapper[4786]: I1209 08:45:31.223074 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:31 crc kubenswrapper[4786]: I1209 08:45:31.223170 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:31 crc kubenswrapper[4786]: I1209 08:45:31.223194 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:31 crc kubenswrapper[4786]: I1209 08:45:31.223222 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:31 crc kubenswrapper[4786]: I1209 08:45:31.223240 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:31Z","lastTransitionTime":"2025-12-09T08:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:31 crc kubenswrapper[4786]: I1209 08:45:31.326790 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:31 crc kubenswrapper[4786]: I1209 08:45:31.326838 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:31 crc kubenswrapper[4786]: I1209 08:45:31.326851 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:31 crc kubenswrapper[4786]: I1209 08:45:31.326868 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:31 crc kubenswrapper[4786]: I1209 08:45:31.326880 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:31Z","lastTransitionTime":"2025-12-09T08:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:31 crc kubenswrapper[4786]: I1209 08:45:31.430276 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:31 crc kubenswrapper[4786]: I1209 08:45:31.430347 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:31 crc kubenswrapper[4786]: I1209 08:45:31.430366 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:31 crc kubenswrapper[4786]: I1209 08:45:31.430394 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:31 crc kubenswrapper[4786]: I1209 08:45:31.430412 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:31Z","lastTransitionTime":"2025-12-09T08:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:31 crc kubenswrapper[4786]: I1209 08:45:31.534241 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:31 crc kubenswrapper[4786]: I1209 08:45:31.534318 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:31 crc kubenswrapper[4786]: I1209 08:45:31.534357 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:31 crc kubenswrapper[4786]: I1209 08:45:31.534389 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:31 crc kubenswrapper[4786]: I1209 08:45:31.534413 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:31Z","lastTransitionTime":"2025-12-09T08:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:31 crc kubenswrapper[4786]: I1209 08:45:31.639782 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:31 crc kubenswrapper[4786]: I1209 08:45:31.639842 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:31 crc kubenswrapper[4786]: I1209 08:45:31.639852 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:31 crc kubenswrapper[4786]: I1209 08:45:31.639874 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:31 crc kubenswrapper[4786]: I1209 08:45:31.639884 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:31Z","lastTransitionTime":"2025-12-09T08:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:31 crc kubenswrapper[4786]: I1209 08:45:31.742605 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:31 crc kubenswrapper[4786]: I1209 08:45:31.742993 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:31 crc kubenswrapper[4786]: I1209 08:45:31.743167 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:31 crc kubenswrapper[4786]: I1209 08:45:31.743334 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:31 crc kubenswrapper[4786]: I1209 08:45:31.743531 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:31Z","lastTransitionTime":"2025-12-09T08:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:31 crc kubenswrapper[4786]: I1209 08:45:31.845928 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:31 crc kubenswrapper[4786]: I1209 08:45:31.845966 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:31 crc kubenswrapper[4786]: I1209 08:45:31.845975 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:31 crc kubenswrapper[4786]: I1209 08:45:31.845989 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:31 crc kubenswrapper[4786]: I1209 08:45:31.845999 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:31Z","lastTransitionTime":"2025-12-09T08:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:31 crc kubenswrapper[4786]: I1209 08:45:31.949206 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:31 crc kubenswrapper[4786]: I1209 08:45:31.949252 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:31 crc kubenswrapper[4786]: I1209 08:45:31.949264 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:31 crc kubenswrapper[4786]: I1209 08:45:31.949281 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:31 crc kubenswrapper[4786]: I1209 08:45:31.949295 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:31Z","lastTransitionTime":"2025-12-09T08:45:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:32 crc kubenswrapper[4786]: I1209 08:45:32.052771 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:32 crc kubenswrapper[4786]: I1209 08:45:32.052811 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:32 crc kubenswrapper[4786]: I1209 08:45:32.052820 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:32 crc kubenswrapper[4786]: I1209 08:45:32.052836 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:32 crc kubenswrapper[4786]: I1209 08:45:32.052846 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:32Z","lastTransitionTime":"2025-12-09T08:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:32 crc kubenswrapper[4786]: I1209 08:45:32.156117 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:32 crc kubenswrapper[4786]: I1209 08:45:32.156162 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:32 crc kubenswrapper[4786]: I1209 08:45:32.156172 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:32 crc kubenswrapper[4786]: I1209 08:45:32.156189 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:32 crc kubenswrapper[4786]: I1209 08:45:32.156199 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:32Z","lastTransitionTime":"2025-12-09T08:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:32 crc kubenswrapper[4786]: I1209 08:45:32.187079 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:45:32 crc kubenswrapper[4786]: E1209 08:45:32.187249 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 08:45:32 crc kubenswrapper[4786]: I1209 08:45:32.258643 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:32 crc kubenswrapper[4786]: I1209 08:45:32.258702 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:32 crc kubenswrapper[4786]: I1209 08:45:32.258715 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:32 crc kubenswrapper[4786]: I1209 08:45:32.258734 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:32 crc kubenswrapper[4786]: I1209 08:45:32.258746 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:32Z","lastTransitionTime":"2025-12-09T08:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:32 crc kubenswrapper[4786]: I1209 08:45:32.361779 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:32 crc kubenswrapper[4786]: I1209 08:45:32.361832 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:32 crc kubenswrapper[4786]: I1209 08:45:32.361844 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:32 crc kubenswrapper[4786]: I1209 08:45:32.361861 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:32 crc kubenswrapper[4786]: I1209 08:45:32.361874 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:32Z","lastTransitionTime":"2025-12-09T08:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:32 crc kubenswrapper[4786]: I1209 08:45:32.463903 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:32 crc kubenswrapper[4786]: I1209 08:45:32.463977 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:32 crc kubenswrapper[4786]: I1209 08:45:32.463993 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:32 crc kubenswrapper[4786]: I1209 08:45:32.464012 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:32 crc kubenswrapper[4786]: I1209 08:45:32.464049 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:32Z","lastTransitionTime":"2025-12-09T08:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:32 crc kubenswrapper[4786]: I1209 08:45:32.567246 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:32 crc kubenswrapper[4786]: I1209 08:45:32.567288 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:32 crc kubenswrapper[4786]: I1209 08:45:32.567297 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:32 crc kubenswrapper[4786]: I1209 08:45:32.567312 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:32 crc kubenswrapper[4786]: I1209 08:45:32.567321 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:32Z","lastTransitionTime":"2025-12-09T08:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:32 crc kubenswrapper[4786]: I1209 08:45:32.669676 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:32 crc kubenswrapper[4786]: I1209 08:45:32.669720 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:32 crc kubenswrapper[4786]: I1209 08:45:32.669735 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:32 crc kubenswrapper[4786]: I1209 08:45:32.669754 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:32 crc kubenswrapper[4786]: I1209 08:45:32.669768 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:32Z","lastTransitionTime":"2025-12-09T08:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:32 crc kubenswrapper[4786]: I1209 08:45:32.772857 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:32 crc kubenswrapper[4786]: I1209 08:45:32.772913 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:32 crc kubenswrapper[4786]: I1209 08:45:32.772926 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:32 crc kubenswrapper[4786]: I1209 08:45:32.772945 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:32 crc kubenswrapper[4786]: I1209 08:45:32.772956 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:32Z","lastTransitionTime":"2025-12-09T08:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:32 crc kubenswrapper[4786]: I1209 08:45:32.875707 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:32 crc kubenswrapper[4786]: I1209 08:45:32.875744 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:32 crc kubenswrapper[4786]: I1209 08:45:32.875753 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:32 crc kubenswrapper[4786]: I1209 08:45:32.875765 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:32 crc kubenswrapper[4786]: I1209 08:45:32.875774 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:32Z","lastTransitionTime":"2025-12-09T08:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:32 crc kubenswrapper[4786]: I1209 08:45:32.979330 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:32 crc kubenswrapper[4786]: I1209 08:45:32.979398 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:32 crc kubenswrapper[4786]: I1209 08:45:32.979420 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:32 crc kubenswrapper[4786]: I1209 08:45:32.979493 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:32 crc kubenswrapper[4786]: I1209 08:45:32.979518 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:32Z","lastTransitionTime":"2025-12-09T08:45:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.082851 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.082903 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.082914 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.082933 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.082946 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:33Z","lastTransitionTime":"2025-12-09T08:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.185776 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.185858 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.185881 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.185912 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.185939 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:33Z","lastTransitionTime":"2025-12-09T08:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.187576 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.187585 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v58s4" Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.187645 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 08:45:33 crc kubenswrapper[4786]: E1209 08:45:33.187830 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 08:45:33 crc kubenswrapper[4786]: E1209 08:45:33.188036 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v58s4" podUID="e6f68306-ac39-4d61-8c27-12d69cc49a4f" Dec 09 08:45:33 crc kubenswrapper[4786]: E1209 08:45:33.188808 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.190312 4786 scope.go:117] "RemoveContainer" containerID="550395f05c51e2fe4115f3e461577ebaab689ef68502025a76ca1bbeef9d6c2e" Dec 09 08:45:33 crc kubenswrapper[4786]: E1209 08:45:33.190660 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-7sr4q_openshift-ovn-kubernetes(c8ebe4be-af09-4f22-9dee-af5f7d34bccf)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" podUID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.214288 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:33Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.238977 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50593d0d53a0888b66905e18d3ef9ae44e67a9d290aad760710060fb3ed14393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4ad85cbd79a4967d59f069ebadb27a1cd79229d287c2f1c9955396c71a1506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:33Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.260332 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-27hfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0a865e2-8504-473d-a23f-fc682d053a9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93a181a8bffa8c0699688d35605838622cbb0c39dc19d52a163484d5fddc2406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e7465beaa6a3df0b0ae7338d3035b58b92078b8d0b9646f5b59eccb66b4a505\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T08:45:07Z\\\",\\\"message\\\":\\\"2025-12-09T08:44:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_826ae26f-395b-43bf-ae1a-4535a65bdfb4\\\\n2025-12-09T08:44:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_826ae26f-395b-43bf-ae1a-4535a65bdfb4 to /host/opt/cni/bin/\\\\n2025-12-09T08:44:22Z [verbose] multus-daemon started\\\\n2025-12-09T08:44:22Z [verbose] 
Readiness Indicator file check\\\\n2025-12-09T08:45:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:45:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-27hfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:33Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.278143 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v58s4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f68306-ac39-4d61-8c27-12d69cc49a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhg5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhg5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v58s4\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:33Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.292827 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c502d4-7f9e-4d39-a197-fa70dc4a56d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2d2cb4aa779f69cd01c7af8dd377a93fa75bc95ded7acd32a891de283722a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"sta
rtedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de20e796c231cbc6d525978434351e46d932f7d99236a97c078b8147ce5dabd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86k5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:33Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 
08:45:33.293622 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.293678 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.293706 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.293738 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.293761 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:33Z","lastTransitionTime":"2025-12-09T08:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.311728 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbwb7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://379f087d607d19a0cb71481b2a7236d2d4769898f612849793c7d0d0f8709e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:21Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbb1ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbb1ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ec37a618991f1e7e06c04c95db8c40ac18b451bcb08ba568bad540481a740c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ec37a618991f1e7e06c04c95db8c40ac18b451bcb08ba568bad540481a740c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c10eb59172a342201e5c0fae9b836d296608057f6613791cab920df8aaa5850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c10eb59172a342201e5c0fae9b836d296608057f6613791cab920df8aaa5850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbwb7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:33Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.327420 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d3d4420-fb67-443a-8b97-195c4cf223e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7aff8b9b062294cc403e12f60e4e694e626aa9780ae652380afc5d5ebeb25e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c02
6b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ce8749c52af3d1b2b156b9adce27a80b026abdb2dc22308199f1e822ec9867\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77efb0cd063e6d566f75078ce7a1f2cfd31508d5024a101ebcb3aa3c609aaae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\
"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac36b7f7721d7c5cdcb9e2254974dab062cca2723d4e723582061ed2696e04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:33Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.342330 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"550af6de-acd3-45b7-be48-63afa30356f4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b1b4fc4a2ac23c92b18ce6bd588fe1c0e5e7fead1592aa7795d2f2ad2071507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0478a8fa00a16c9f91091771b9e219317b593723cf90ae2f8c5f7335bd88c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a0478a8fa00a16c9f91091771b9e219317b593723cf90ae2f8c5f7335bd88c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:33Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.364661 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:33Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.382542 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da17081a769df0b66be89013cce4c70d442cfd51b32fdb623bf94052cbcad5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-09T08:45:33Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.397299 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.397395 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.397444 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.397468 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.397483 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:33Z","lastTransitionTime":"2025-12-09T08:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.398848 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:33Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.416222 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4192277cec4bacdf5b03306055ddad9476d291a086f54f61e8e84690c81bb459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:33Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.428220 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-prw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f37b1b5c-1cdc-4a08-9ea3-03dad00b5797\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d772471b770813d3613512f5748b5bbd05c06733053b007756c12b0bc41b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\
",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt6g4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-prw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:33Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.440741 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwjmr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"373c7fdc-f660-4323-a503-3e7a0dedb865\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6af8039885b93a07e1e521a89a1d240f914eb77cba9bb6db24d7ecdb1ba3954d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnjzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de62d60183229189c692a0ff01db0e36332ec
de2d2711846121c899befbbde08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnjzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwjmr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:33Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.458123 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd790c5e-461b-4682-88a8-e915bb3558fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56160ed128040dde583c3142616d9af9180427b9e32520d6330fd166e9c4207b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57fe092832b333e8fa6389429ad3fd571c3e0162f619fdd69efcb0b84e88bac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b3741b79ca6fb3fc12acaa290b9d7accb765f36a88f0b8fb6e6362c6596a58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f6f4abb61df9828cb9bec57c2f2fc4709a6cf12afc72467af5466723a4335f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T08:44:14Z\\\"
,\\\"message\\\":\\\"file observer\\\\nW1209 08:44:13.973493 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 08:44:13.973680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 08:44:13.974644 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-881789052/tls.crt::/tmp/serving-cert-881789052/tls.key\\\\\\\"\\\\nI1209 08:44:14.287293 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 08:44:14.289941 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 08:44:14.289961 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 08:44:14.289980 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 08:44:14.289985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 08:44:14.295151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 08:44:14.295345 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1209 08:44:14.295166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1209 08:44:14.295383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 08:44:14.295466 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 08:44:14.295492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 08:44:14.295497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 08:44:14.295501 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1209 08:44:14.297957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a164888526a3ab84fbd409540bc54403fadd586a0583e323d5ca1b10c81cbfc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc97
37159190cf3281bcb4882cec32341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:33Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.470022 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44efb750-799f-44e4-831e-641b81c2f427\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9e51326999eccaabe872d93f35025f76fe1b7fa116ee0e903b008aabc3f1c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99f4b7e7d6c283d8d774a43140f27cf0418c5922dd1a11fe8d090d4d3de10661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://868a90cf80a3cef69e48a0e930c1fb742df308553d072e9ca12fec310bdce1aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1848f8b358c964159350bf72a27a84316dbc4051b5164004527770877f4e280c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://1848f8b358c964159350bf72a27a84316dbc4051b5164004527770877f4e280c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:33Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.492779 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e5ed48f-0d6b-40f5-b3f7-03a423521b9a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae9f3bf4166653ae69b3f01c8cb49b5a8eab5f682cb50f4e8c62973615693822\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49c5c23fc6ac0350de2fc1e2cdd01a326ff50bf4bda4a326e99abf9415ff82c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://941d38be9a1314a3460bc00d016f09a53b7634ce19d321301b9bbc8943d94b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74b17422905bc82f40afb7f46066985ffa94d33cba16d2fd3328bd6aa0df365b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29040d6b65393bf5f762155aba99cda093a423961bd73adb30abccbcbd2fd105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\
\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663914cd797e1431ab14d696de18d8098d1ac35f9162f36dc5e6a743ba949a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663914cd797e1431ab14d696de18d8098d1ac35f9162f36dc5e6a743ba949a44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ac2f54090b4b0620217d2c5b73a031a6901b4df7ed6291bf82b3b35c82078f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28ac2f54090b4b0620217d2c5b73a031a6901b4df7ed6291bf82b3b35c82078f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}}},{\\\"co
ntainerID\\\":\\\"cri-o://42191f7c1da1bf0b03474758a4875d12c6db6aa54fb455fcb92e00957d1923e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42191f7c1da1bf0b03474758a4875d12c6db6aa54fb455fcb92e00957d1923e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:33Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.499956 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.499991 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:33 crc kubenswrapper[4786]: 
I1209 08:45:33.500006 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.500024 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.500036 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:33Z","lastTransitionTime":"2025-12-09T08:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.507330 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mq8pp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a680f02-0367-4814-9c73-aa5959bf952f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c30204fb4323
c7780efec70dabc9074bcda3d7a20c81af161920f5c74075b4b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98fg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mq8pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:33Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.524967 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://550395f05c51e2fe4115f3e461577ebaab689ef68502025a76ca1bbeef9d6c2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://550395f05c51e2fe4115f3e461577ebaab689ef68502025a76ca1bbeef9d6c2e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T08:45:19Z\\\",\\\"message\\\":\\\"t/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI1209 08:45:18.767580 7145 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 08:45:18.768547 7145 reflector.go:311] Stopping reflector *v1.Node (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI1209 08:45:18.768424 7145 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 08:45:18.771070 7145 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1209 08:45:18.771097 7145 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1209 08:45:18.772648 7145 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1209 08:45:18.772755 7145 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 08:45:18.772823 7145 factory.go:656] Stopping watch factory\\\\nI1209 08:45:18.772852 7145 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 08:45:18.772836 7145 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1209 08:45:18.772908 7145 ovnkube.go:599] Stopped ovnkube\\\\nI1209 08:45:18.772990 7145 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1209 08:45:18.773192 7145 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:45:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7sr4q_openshift-ovn-kubernetes(c8ebe4be-af09-4f22-9dee-af5f7d34bccf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63
813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7sr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:33Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.603153 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.603224 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.603239 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.603261 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.603275 4786 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:33Z","lastTransitionTime":"2025-12-09T08:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.706213 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.706281 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.706291 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.706308 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.706319 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:33Z","lastTransitionTime":"2025-12-09T08:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.809321 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.809375 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.809383 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.809399 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.809408 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:33Z","lastTransitionTime":"2025-12-09T08:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.912011 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.912049 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.912060 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.912074 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:33 crc kubenswrapper[4786]: I1209 08:45:33.912083 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:33Z","lastTransitionTime":"2025-12-09T08:45:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:34 crc kubenswrapper[4786]: I1209 08:45:34.014862 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:34 crc kubenswrapper[4786]: I1209 08:45:34.014905 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:34 crc kubenswrapper[4786]: I1209 08:45:34.014919 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:34 crc kubenswrapper[4786]: I1209 08:45:34.014939 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:34 crc kubenswrapper[4786]: I1209 08:45:34.014954 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:34Z","lastTransitionTime":"2025-12-09T08:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:34 crc kubenswrapper[4786]: I1209 08:45:34.116967 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:34 crc kubenswrapper[4786]: I1209 08:45:34.117011 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:34 crc kubenswrapper[4786]: I1209 08:45:34.117026 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:34 crc kubenswrapper[4786]: I1209 08:45:34.117046 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:34 crc kubenswrapper[4786]: I1209 08:45:34.117106 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:34Z","lastTransitionTime":"2025-12-09T08:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:34 crc kubenswrapper[4786]: I1209 08:45:34.187538 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:45:34 crc kubenswrapper[4786]: E1209 08:45:34.187741 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 08:45:34 crc kubenswrapper[4786]: I1209 08:45:34.220236 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:34 crc kubenswrapper[4786]: I1209 08:45:34.220314 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:34 crc kubenswrapper[4786]: I1209 08:45:34.220349 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:34 crc kubenswrapper[4786]: I1209 08:45:34.220373 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:34 crc kubenswrapper[4786]: I1209 08:45:34.220385 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:34Z","lastTransitionTime":"2025-12-09T08:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:34 crc kubenswrapper[4786]: I1209 08:45:34.324240 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:34 crc kubenswrapper[4786]: I1209 08:45:34.324285 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:34 crc kubenswrapper[4786]: I1209 08:45:34.324300 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:34 crc kubenswrapper[4786]: I1209 08:45:34.324321 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:34 crc kubenswrapper[4786]: I1209 08:45:34.324335 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:34Z","lastTransitionTime":"2025-12-09T08:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:34 crc kubenswrapper[4786]: I1209 08:45:34.427250 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:34 crc kubenswrapper[4786]: I1209 08:45:34.427307 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:34 crc kubenswrapper[4786]: I1209 08:45:34.427323 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:34 crc kubenswrapper[4786]: I1209 08:45:34.427343 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:34 crc kubenswrapper[4786]: I1209 08:45:34.427359 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:34Z","lastTransitionTime":"2025-12-09T08:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:34 crc kubenswrapper[4786]: I1209 08:45:34.530535 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:34 crc kubenswrapper[4786]: I1209 08:45:34.530609 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:34 crc kubenswrapper[4786]: I1209 08:45:34.530623 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:34 crc kubenswrapper[4786]: I1209 08:45:34.530646 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:34 crc kubenswrapper[4786]: I1209 08:45:34.530662 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:34Z","lastTransitionTime":"2025-12-09T08:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:34 crc kubenswrapper[4786]: I1209 08:45:34.633254 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:34 crc kubenswrapper[4786]: I1209 08:45:34.633336 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:34 crc kubenswrapper[4786]: I1209 08:45:34.633351 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:34 crc kubenswrapper[4786]: I1209 08:45:34.633374 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:34 crc kubenswrapper[4786]: I1209 08:45:34.633388 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:34Z","lastTransitionTime":"2025-12-09T08:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:34 crc kubenswrapper[4786]: I1209 08:45:34.736025 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:34 crc kubenswrapper[4786]: I1209 08:45:34.736068 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:34 crc kubenswrapper[4786]: I1209 08:45:34.736077 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:34 crc kubenswrapper[4786]: I1209 08:45:34.736093 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:34 crc kubenswrapper[4786]: I1209 08:45:34.736105 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:34Z","lastTransitionTime":"2025-12-09T08:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:34 crc kubenswrapper[4786]: I1209 08:45:34.838783 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:34 crc kubenswrapper[4786]: I1209 08:45:34.839177 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:34 crc kubenswrapper[4786]: I1209 08:45:34.839256 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:34 crc kubenswrapper[4786]: I1209 08:45:34.839390 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:34 crc kubenswrapper[4786]: I1209 08:45:34.839486 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:34Z","lastTransitionTime":"2025-12-09T08:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:34 crc kubenswrapper[4786]: I1209 08:45:34.943961 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:34 crc kubenswrapper[4786]: I1209 08:45:34.944822 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:34 crc kubenswrapper[4786]: I1209 08:45:34.944923 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:34 crc kubenswrapper[4786]: I1209 08:45:34.945017 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:34 crc kubenswrapper[4786]: I1209 08:45:34.945117 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:34Z","lastTransitionTime":"2025-12-09T08:45:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.048225 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.048256 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.048264 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.048278 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.048288 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:35Z","lastTransitionTime":"2025-12-09T08:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.150630 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.150668 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.150678 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.150694 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.150704 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:35Z","lastTransitionTime":"2025-12-09T08:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.187011 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.187081 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v58s4" Dec 09 08:45:35 crc kubenswrapper[4786]: E1209 08:45:35.187131 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.187144 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 08:45:35 crc kubenswrapper[4786]: E1209 08:45:35.187244 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v58s4" podUID="e6f68306-ac39-4d61-8c27-12d69cc49a4f" Dec 09 08:45:35 crc kubenswrapper[4786]: E1209 08:45:35.187301 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.202817 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4192277cec4bacdf5b03306055ddad9476d291a086f54f61e8e84690c81bb459\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"r
ecursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:35Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.212544 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-prw2c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f37b1b5c-1cdc-4a08-9ea3-03dad00b5797\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4d772471b770813d3613512f5748b5bbd05c06733053b007756c12b0bc41b06\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pt6g4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:22Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-prw2c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:35Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.227256 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd790c5e-461b-4682-88a8-e915bb3558fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56160ed128040dde583c3142616d9af9180427b9e32520d6330fd166e9c4207b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f57fe092832b333e8fa6389429ad3fd571c3e0162f619fdd69efcb0b84e88bac\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7b3741b79ca6fb3fc12acaa290b9d7accb765f36a88f0b8fb6e6362c6596a58\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4f6f4abb61df9828cb9bec57c2f2fc4709a6cf12afc72467af5466723a4335f3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-09T08:44:14Z\\\"
,\\\"message\\\":\\\"file observer\\\\nW1209 08:44:13.973493 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1209 08:44:13.973680 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1209 08:44:13.974644 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-881789052/tls.crt::/tmp/serving-cert-881789052/tls.key\\\\\\\"\\\\nI1209 08:44:14.287293 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1209 08:44:14.289941 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1209 08:44:14.289961 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1209 08:44:14.289980 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1209 08:44:14.289985 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1209 08:44:14.295151 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1209 08:44:14.295345 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1209 08:44:14.295166 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1209 08:44:14.295383 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1209 08:44:14.295466 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1209 08:44:14.295492 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1209 08:44:14.295497 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1209 08:44:14.295501 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1209 08:44:14.297957 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:13Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a164888526a3ab84fbd409540bc54403fadd586a0583e323d5ca1b10c81cbfc4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:59Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://498159dfdc89366661bfac4c0c796abfc97
37159190cf3281bcb4882cec32341\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:35Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.238018 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"44efb750-799f-44e4-831e-641b81c2f427\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a9e51326999eccaabe872d93f35025f76fe1b7fa116ee0e903b008aabc3f1c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://99f4b7e7d6c283d8d774a43140f27cf0418c5922dd1a11fe8d090d4d3de10661\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://868a90cf80a3cef69e48a0e930c1fb742df308553d072e9ca12fec310bdce1aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1848f8b358c964159350bf72a27a84316dbc4051b5164004527770877f4e280c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://1848f8b358c964159350bf72a27a84316dbc4051b5164004527770877f4e280c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:35Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.253669 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.253703 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.253712 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.253727 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.253738 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:35Z","lastTransitionTime":"2025-12-09T08:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.258668 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e5ed48f-0d6b-40f5-b3f7-03a423521b9a\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:59Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae9f3bf4166653ae69b3f01c8cb49b5a8eab5f682cb50f4e8c62973615693822\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49c5c23fc6ac0350de2fc1e2cdd01a326ff50bf4bda4a326e99abf9415ff82c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://941d38be9a1314a3460bc00d016f09a53b7634ce19d321301b9bbc8943d94b87\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://74b17422905bc82f40afb7f46066985ffa94d33cba16d2fd3328bd6aa0df365b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://29040d6b65393bf5f762155aba99cda093a423961bd73adb30abccbcbd2fd105\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://663914cd797e1431ab14d696de18d8098d1ac35f9162f36dc5e6a743ba949a44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://663914cd797e1431ab14d696de18d8098d1ac35f9162f36dc5e6a743ba949a44\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://28ac2f54090b4b0620217d2c5b73a031a6901b4df7ed6291bf82b3b35c82078f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://28ac2f54090b4b0620217d2c5b73a031a6901b4df7ed6291bf82b3b35c82078f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://42191f7c1da1bf0b03474758a4875d12c6db6aa54fb455fcb92e00957d1923e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42191f7c1da1bf0b03474758a4875d12c6db6aa54fb455fcb92e00957d1923e3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-12-09T08:43:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:35Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.272225 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-mq8pp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a680f02-0367-4814-9c73-aa5959bf952f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c30204fb4323c7780efec70dabc9074bcda3d7a20c81af161920f5c74075b4b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-98fg6\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-mq8pp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:35Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.298822 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://550395f05c51e2fe4115f3e461577ebaab689ef68502025a76ca1bbeef9d6c2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://550395f05c51e2fe4115f3e461577ebaab689ef68502025a76ca1bbeef9d6c2e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T08:45:19Z\\\",\\\"message\\\":\\\"t/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI1209 08:45:18.767580 7145 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 08:45:18.768547 7145 reflector.go:311] Stopping reflector *v1.Node (0s) from 
k8s.io/client-go/informers/factory.go:160\\\\nI1209 08:45:18.768424 7145 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1209 08:45:18.771070 7145 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1209 08:45:18.771097 7145 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1209 08:45:18.772648 7145 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1209 08:45:18.772755 7145 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1209 08:45:18.772823 7145 factory.go:656] Stopping watch factory\\\\nI1209 08:45:18.772852 7145 handler.go:208] Removed *v1.Node event handler 2\\\\nI1209 08:45:18.772836 7145 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1209 08:45:18.772908 7145 ovnkube.go:599] Stopped ovnkube\\\\nI1209 08:45:18.772990 7145 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1209 08:45:18.773192 7145 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:45:17Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-7sr4q_openshift-ovn-kubernetes(c8ebe4be-af09-4f22-9dee-af5f7d34bccf)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://612509bb61ee6e0b63
813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ksqff\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-7sr4q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:35Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.310854 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwjmr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"373c7fdc-f660-4323-a503-3e7a0dedb865\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6af8039885b93a07e1e521a89a1d240f914eb77cba9bb6db24d7ecdb1ba3954d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnjzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de62d60183229189c692a0ff01db0e36332ec
de2d2711846121c899befbbde08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pnjzh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:29Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-jwjmr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:35Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.323598 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:35Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.335194 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50593d0d53a0888b66905e18d3ef9ae44e67a9d290aad760710060fb3ed14393\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e4ad85cbd79a4967d59f069ebadb27a1cd79229d287c2f1c9955396c71a1506\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:35Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.348756 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-27hfj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a0a865e2-8504-473d-a23f-fc682d053a9f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93a181a8bffa8c0699688d35605838622cbb0c39dc19d52a163484d5fddc2406\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2e7465beaa6a3df0b0ae7338d3035b58b92078b8d0b9646f5b59eccb66b4a505\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-09T08:45:07Z\\\",\\\"message\\\":\\\"2025-12-09T08:44:22+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_826ae26f-395b-43bf-ae1a-4535a65bdfb4\\\\n2025-12-09T08:44:22+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_826ae26f-395b-43bf-ae1a-4535a65bdfb4 to /host/opt/cni/bin/\\\\n2025-12-09T08:44:22Z [verbose] multus-daemon started\\\\n2025-12-09T08:44:22Z [verbose] 
Readiness Indicator file check\\\\n2025-12-09T08:45:07Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:45:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pcv4g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-27hfj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:35Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.356999 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.357037 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.357050 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.357066 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.357078 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:35Z","lastTransitionTime":"2025-12-09T08:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.358749 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-v58s4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e6f68306-ac39-4d61-8c27-12d69cc49a4f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhg5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lhg5l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:31Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-v58s4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:35Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:35 crc 
kubenswrapper[4786]: I1209 08:45:35.370830 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d3d4420-fb67-443a-8b97-195c4cf223e9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f7aff8b9b062294cc403e12f60e4e694e626aa9780ae652380afc5d5ebeb25e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ce8749c52af3d1b2b156b9adce27a80b026abdb2dc22308199f1e822ec9867\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e77efb0cd063e6d566f75078ce7a1f2cfd31508d5024a101ebcb3aa3c609aaae\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ac36b7f7721d7c5cdcb9e2254974dab062cca2723d4e723582061ed2696e04c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:35Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.380178 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"550af6de-acd3-45b7-be48-63afa30356f4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:43:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9b1b4fc4a2ac23c92b18ce6bd588fe1c0e5e7fead1592aa7795d2f2ad2071507\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:43:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4a0478a8fa00a16c9f91091771b9e219317b593723cf90ae2f8c5f7335bd88c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4a0478a8fa00a16c9f91091771b9e219317b593723cf90ae2f8c5f7335bd88c5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:43:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:43:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:43:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:35Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.390278 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:35Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.399579 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da17081a769df0b66be89013cce4c70d442cfd51b32fdb623bf94052cbcad5d6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-09T08:45:35Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.408444 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:35Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.417980 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"60c502d4-7f9e-4d39-a197-fa70dc4a56d1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c2d2cb4aa779f69cd01c7af8dd377a93fa75bc95ded7acd32a891de283722a11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de20e796c231cbc6d525978434351e46d932f7d9
9236a97c078b8147ce5dabd5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9wcct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-86k5n\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:35Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.437287 4786 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-zbwb7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9e48cbd1-e567-45b7-902a-fb4a0daa2fd3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-09T08:44:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://379f087d607d19a0cb71481b2a7236d2d4769898f612849793c7d0d0f8709e02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:44:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a7dab4730c0fbc210cf2e86830d4d971a43f38dd40426598686bfd0f85670943\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:20Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9c8a9f0c5670a68092f1a64419b123a176a464e8d9cdfb2fdb6617e9fe646fb6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:22Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:21Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://eed214e02f32597f6a7e3c9be1efdf6f0b57a0dbdcc3dd7267d007bcd12279fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fbb1
ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0fbb1ca76382d8d8f9a9de3d53ebca0e585b0362c5d782be9e8df738942b4959\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2ec37a618991f1e7e06c04c95db8c40ac18b451bcb08ba568bad540481a740c6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2ec37a618991f1e7e06c04c95db8c40ac18b451bcb08ba568bad540481a740c6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:28Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0c10eb59172a342201e5c0fae9b836d296608057f6613791cab920df8aaa5850\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0c10eb59172a342201e5c0fae9b836d296608057f6613791cab920df8aaa5850\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-09T08:44:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-09T08:44:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-p2bms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-09T08:44:17Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-zbwb7\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:35Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.460090 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.460138 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.460147 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.460165 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.460176 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:35Z","lastTransitionTime":"2025-12-09T08:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.562663 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.562733 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.562749 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.562769 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.562784 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:35Z","lastTransitionTime":"2025-12-09T08:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.661913 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e6f68306-ac39-4d61-8c27-12d69cc49a4f-metrics-certs\") pod \"network-metrics-daemon-v58s4\" (UID: \"e6f68306-ac39-4d61-8c27-12d69cc49a4f\") " pod="openshift-multus/network-metrics-daemon-v58s4" Dec 09 08:45:35 crc kubenswrapper[4786]: E1209 08:45:35.662155 4786 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 08:45:35 crc kubenswrapper[4786]: E1209 08:45:35.662234 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e6f68306-ac39-4d61-8c27-12d69cc49a4f-metrics-certs podName:e6f68306-ac39-4d61-8c27-12d69cc49a4f nodeName:}" failed. No retries permitted until 2025-12-09 08:46:39.662214414 +0000 UTC m=+165.545835650 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e6f68306-ac39-4d61-8c27-12d69cc49a4f-metrics-certs") pod "network-metrics-daemon-v58s4" (UID: "e6f68306-ac39-4d61-8c27-12d69cc49a4f") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.665554 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.665607 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.665622 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.665645 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.665659 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:35Z","lastTransitionTime":"2025-12-09T08:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.767932 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.768029 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.768039 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.768054 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.768064 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:35Z","lastTransitionTime":"2025-12-09T08:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.870578 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.870620 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.870630 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.870644 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.870654 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:35Z","lastTransitionTime":"2025-12-09T08:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.973778 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.973848 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.973864 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.973886 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:35 crc kubenswrapper[4786]: I1209 08:45:35.973902 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:35Z","lastTransitionTime":"2025-12-09T08:45:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:36 crc kubenswrapper[4786]: I1209 08:45:36.076843 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:36 crc kubenswrapper[4786]: I1209 08:45:36.076895 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:36 crc kubenswrapper[4786]: I1209 08:45:36.076905 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:36 crc kubenswrapper[4786]: I1209 08:45:36.076923 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:36 crc kubenswrapper[4786]: I1209 08:45:36.076935 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:36Z","lastTransitionTime":"2025-12-09T08:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:36 crc kubenswrapper[4786]: I1209 08:45:36.180684 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:36 crc kubenswrapper[4786]: I1209 08:45:36.180744 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:36 crc kubenswrapper[4786]: I1209 08:45:36.180762 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:36 crc kubenswrapper[4786]: I1209 08:45:36.180789 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:36 crc kubenswrapper[4786]: I1209 08:45:36.180806 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:36Z","lastTransitionTime":"2025-12-09T08:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:36 crc kubenswrapper[4786]: I1209 08:45:36.188033 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:45:36 crc kubenswrapper[4786]: E1209 08:45:36.188263 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 08:45:36 crc kubenswrapper[4786]: I1209 08:45:36.283506 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:36 crc kubenswrapper[4786]: I1209 08:45:36.283568 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:36 crc kubenswrapper[4786]: I1209 08:45:36.283582 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:36 crc kubenswrapper[4786]: I1209 08:45:36.283610 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:36 crc kubenswrapper[4786]: I1209 08:45:36.283628 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:36Z","lastTransitionTime":"2025-12-09T08:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:36 crc kubenswrapper[4786]: I1209 08:45:36.386582 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:36 crc kubenswrapper[4786]: I1209 08:45:36.387009 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:36 crc kubenswrapper[4786]: I1209 08:45:36.387147 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:36 crc kubenswrapper[4786]: I1209 08:45:36.387291 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:36 crc kubenswrapper[4786]: I1209 08:45:36.387453 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:36Z","lastTransitionTime":"2025-12-09T08:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:36 crc kubenswrapper[4786]: I1209 08:45:36.490164 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:36 crc kubenswrapper[4786]: I1209 08:45:36.490197 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:36 crc kubenswrapper[4786]: I1209 08:45:36.490208 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:36 crc kubenswrapper[4786]: I1209 08:45:36.490223 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:36 crc kubenswrapper[4786]: I1209 08:45:36.490232 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:36Z","lastTransitionTime":"2025-12-09T08:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:36 crc kubenswrapper[4786]: I1209 08:45:36.593612 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:36 crc kubenswrapper[4786]: I1209 08:45:36.593787 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:36 crc kubenswrapper[4786]: I1209 08:45:36.593815 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:36 crc kubenswrapper[4786]: I1209 08:45:36.593841 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:36 crc kubenswrapper[4786]: I1209 08:45:36.593876 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:36Z","lastTransitionTime":"2025-12-09T08:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:36 crc kubenswrapper[4786]: I1209 08:45:36.697293 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:36 crc kubenswrapper[4786]: I1209 08:45:36.697727 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:36 crc kubenswrapper[4786]: I1209 08:45:36.697866 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:36 crc kubenswrapper[4786]: I1209 08:45:36.697982 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:36 crc kubenswrapper[4786]: I1209 08:45:36.698095 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:36Z","lastTransitionTime":"2025-12-09T08:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:36 crc kubenswrapper[4786]: I1209 08:45:36.800550 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:36 crc kubenswrapper[4786]: I1209 08:45:36.800585 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:36 crc kubenswrapper[4786]: I1209 08:45:36.800593 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:36 crc kubenswrapper[4786]: I1209 08:45:36.800606 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:36 crc kubenswrapper[4786]: I1209 08:45:36.800616 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:36Z","lastTransitionTime":"2025-12-09T08:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:36 crc kubenswrapper[4786]: I1209 08:45:36.903168 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:36 crc kubenswrapper[4786]: I1209 08:45:36.903214 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:36 crc kubenswrapper[4786]: I1209 08:45:36.903227 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:36 crc kubenswrapper[4786]: I1209 08:45:36.903242 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:36 crc kubenswrapper[4786]: I1209 08:45:36.903254 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:36Z","lastTransitionTime":"2025-12-09T08:45:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:37 crc kubenswrapper[4786]: I1209 08:45:37.006012 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:37 crc kubenswrapper[4786]: I1209 08:45:37.006106 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:37 crc kubenswrapper[4786]: I1209 08:45:37.006121 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:37 crc kubenswrapper[4786]: I1209 08:45:37.006139 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:37 crc kubenswrapper[4786]: I1209 08:45:37.006151 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:37Z","lastTransitionTime":"2025-12-09T08:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:37 crc kubenswrapper[4786]: I1209 08:45:37.109092 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:37 crc kubenswrapper[4786]: I1209 08:45:37.109396 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:37 crc kubenswrapper[4786]: I1209 08:45:37.109478 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:37 crc kubenswrapper[4786]: I1209 08:45:37.109547 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:37 crc kubenswrapper[4786]: I1209 08:45:37.109619 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:37Z","lastTransitionTime":"2025-12-09T08:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:37 crc kubenswrapper[4786]: I1209 08:45:37.187763 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 08:45:37 crc kubenswrapper[4786]: I1209 08:45:37.187811 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 08:45:37 crc kubenswrapper[4786]: I1209 08:45:37.187806 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-v58s4" Dec 09 08:45:37 crc kubenswrapper[4786]: E1209 08:45:37.188391 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 08:45:37 crc kubenswrapper[4786]: E1209 08:45:37.188984 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 08:45:37 crc kubenswrapper[4786]: E1209 08:45:37.189046 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v58s4" podUID="e6f68306-ac39-4d61-8c27-12d69cc49a4f" Dec 09 08:45:37 crc kubenswrapper[4786]: I1209 08:45:37.220365 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:37 crc kubenswrapper[4786]: I1209 08:45:37.220488 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:37 crc kubenswrapper[4786]: I1209 08:45:37.220513 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:37 crc kubenswrapper[4786]: I1209 08:45:37.220546 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:37 crc kubenswrapper[4786]: I1209 08:45:37.220573 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:37Z","lastTransitionTime":"2025-12-09T08:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:37 crc kubenswrapper[4786]: I1209 08:45:37.324055 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:37 crc kubenswrapper[4786]: I1209 08:45:37.324113 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:37 crc kubenswrapper[4786]: I1209 08:45:37.324123 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:37 crc kubenswrapper[4786]: I1209 08:45:37.324141 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:37 crc kubenswrapper[4786]: I1209 08:45:37.324150 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:37Z","lastTransitionTime":"2025-12-09T08:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:37 crc kubenswrapper[4786]: I1209 08:45:37.427381 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:37 crc kubenswrapper[4786]: I1209 08:45:37.427702 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:37 crc kubenswrapper[4786]: I1209 08:45:37.427770 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:37 crc kubenswrapper[4786]: I1209 08:45:37.427877 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:37 crc kubenswrapper[4786]: I1209 08:45:37.427951 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:37Z","lastTransitionTime":"2025-12-09T08:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:37 crc kubenswrapper[4786]: I1209 08:45:37.531414 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:37 crc kubenswrapper[4786]: I1209 08:45:37.531490 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:37 crc kubenswrapper[4786]: I1209 08:45:37.531502 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:37 crc kubenswrapper[4786]: I1209 08:45:37.531519 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:37 crc kubenswrapper[4786]: I1209 08:45:37.531529 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:37Z","lastTransitionTime":"2025-12-09T08:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:37 crc kubenswrapper[4786]: I1209 08:45:37.634484 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:37 crc kubenswrapper[4786]: I1209 08:45:37.634539 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:37 crc kubenswrapper[4786]: I1209 08:45:37.634555 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:37 crc kubenswrapper[4786]: I1209 08:45:37.634578 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:37 crc kubenswrapper[4786]: I1209 08:45:37.634596 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:37Z","lastTransitionTime":"2025-12-09T08:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:37 crc kubenswrapper[4786]: I1209 08:45:37.737597 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:37 crc kubenswrapper[4786]: I1209 08:45:37.737636 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:37 crc kubenswrapper[4786]: I1209 08:45:37.737645 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:37 crc kubenswrapper[4786]: I1209 08:45:37.737661 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:37 crc kubenswrapper[4786]: I1209 08:45:37.737673 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:37Z","lastTransitionTime":"2025-12-09T08:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:37 crc kubenswrapper[4786]: I1209 08:45:37.841582 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:37 crc kubenswrapper[4786]: I1209 08:45:37.841650 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:37 crc kubenswrapper[4786]: I1209 08:45:37.841671 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:37 crc kubenswrapper[4786]: I1209 08:45:37.841695 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:37 crc kubenswrapper[4786]: I1209 08:45:37.841713 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:37Z","lastTransitionTime":"2025-12-09T08:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:37 crc kubenswrapper[4786]: I1209 08:45:37.944528 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:37 crc kubenswrapper[4786]: I1209 08:45:37.944573 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:37 crc kubenswrapper[4786]: I1209 08:45:37.944587 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:37 crc kubenswrapper[4786]: I1209 08:45:37.944602 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:37 crc kubenswrapper[4786]: I1209 08:45:37.944611 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:37Z","lastTransitionTime":"2025-12-09T08:45:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:38 crc kubenswrapper[4786]: I1209 08:45:38.047305 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:38 crc kubenswrapper[4786]: I1209 08:45:38.047357 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:38 crc kubenswrapper[4786]: I1209 08:45:38.047372 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:38 crc kubenswrapper[4786]: I1209 08:45:38.047394 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:38 crc kubenswrapper[4786]: I1209 08:45:38.047410 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:38Z","lastTransitionTime":"2025-12-09T08:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:38 crc kubenswrapper[4786]: I1209 08:45:38.149494 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:38 crc kubenswrapper[4786]: I1209 08:45:38.149532 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:38 crc kubenswrapper[4786]: I1209 08:45:38.149543 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:38 crc kubenswrapper[4786]: I1209 08:45:38.149559 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:38 crc kubenswrapper[4786]: I1209 08:45:38.149570 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:38Z","lastTransitionTime":"2025-12-09T08:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:38 crc kubenswrapper[4786]: I1209 08:45:38.187934 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:45:38 crc kubenswrapper[4786]: E1209 08:45:38.188501 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 08:45:38 crc kubenswrapper[4786]: I1209 08:45:38.252873 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:38 crc kubenswrapper[4786]: I1209 08:45:38.252936 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:38 crc kubenswrapper[4786]: I1209 08:45:38.252949 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:38 crc kubenswrapper[4786]: I1209 08:45:38.252967 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:38 crc kubenswrapper[4786]: I1209 08:45:38.252977 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:38Z","lastTransitionTime":"2025-12-09T08:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:38 crc kubenswrapper[4786]: I1209 08:45:38.355584 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:38 crc kubenswrapper[4786]: I1209 08:45:38.355661 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:38 crc kubenswrapper[4786]: I1209 08:45:38.355699 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:38 crc kubenswrapper[4786]: I1209 08:45:38.355721 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:38 crc kubenswrapper[4786]: I1209 08:45:38.355735 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:38Z","lastTransitionTime":"2025-12-09T08:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:38 crc kubenswrapper[4786]: I1209 08:45:38.460774 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:38 crc kubenswrapper[4786]: I1209 08:45:38.460848 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:38 crc kubenswrapper[4786]: I1209 08:45:38.460883 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:38 crc kubenswrapper[4786]: I1209 08:45:38.460985 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:38 crc kubenswrapper[4786]: I1209 08:45:38.461012 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:38Z","lastTransitionTime":"2025-12-09T08:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:38 crc kubenswrapper[4786]: I1209 08:45:38.564689 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:38 crc kubenswrapper[4786]: I1209 08:45:38.564735 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:38 crc kubenswrapper[4786]: I1209 08:45:38.564747 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:38 crc kubenswrapper[4786]: I1209 08:45:38.564765 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:38 crc kubenswrapper[4786]: I1209 08:45:38.564777 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:38Z","lastTransitionTime":"2025-12-09T08:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:38 crc kubenswrapper[4786]: I1209 08:45:38.669245 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:38 crc kubenswrapper[4786]: I1209 08:45:38.669395 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:38 crc kubenswrapper[4786]: I1209 08:45:38.669442 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:38 crc kubenswrapper[4786]: I1209 08:45:38.669464 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:38 crc kubenswrapper[4786]: I1209 08:45:38.669476 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:38Z","lastTransitionTime":"2025-12-09T08:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:38 crc kubenswrapper[4786]: I1209 08:45:38.772398 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:38 crc kubenswrapper[4786]: I1209 08:45:38.772456 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:38 crc kubenswrapper[4786]: I1209 08:45:38.772470 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:38 crc kubenswrapper[4786]: I1209 08:45:38.772485 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:38 crc kubenswrapper[4786]: I1209 08:45:38.772494 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:38Z","lastTransitionTime":"2025-12-09T08:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:38 crc kubenswrapper[4786]: I1209 08:45:38.874486 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:38 crc kubenswrapper[4786]: I1209 08:45:38.874526 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:38 crc kubenswrapper[4786]: I1209 08:45:38.874544 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:38 crc kubenswrapper[4786]: I1209 08:45:38.874562 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:38 crc kubenswrapper[4786]: I1209 08:45:38.874574 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:38Z","lastTransitionTime":"2025-12-09T08:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:38 crc kubenswrapper[4786]: I1209 08:45:38.977095 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:38 crc kubenswrapper[4786]: I1209 08:45:38.977137 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:38 crc kubenswrapper[4786]: I1209 08:45:38.977145 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:38 crc kubenswrapper[4786]: I1209 08:45:38.977163 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:38 crc kubenswrapper[4786]: I1209 08:45:38.977172 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:38Z","lastTransitionTime":"2025-12-09T08:45:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:39 crc kubenswrapper[4786]: I1209 08:45:39.080391 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:39 crc kubenswrapper[4786]: I1209 08:45:39.080516 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:39 crc kubenswrapper[4786]: I1209 08:45:39.080550 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:39 crc kubenswrapper[4786]: I1209 08:45:39.080584 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:39 crc kubenswrapper[4786]: I1209 08:45:39.080606 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:39Z","lastTransitionTime":"2025-12-09T08:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:39 crc kubenswrapper[4786]: I1209 08:45:39.183531 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:39 crc kubenswrapper[4786]: I1209 08:45:39.183603 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:39 crc kubenswrapper[4786]: I1209 08:45:39.183621 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:39 crc kubenswrapper[4786]: I1209 08:45:39.183651 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:39 crc kubenswrapper[4786]: I1209 08:45:39.183678 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:39Z","lastTransitionTime":"2025-12-09T08:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:39 crc kubenswrapper[4786]: I1209 08:45:39.187754 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 08:45:39 crc kubenswrapper[4786]: I1209 08:45:39.187839 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 08:45:39 crc kubenswrapper[4786]: I1209 08:45:39.187781 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-v58s4" Dec 09 08:45:39 crc kubenswrapper[4786]: E1209 08:45:39.187962 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 08:45:39 crc kubenswrapper[4786]: E1209 08:45:39.188154 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v58s4" podUID="e6f68306-ac39-4d61-8c27-12d69cc49a4f" Dec 09 08:45:39 crc kubenswrapper[4786]: E1209 08:45:39.188298 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 08:45:39 crc kubenswrapper[4786]: I1209 08:45:39.286512 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:39 crc kubenswrapper[4786]: I1209 08:45:39.286592 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:39 crc kubenswrapper[4786]: I1209 08:45:39.286605 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:39 crc kubenswrapper[4786]: I1209 08:45:39.286649 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:39 crc kubenswrapper[4786]: I1209 08:45:39.286663 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:39Z","lastTransitionTime":"2025-12-09T08:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:39 crc kubenswrapper[4786]: I1209 08:45:39.389252 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:39 crc kubenswrapper[4786]: I1209 08:45:39.389294 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:39 crc kubenswrapper[4786]: I1209 08:45:39.389322 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:39 crc kubenswrapper[4786]: I1209 08:45:39.389339 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:39 crc kubenswrapper[4786]: I1209 08:45:39.389349 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:39Z","lastTransitionTime":"2025-12-09T08:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:39 crc kubenswrapper[4786]: I1209 08:45:39.492807 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:39 crc kubenswrapper[4786]: I1209 08:45:39.492905 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:39 crc kubenswrapper[4786]: I1209 08:45:39.492924 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:39 crc kubenswrapper[4786]: I1209 08:45:39.492953 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:39 crc kubenswrapper[4786]: I1209 08:45:39.492972 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:39Z","lastTransitionTime":"2025-12-09T08:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:39 crc kubenswrapper[4786]: I1209 08:45:39.597787 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:39 crc kubenswrapper[4786]: I1209 08:45:39.597861 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:39 crc kubenswrapper[4786]: I1209 08:45:39.597874 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:39 crc kubenswrapper[4786]: I1209 08:45:39.597895 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:39 crc kubenswrapper[4786]: I1209 08:45:39.597910 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:39Z","lastTransitionTime":"2025-12-09T08:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:39 crc kubenswrapper[4786]: I1209 08:45:39.701245 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:39 crc kubenswrapper[4786]: I1209 08:45:39.701315 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:39 crc kubenswrapper[4786]: I1209 08:45:39.701326 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:39 crc kubenswrapper[4786]: I1209 08:45:39.701344 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:39 crc kubenswrapper[4786]: I1209 08:45:39.701356 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:39Z","lastTransitionTime":"2025-12-09T08:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:39 crc kubenswrapper[4786]: I1209 08:45:39.803478 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:39 crc kubenswrapper[4786]: I1209 08:45:39.803525 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:39 crc kubenswrapper[4786]: I1209 08:45:39.803536 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:39 crc kubenswrapper[4786]: I1209 08:45:39.803554 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:39 crc kubenswrapper[4786]: I1209 08:45:39.803565 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:39Z","lastTransitionTime":"2025-12-09T08:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:39 crc kubenswrapper[4786]: I1209 08:45:39.906332 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:39 crc kubenswrapper[4786]: I1209 08:45:39.906459 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:39 crc kubenswrapper[4786]: I1209 08:45:39.906499 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:39 crc kubenswrapper[4786]: I1209 08:45:39.906535 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:39 crc kubenswrapper[4786]: I1209 08:45:39.906555 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:39Z","lastTransitionTime":"2025-12-09T08:45:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.009896 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.009960 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.009974 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.009993 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.010007 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:40Z","lastTransitionTime":"2025-12-09T08:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.112723 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.112799 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.112823 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.112866 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.112928 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:40Z","lastTransitionTime":"2025-12-09T08:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.187405 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:45:40 crc kubenswrapper[4786]: E1209 08:45:40.187719 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.215876 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.215945 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.215969 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.216004 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.216025 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:40Z","lastTransitionTime":"2025-12-09T08:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.319407 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.319504 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.319521 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.319550 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.319568 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:40Z","lastTransitionTime":"2025-12-09T08:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.422747 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.423279 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.423410 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.423603 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.423723 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:40Z","lastTransitionTime":"2025-12-09T08:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.526879 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.526977 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.526995 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.527020 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.527039 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:40Z","lastTransitionTime":"2025-12-09T08:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.630665 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.630725 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.630744 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.630771 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.630789 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:40Z","lastTransitionTime":"2025-12-09T08:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.733687 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.733800 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.733823 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.733848 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.733872 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:40Z","lastTransitionTime":"2025-12-09T08:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.749205 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.749342 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.749469 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.749613 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.749716 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:40Z","lastTransitionTime":"2025-12-09T08:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:40 crc kubenswrapper[4786]: E1209 08:45:40.770684 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"20132b6b-eea8-47f7-95fa-f658d05fe362\\\",\\\"systemUUID\\\":\\\"d02a19a2-cd20-41fc-84e9-65362968df1f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:40Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.776508 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.776539 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.776548 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.776563 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.776574 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:40Z","lastTransitionTime":"2025-12-09T08:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:40 crc kubenswrapper[4786]: E1209 08:45:40.796785 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"20132b6b-eea8-47f7-95fa-f658d05fe362\\\",\\\"systemUUID\\\":\\\"d02a19a2-cd20-41fc-84e9-65362968df1f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:40Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.801513 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.801573 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.801594 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.801617 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.801637 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:40Z","lastTransitionTime":"2025-12-09T08:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:40 crc kubenswrapper[4786]: E1209 08:45:40.821977 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"20132b6b-eea8-47f7-95fa-f658d05fe362\\\",\\\"systemUUID\\\":\\\"d02a19a2-cd20-41fc-84e9-65362968df1f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:40Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.826518 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.826563 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.826580 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.826602 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.826620 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:40Z","lastTransitionTime":"2025-12-09T08:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:40 crc kubenswrapper[4786]: E1209 08:45:40.847953 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"20132b6b-eea8-47f7-95fa-f658d05fe362\\\",\\\"systemUUID\\\":\\\"d02a19a2-cd20-41fc-84e9-65362968df1f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:40Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.852794 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.852859 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.852884 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.852912 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.852935 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:40Z","lastTransitionTime":"2025-12-09T08:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:40 crc kubenswrapper[4786]: E1209 08:45:40.875021 4786 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-09T08:45:40Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-09T08:45:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"20132b6b-eea8-47f7-95fa-f658d05fe362\\\",\\\"systemUUID\\\":\\\"d02a19a2-cd20-41fc-84e9-65362968df1f\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-09T08:45:40Z is after 2025-08-24T17:21:41Z" Dec 09 08:45:40 crc kubenswrapper[4786]: E1209 08:45:40.875356 4786 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.877269 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.877328 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.877350 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.877377 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.877398 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:40Z","lastTransitionTime":"2025-12-09T08:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.979964 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.980031 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.980053 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.980084 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:40 crc kubenswrapper[4786]: I1209 08:45:40.980105 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:40Z","lastTransitionTime":"2025-12-09T08:45:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:41 crc kubenswrapper[4786]: I1209 08:45:41.090152 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:41 crc kubenswrapper[4786]: I1209 08:45:41.090235 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:41 crc kubenswrapper[4786]: I1209 08:45:41.090290 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:41 crc kubenswrapper[4786]: I1209 08:45:41.090321 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:41 crc kubenswrapper[4786]: I1209 08:45:41.090342 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:41Z","lastTransitionTime":"2025-12-09T08:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:41 crc kubenswrapper[4786]: I1209 08:45:41.188818 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 08:45:41 crc kubenswrapper[4786]: I1209 08:45:41.189090 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 08:45:41 crc kubenswrapper[4786]: I1209 08:45:41.189188 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-v58s4" Dec 09 08:45:41 crc kubenswrapper[4786]: E1209 08:45:41.189309 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 08:45:41 crc kubenswrapper[4786]: E1209 08:45:41.189518 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v58s4" podUID="e6f68306-ac39-4d61-8c27-12d69cc49a4f" Dec 09 08:45:41 crc kubenswrapper[4786]: E1209 08:45:41.189717 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 08:45:41 crc kubenswrapper[4786]: I1209 08:45:41.194149 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:41 crc kubenswrapper[4786]: I1209 08:45:41.194217 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:41 crc kubenswrapper[4786]: I1209 08:45:41.194242 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:41 crc kubenswrapper[4786]: I1209 08:45:41.194273 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:41 crc kubenswrapper[4786]: I1209 08:45:41.194296 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:41Z","lastTransitionTime":"2025-12-09T08:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:41 crc kubenswrapper[4786]: I1209 08:45:41.297263 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:41 crc kubenswrapper[4786]: I1209 08:45:41.297355 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:41 crc kubenswrapper[4786]: I1209 08:45:41.297367 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:41 crc kubenswrapper[4786]: I1209 08:45:41.297390 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:41 crc kubenswrapper[4786]: I1209 08:45:41.297405 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:41Z","lastTransitionTime":"2025-12-09T08:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:41 crc kubenswrapper[4786]: I1209 08:45:41.400534 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:41 crc kubenswrapper[4786]: I1209 08:45:41.400595 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:41 crc kubenswrapper[4786]: I1209 08:45:41.400611 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:41 crc kubenswrapper[4786]: I1209 08:45:41.400633 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:41 crc kubenswrapper[4786]: I1209 08:45:41.400648 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:41Z","lastTransitionTime":"2025-12-09T08:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:41 crc kubenswrapper[4786]: I1209 08:45:41.504379 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:41 crc kubenswrapper[4786]: I1209 08:45:41.504492 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:41 crc kubenswrapper[4786]: I1209 08:45:41.504520 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:41 crc kubenswrapper[4786]: I1209 08:45:41.504543 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:41 crc kubenswrapper[4786]: I1209 08:45:41.504558 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:41Z","lastTransitionTime":"2025-12-09T08:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:41 crc kubenswrapper[4786]: I1209 08:45:41.607591 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:41 crc kubenswrapper[4786]: I1209 08:45:41.607670 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:41 crc kubenswrapper[4786]: I1209 08:45:41.607714 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:41 crc kubenswrapper[4786]: I1209 08:45:41.607746 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:41 crc kubenswrapper[4786]: I1209 08:45:41.607769 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:41Z","lastTransitionTime":"2025-12-09T08:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:41 crc kubenswrapper[4786]: I1209 08:45:41.710679 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:41 crc kubenswrapper[4786]: I1209 08:45:41.710740 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:41 crc kubenswrapper[4786]: I1209 08:45:41.710764 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:41 crc kubenswrapper[4786]: I1209 08:45:41.710797 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:41 crc kubenswrapper[4786]: I1209 08:45:41.710819 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:41Z","lastTransitionTime":"2025-12-09T08:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:41 crc kubenswrapper[4786]: I1209 08:45:41.813943 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:41 crc kubenswrapper[4786]: I1209 08:45:41.814022 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:41 crc kubenswrapper[4786]: I1209 08:45:41.814051 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:41 crc kubenswrapper[4786]: I1209 08:45:41.814087 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:41 crc kubenswrapper[4786]: I1209 08:45:41.814114 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:41Z","lastTransitionTime":"2025-12-09T08:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:41 crc kubenswrapper[4786]: I1209 08:45:41.918812 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:41 crc kubenswrapper[4786]: I1209 08:45:41.918886 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:41 crc kubenswrapper[4786]: I1209 08:45:41.918903 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:41 crc kubenswrapper[4786]: I1209 08:45:41.918929 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:41 crc kubenswrapper[4786]: I1209 08:45:41.918946 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:41Z","lastTransitionTime":"2025-12-09T08:45:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:42 crc kubenswrapper[4786]: I1209 08:45:42.022213 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:42 crc kubenswrapper[4786]: I1209 08:45:42.022271 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:42 crc kubenswrapper[4786]: I1209 08:45:42.022288 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:42 crc kubenswrapper[4786]: I1209 08:45:42.022311 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:42 crc kubenswrapper[4786]: I1209 08:45:42.022329 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:42Z","lastTransitionTime":"2025-12-09T08:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:42 crc kubenswrapper[4786]: I1209 08:45:42.126130 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:42 crc kubenswrapper[4786]: I1209 08:45:42.126208 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:42 crc kubenswrapper[4786]: I1209 08:45:42.126221 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:42 crc kubenswrapper[4786]: I1209 08:45:42.126244 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:42 crc kubenswrapper[4786]: I1209 08:45:42.126264 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:42Z","lastTransitionTime":"2025-12-09T08:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:42 crc kubenswrapper[4786]: I1209 08:45:42.187238 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:45:42 crc kubenswrapper[4786]: E1209 08:45:42.187526 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 08:45:42 crc kubenswrapper[4786]: I1209 08:45:42.228662 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:42 crc kubenswrapper[4786]: I1209 08:45:42.228703 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:42 crc kubenswrapper[4786]: I1209 08:45:42.228715 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:42 crc kubenswrapper[4786]: I1209 08:45:42.228734 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:42 crc kubenswrapper[4786]: I1209 08:45:42.228748 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:42Z","lastTransitionTime":"2025-12-09T08:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:42 crc kubenswrapper[4786]: I1209 08:45:42.331818 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:42 crc kubenswrapper[4786]: I1209 08:45:42.331886 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:42 crc kubenswrapper[4786]: I1209 08:45:42.331900 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:42 crc kubenswrapper[4786]: I1209 08:45:42.331923 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:42 crc kubenswrapper[4786]: I1209 08:45:42.331990 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:42Z","lastTransitionTime":"2025-12-09T08:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:42 crc kubenswrapper[4786]: I1209 08:45:42.435251 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:42 crc kubenswrapper[4786]: I1209 08:45:42.435345 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:42 crc kubenswrapper[4786]: I1209 08:45:42.435364 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:42 crc kubenswrapper[4786]: I1209 08:45:42.435388 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:42 crc kubenswrapper[4786]: I1209 08:45:42.435405 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:42Z","lastTransitionTime":"2025-12-09T08:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:42 crc kubenswrapper[4786]: I1209 08:45:42.538338 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:42 crc kubenswrapper[4786]: I1209 08:45:42.538397 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:42 crc kubenswrapper[4786]: I1209 08:45:42.538415 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:42 crc kubenswrapper[4786]: I1209 08:45:42.538533 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:42 crc kubenswrapper[4786]: I1209 08:45:42.538551 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:42Z","lastTransitionTime":"2025-12-09T08:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:42 crc kubenswrapper[4786]: I1209 08:45:42.641595 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:42 crc kubenswrapper[4786]: I1209 08:45:42.641642 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:42 crc kubenswrapper[4786]: I1209 08:45:42.641652 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:42 crc kubenswrapper[4786]: I1209 08:45:42.641667 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:42 crc kubenswrapper[4786]: I1209 08:45:42.641676 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:42Z","lastTransitionTime":"2025-12-09T08:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:42 crc kubenswrapper[4786]: I1209 08:45:42.744195 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:42 crc kubenswrapper[4786]: I1209 08:45:42.744252 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:42 crc kubenswrapper[4786]: I1209 08:45:42.744271 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:42 crc kubenswrapper[4786]: I1209 08:45:42.744289 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:42 crc kubenswrapper[4786]: I1209 08:45:42.744300 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:42Z","lastTransitionTime":"2025-12-09T08:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:42 crc kubenswrapper[4786]: I1209 08:45:42.847852 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:42 crc kubenswrapper[4786]: I1209 08:45:42.847911 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:42 crc kubenswrapper[4786]: I1209 08:45:42.847925 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:42 crc kubenswrapper[4786]: I1209 08:45:42.847945 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:42 crc kubenswrapper[4786]: I1209 08:45:42.847959 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:42Z","lastTransitionTime":"2025-12-09T08:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:42 crc kubenswrapper[4786]: I1209 08:45:42.950333 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:42 crc kubenswrapper[4786]: I1209 08:45:42.950400 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:42 crc kubenswrapper[4786]: I1209 08:45:42.950416 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:42 crc kubenswrapper[4786]: I1209 08:45:42.950474 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:42 crc kubenswrapper[4786]: I1209 08:45:42.950492 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:42Z","lastTransitionTime":"2025-12-09T08:45:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:43 crc kubenswrapper[4786]: I1209 08:45:43.053357 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:43 crc kubenswrapper[4786]: I1209 08:45:43.053475 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:43 crc kubenswrapper[4786]: I1209 08:45:43.053497 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:43 crc kubenswrapper[4786]: I1209 08:45:43.053526 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:43 crc kubenswrapper[4786]: I1209 08:45:43.053545 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:43Z","lastTransitionTime":"2025-12-09T08:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:43 crc kubenswrapper[4786]: I1209 08:45:43.155574 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:43 crc kubenswrapper[4786]: I1209 08:45:43.155681 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:43 crc kubenswrapper[4786]: I1209 08:45:43.155694 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:43 crc kubenswrapper[4786]: I1209 08:45:43.155711 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:43 crc kubenswrapper[4786]: I1209 08:45:43.155722 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:43Z","lastTransitionTime":"2025-12-09T08:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:43 crc kubenswrapper[4786]: I1209 08:45:43.187364 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 08:45:43 crc kubenswrapper[4786]: I1209 08:45:43.187420 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v58s4" Dec 09 08:45:43 crc kubenswrapper[4786]: E1209 08:45:43.187585 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 08:45:43 crc kubenswrapper[4786]: I1209 08:45:43.187364 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 08:45:43 crc kubenswrapper[4786]: E1209 08:45:43.187874 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v58s4" podUID="e6f68306-ac39-4d61-8c27-12d69cc49a4f" Dec 09 08:45:43 crc kubenswrapper[4786]: E1209 08:45:43.187977 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 08:45:43 crc kubenswrapper[4786]: I1209 08:45:43.259021 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:43 crc kubenswrapper[4786]: I1209 08:45:43.259067 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:43 crc kubenswrapper[4786]: I1209 08:45:43.259077 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:43 crc kubenswrapper[4786]: I1209 08:45:43.259094 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:43 crc kubenswrapper[4786]: I1209 08:45:43.259109 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:43Z","lastTransitionTime":"2025-12-09T08:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:43 crc kubenswrapper[4786]: I1209 08:45:43.389134 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:43 crc kubenswrapper[4786]: I1209 08:45:43.389216 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:43 crc kubenswrapper[4786]: I1209 08:45:43.389252 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:43 crc kubenswrapper[4786]: I1209 08:45:43.389281 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:43 crc kubenswrapper[4786]: I1209 08:45:43.389304 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:43Z","lastTransitionTime":"2025-12-09T08:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:43 crc kubenswrapper[4786]: I1209 08:45:43.491982 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:43 crc kubenswrapper[4786]: I1209 08:45:43.492025 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:43 crc kubenswrapper[4786]: I1209 08:45:43.492038 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:43 crc kubenswrapper[4786]: I1209 08:45:43.492053 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:43 crc kubenswrapper[4786]: I1209 08:45:43.492064 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:43Z","lastTransitionTime":"2025-12-09T08:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:43 crc kubenswrapper[4786]: I1209 08:45:43.595886 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:43 crc kubenswrapper[4786]: I1209 08:45:43.595970 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:43 crc kubenswrapper[4786]: I1209 08:45:43.595982 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:43 crc kubenswrapper[4786]: I1209 08:45:43.595999 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:43 crc kubenswrapper[4786]: I1209 08:45:43.596014 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:43Z","lastTransitionTime":"2025-12-09T08:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:43 crc kubenswrapper[4786]: I1209 08:45:43.698896 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:43 crc kubenswrapper[4786]: I1209 08:45:43.698949 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:43 crc kubenswrapper[4786]: I1209 08:45:43.698963 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:43 crc kubenswrapper[4786]: I1209 08:45:43.698994 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:43 crc kubenswrapper[4786]: I1209 08:45:43.699007 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:43Z","lastTransitionTime":"2025-12-09T08:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:43 crc kubenswrapper[4786]: I1209 08:45:43.802218 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:43 crc kubenswrapper[4786]: I1209 08:45:43.802297 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:43 crc kubenswrapper[4786]: I1209 08:45:43.802321 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:43 crc kubenswrapper[4786]: I1209 08:45:43.802355 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:43 crc kubenswrapper[4786]: I1209 08:45:43.802378 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:43Z","lastTransitionTime":"2025-12-09T08:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:43 crc kubenswrapper[4786]: I1209 08:45:43.905527 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:43 crc kubenswrapper[4786]: I1209 08:45:43.905569 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:43 crc kubenswrapper[4786]: I1209 08:45:43.905581 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:43 crc kubenswrapper[4786]: I1209 08:45:43.905599 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:43 crc kubenswrapper[4786]: I1209 08:45:43.905610 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:43Z","lastTransitionTime":"2025-12-09T08:45:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:44 crc kubenswrapper[4786]: I1209 08:45:44.009003 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:44 crc kubenswrapper[4786]: I1209 08:45:44.009067 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:44 crc kubenswrapper[4786]: I1209 08:45:44.009078 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:44 crc kubenswrapper[4786]: I1209 08:45:44.009099 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:44 crc kubenswrapper[4786]: I1209 08:45:44.009112 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:44Z","lastTransitionTime":"2025-12-09T08:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:44 crc kubenswrapper[4786]: I1209 08:45:44.113159 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:44 crc kubenswrapper[4786]: I1209 08:45:44.113225 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:44 crc kubenswrapper[4786]: I1209 08:45:44.113242 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:44 crc kubenswrapper[4786]: I1209 08:45:44.113269 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:44 crc kubenswrapper[4786]: I1209 08:45:44.113296 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:44Z","lastTransitionTime":"2025-12-09T08:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:44 crc kubenswrapper[4786]: I1209 08:45:44.188102 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:45:44 crc kubenswrapper[4786]: E1209 08:45:44.188274 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 08:45:44 crc kubenswrapper[4786]: I1209 08:45:44.217046 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:44 crc kubenswrapper[4786]: I1209 08:45:44.217173 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:44 crc kubenswrapper[4786]: I1209 08:45:44.217189 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:44 crc kubenswrapper[4786]: I1209 08:45:44.217271 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:44 crc kubenswrapper[4786]: I1209 08:45:44.217285 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:44Z","lastTransitionTime":"2025-12-09T08:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:44 crc kubenswrapper[4786]: I1209 08:45:44.319920 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:44 crc kubenswrapper[4786]: I1209 08:45:44.319969 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:44 crc kubenswrapper[4786]: I1209 08:45:44.319978 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:44 crc kubenswrapper[4786]: I1209 08:45:44.319993 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:44 crc kubenswrapper[4786]: I1209 08:45:44.320004 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:44Z","lastTransitionTime":"2025-12-09T08:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:44 crc kubenswrapper[4786]: I1209 08:45:44.423120 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:44 crc kubenswrapper[4786]: I1209 08:45:44.423175 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:44 crc kubenswrapper[4786]: I1209 08:45:44.423187 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:44 crc kubenswrapper[4786]: I1209 08:45:44.423206 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:44 crc kubenswrapper[4786]: I1209 08:45:44.423218 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:44Z","lastTransitionTime":"2025-12-09T08:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:44 crc kubenswrapper[4786]: I1209 08:45:44.526451 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:44 crc kubenswrapper[4786]: I1209 08:45:44.526484 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:44 crc kubenswrapper[4786]: I1209 08:45:44.526492 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:44 crc kubenswrapper[4786]: I1209 08:45:44.526505 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:44 crc kubenswrapper[4786]: I1209 08:45:44.526515 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:44Z","lastTransitionTime":"2025-12-09T08:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:44 crc kubenswrapper[4786]: I1209 08:45:44.629217 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:44 crc kubenswrapper[4786]: I1209 08:45:44.629269 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:44 crc kubenswrapper[4786]: I1209 08:45:44.629281 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:44 crc kubenswrapper[4786]: I1209 08:45:44.629305 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:44 crc kubenswrapper[4786]: I1209 08:45:44.629314 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:44Z","lastTransitionTime":"2025-12-09T08:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:44 crc kubenswrapper[4786]: I1209 08:45:44.733129 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:44 crc kubenswrapper[4786]: I1209 08:45:44.733199 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:44 crc kubenswrapper[4786]: I1209 08:45:44.733221 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:44 crc kubenswrapper[4786]: I1209 08:45:44.733252 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:44 crc kubenswrapper[4786]: I1209 08:45:44.733288 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:44Z","lastTransitionTime":"2025-12-09T08:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:44 crc kubenswrapper[4786]: I1209 08:45:44.836444 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:44 crc kubenswrapper[4786]: I1209 08:45:44.836515 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:44 crc kubenswrapper[4786]: I1209 08:45:44.836531 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:44 crc kubenswrapper[4786]: I1209 08:45:44.836549 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:44 crc kubenswrapper[4786]: I1209 08:45:44.836559 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:44Z","lastTransitionTime":"2025-12-09T08:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:44 crc kubenswrapper[4786]: I1209 08:45:44.939672 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:44 crc kubenswrapper[4786]: I1209 08:45:44.939750 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:44 crc kubenswrapper[4786]: I1209 08:45:44.939774 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:44 crc kubenswrapper[4786]: I1209 08:45:44.939804 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:44 crc kubenswrapper[4786]: I1209 08:45:44.939827 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:44Z","lastTransitionTime":"2025-12-09T08:45:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.044137 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.044223 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.044248 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.044282 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.044306 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:45Z","lastTransitionTime":"2025-12-09T08:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.148257 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.148322 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.148337 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.148357 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.148371 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:45Z","lastTransitionTime":"2025-12-09T08:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.187610 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.187682 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v58s4" Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.187610 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 08:45:45 crc kubenswrapper[4786]: E1209 08:45:45.188624 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 08:45:45 crc kubenswrapper[4786]: E1209 08:45:45.188861 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v58s4" podUID="e6f68306-ac39-4d61-8c27-12d69cc49a4f" Dec 09 08:45:45 crc kubenswrapper[4786]: E1209 08:45:45.189057 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.251843 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.251926 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.251951 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.251986 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.252008 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:45Z","lastTransitionTime":"2025-12-09T08:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.258915 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-zbwb7" podStartSLOduration=88.258891154 podStartE2EDuration="1m28.258891154s" podCreationTimestamp="2025-12-09 08:44:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:45:45.257555751 +0000 UTC m=+111.141177027" watchObservedRunningTime="2025-12-09 08:45:45.258891154 +0000 UTC m=+111.142512470" Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.259245 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podStartSLOduration=88.259236652 podStartE2EDuration="1m28.259236652s" podCreationTimestamp="2025-12-09 08:44:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:45:45.2317378 +0000 UTC m=+111.115359126" watchObservedRunningTime="2025-12-09 08:45:45.259236652 +0000 UTC m=+111.142857908" Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.294186 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=89.2941632 podStartE2EDuration="1m29.2941632s" podCreationTimestamp="2025-12-09 08:44:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:45:45.277752302 +0000 UTC m=+111.161373558" watchObservedRunningTime="2025-12-09 08:45:45.2941632 +0000 UTC m=+111.177784436" Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.315060 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=42.315031849 podStartE2EDuration="42.315031849s" podCreationTimestamp="2025-12-09 08:45:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:45:45.294542039 +0000 UTC m=+111.178163335" watchObservedRunningTime="2025-12-09 08:45:45.315031849 +0000 UTC m=+111.198653095" Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.355896 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.355949 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.355962 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.355981 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.355998 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:45Z","lastTransitionTime":"2025-12-09T08:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.374104 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-prw2c" podStartSLOduration=88.374034314 podStartE2EDuration="1m28.374034314s" podCreationTimestamp="2025-12-09 08:44:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:45:45.373978033 +0000 UTC m=+111.257599279" watchObservedRunningTime="2025-12-09 08:45:45.374034314 +0000 UTC m=+111.257655570" Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.441207 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-jwjmr" podStartSLOduration=87.441182722 podStartE2EDuration="1m27.441182722s" podCreationTimestamp="2025-12-09 08:44:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:45:45.423932334 +0000 UTC m=+111.307553590" watchObservedRunningTime="2025-12-09 08:45:45.441182722 +0000 UTC m=+111.324803968" Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.453935 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=86.453908238 podStartE2EDuration="1m26.453908238s" podCreationTimestamp="2025-12-09 08:44:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:45:45.442004533 +0000 UTC m=+111.325625799" watchObservedRunningTime="2025-12-09 08:45:45.453908238 +0000 UTC m=+111.337529484" Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.458822 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:45 crc kubenswrapper[4786]: 
I1209 08:45:45.458873 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.458888 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.458906 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.458920 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:45Z","lastTransitionTime":"2025-12-09T08:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.479916 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=62.479878134 podStartE2EDuration="1m2.479878134s" podCreationTimestamp="2025-12-09 08:44:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:45:45.453736614 +0000 UTC m=+111.337357850" watchObservedRunningTime="2025-12-09 08:45:45.479878134 +0000 UTC m=+111.363499360" Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.480081 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=30.480076439 podStartE2EDuration="30.480076439s" podCreationTimestamp="2025-12-09 08:45:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:45:45.478243624 
+0000 UTC m=+111.361864850" watchObservedRunningTime="2025-12-09 08:45:45.480076439 +0000 UTC m=+111.363697675" Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.505782 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-mq8pp" podStartSLOduration=89.505728706 podStartE2EDuration="1m29.505728706s" podCreationTimestamp="2025-12-09 08:44:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:45:45.493301327 +0000 UTC m=+111.376922573" watchObservedRunningTime="2025-12-09 08:45:45.505728706 +0000 UTC m=+111.389349952" Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.538030 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-27hfj" podStartSLOduration=88.538002297 podStartE2EDuration="1m28.538002297s" podCreationTimestamp="2025-12-09 08:44:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:45:45.537342181 +0000 UTC m=+111.420963407" watchObservedRunningTime="2025-12-09 08:45:45.538002297 +0000 UTC m=+111.421623533" Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.561759 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.561824 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.561835 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.561849 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.561860 4786 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:45Z","lastTransitionTime":"2025-12-09T08:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.664609 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.664713 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.664735 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.664759 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.664776 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:45Z","lastTransitionTime":"2025-12-09T08:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.768348 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.768451 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.768471 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.768499 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.768518 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:45Z","lastTransitionTime":"2025-12-09T08:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.871069 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.871578 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.871916 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.872169 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.872405 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:45Z","lastTransitionTime":"2025-12-09T08:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.975824 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.975892 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.975904 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.975925 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:45 crc kubenswrapper[4786]: I1209 08:45:45.975937 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:45Z","lastTransitionTime":"2025-12-09T08:45:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:46 crc kubenswrapper[4786]: I1209 08:45:46.079033 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:46 crc kubenswrapper[4786]: I1209 08:45:46.079099 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:46 crc kubenswrapper[4786]: I1209 08:45:46.079114 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:46 crc kubenswrapper[4786]: I1209 08:45:46.079143 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:46 crc kubenswrapper[4786]: I1209 08:45:46.079161 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:46Z","lastTransitionTime":"2025-12-09T08:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:46 crc kubenswrapper[4786]: I1209 08:45:46.182345 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:46 crc kubenswrapper[4786]: I1209 08:45:46.182490 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:46 crc kubenswrapper[4786]: I1209 08:45:46.182521 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:46 crc kubenswrapper[4786]: I1209 08:45:46.182558 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:46 crc kubenswrapper[4786]: I1209 08:45:46.182584 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:46Z","lastTransitionTime":"2025-12-09T08:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:46 crc kubenswrapper[4786]: I1209 08:45:46.187525 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:45:46 crc kubenswrapper[4786]: E1209 08:45:46.187687 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 08:45:46 crc kubenswrapper[4786]: I1209 08:45:46.285883 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:46 crc kubenswrapper[4786]: I1209 08:45:46.285927 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:46 crc kubenswrapper[4786]: I1209 08:45:46.285939 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:46 crc kubenswrapper[4786]: I1209 08:45:46.285955 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:46 crc kubenswrapper[4786]: I1209 08:45:46.285966 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:46Z","lastTransitionTime":"2025-12-09T08:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:46 crc kubenswrapper[4786]: I1209 08:45:46.388562 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:46 crc kubenswrapper[4786]: I1209 08:45:46.388620 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:46 crc kubenswrapper[4786]: I1209 08:45:46.388633 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:46 crc kubenswrapper[4786]: I1209 08:45:46.388658 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:46 crc kubenswrapper[4786]: I1209 08:45:46.388673 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:46Z","lastTransitionTime":"2025-12-09T08:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:46 crc kubenswrapper[4786]: I1209 08:45:46.491950 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:46 crc kubenswrapper[4786]: I1209 08:45:46.492046 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:46 crc kubenswrapper[4786]: I1209 08:45:46.492084 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:46 crc kubenswrapper[4786]: I1209 08:45:46.492116 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:46 crc kubenswrapper[4786]: I1209 08:45:46.492140 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:46Z","lastTransitionTime":"2025-12-09T08:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:46 crc kubenswrapper[4786]: I1209 08:45:46.594348 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:46 crc kubenswrapper[4786]: I1209 08:45:46.594404 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:46 crc kubenswrapper[4786]: I1209 08:45:46.594415 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:46 crc kubenswrapper[4786]: I1209 08:45:46.594459 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:46 crc kubenswrapper[4786]: I1209 08:45:46.594474 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:46Z","lastTransitionTime":"2025-12-09T08:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:46 crc kubenswrapper[4786]: I1209 08:45:46.697908 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:46 crc kubenswrapper[4786]: I1209 08:45:46.697964 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:46 crc kubenswrapper[4786]: I1209 08:45:46.697975 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:46 crc kubenswrapper[4786]: I1209 08:45:46.697993 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:46 crc kubenswrapper[4786]: I1209 08:45:46.698008 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:46Z","lastTransitionTime":"2025-12-09T08:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:46 crc kubenswrapper[4786]: I1209 08:45:46.801364 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:46 crc kubenswrapper[4786]: I1209 08:45:46.801455 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:46 crc kubenswrapper[4786]: I1209 08:45:46.801473 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:46 crc kubenswrapper[4786]: I1209 08:45:46.801512 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:46 crc kubenswrapper[4786]: I1209 08:45:46.801529 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:46Z","lastTransitionTime":"2025-12-09T08:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:46 crc kubenswrapper[4786]: I1209 08:45:46.905011 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:46 crc kubenswrapper[4786]: I1209 08:45:46.905086 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:46 crc kubenswrapper[4786]: I1209 08:45:46.905109 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:46 crc kubenswrapper[4786]: I1209 08:45:46.905137 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:46 crc kubenswrapper[4786]: I1209 08:45:46.905158 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:46Z","lastTransitionTime":"2025-12-09T08:45:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:47 crc kubenswrapper[4786]: I1209 08:45:47.008240 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:47 crc kubenswrapper[4786]: I1209 08:45:47.008324 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:47 crc kubenswrapper[4786]: I1209 08:45:47.008342 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:47 crc kubenswrapper[4786]: I1209 08:45:47.008370 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:47 crc kubenswrapper[4786]: I1209 08:45:47.008387 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:47Z","lastTransitionTime":"2025-12-09T08:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:47 crc kubenswrapper[4786]: I1209 08:45:47.111449 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:47 crc kubenswrapper[4786]: I1209 08:45:47.111483 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:47 crc kubenswrapper[4786]: I1209 08:45:47.111516 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:47 crc kubenswrapper[4786]: I1209 08:45:47.111536 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:47 crc kubenswrapper[4786]: I1209 08:45:47.111550 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:47Z","lastTransitionTime":"2025-12-09T08:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:47 crc kubenswrapper[4786]: I1209 08:45:47.188002 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 08:45:47 crc kubenswrapper[4786]: I1209 08:45:47.188115 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v58s4" Dec 09 08:45:47 crc kubenswrapper[4786]: I1209 08:45:47.188121 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 08:45:47 crc kubenswrapper[4786]: E1209 08:45:47.191759 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 08:45:47 crc kubenswrapper[4786]: E1209 08:45:47.192255 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v58s4" podUID="e6f68306-ac39-4d61-8c27-12d69cc49a4f" Dec 09 08:45:47 crc kubenswrapper[4786]: E1209 08:45:47.192612 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 08:45:47 crc kubenswrapper[4786]: I1209 08:45:47.193354 4786 scope.go:117] "RemoveContainer" containerID="550395f05c51e2fe4115f3e461577ebaab689ef68502025a76ca1bbeef9d6c2e" Dec 09 08:45:47 crc kubenswrapper[4786]: E1209 08:45:47.193601 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-7sr4q_openshift-ovn-kubernetes(c8ebe4be-af09-4f22-9dee-af5f7d34bccf)\"" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" podUID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" Dec 09 08:45:47 crc kubenswrapper[4786]: I1209 08:45:47.214315 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:47 crc kubenswrapper[4786]: I1209 08:45:47.214370 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:47 crc kubenswrapper[4786]: I1209 08:45:47.214380 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:47 crc kubenswrapper[4786]: I1209 08:45:47.214397 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:47 crc kubenswrapper[4786]: I1209 08:45:47.214416 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:47Z","lastTransitionTime":"2025-12-09T08:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:47 crc kubenswrapper[4786]: I1209 08:45:47.317395 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:47 crc kubenswrapper[4786]: I1209 08:45:47.317457 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:47 crc kubenswrapper[4786]: I1209 08:45:47.317469 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:47 crc kubenswrapper[4786]: I1209 08:45:47.317486 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:47 crc kubenswrapper[4786]: I1209 08:45:47.317499 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:47Z","lastTransitionTime":"2025-12-09T08:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:47 crc kubenswrapper[4786]: I1209 08:45:47.420950 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:47 crc kubenswrapper[4786]: I1209 08:45:47.421660 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:47 crc kubenswrapper[4786]: I1209 08:45:47.421702 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:47 crc kubenswrapper[4786]: I1209 08:45:47.421734 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:47 crc kubenswrapper[4786]: I1209 08:45:47.421758 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:47Z","lastTransitionTime":"2025-12-09T08:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:47 crc kubenswrapper[4786]: I1209 08:45:47.524810 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:47 crc kubenswrapper[4786]: I1209 08:45:47.524871 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:47 crc kubenswrapper[4786]: I1209 08:45:47.524888 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:47 crc kubenswrapper[4786]: I1209 08:45:47.524914 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:47 crc kubenswrapper[4786]: I1209 08:45:47.524932 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:47Z","lastTransitionTime":"2025-12-09T08:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:47 crc kubenswrapper[4786]: I1209 08:45:47.627717 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:47 crc kubenswrapper[4786]: I1209 08:45:47.627791 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:47 crc kubenswrapper[4786]: I1209 08:45:47.627801 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:47 crc kubenswrapper[4786]: I1209 08:45:47.627836 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:47 crc kubenswrapper[4786]: I1209 08:45:47.627848 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:47Z","lastTransitionTime":"2025-12-09T08:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:47 crc kubenswrapper[4786]: I1209 08:45:47.731196 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:47 crc kubenswrapper[4786]: I1209 08:45:47.731262 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:47 crc kubenswrapper[4786]: I1209 08:45:47.731277 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:47 crc kubenswrapper[4786]: I1209 08:45:47.731297 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:47 crc kubenswrapper[4786]: I1209 08:45:47.731312 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:47Z","lastTransitionTime":"2025-12-09T08:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:47 crc kubenswrapper[4786]: I1209 08:45:47.834268 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:47 crc kubenswrapper[4786]: I1209 08:45:47.834301 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:47 crc kubenswrapper[4786]: I1209 08:45:47.834309 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:47 crc kubenswrapper[4786]: I1209 08:45:47.834322 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:47 crc kubenswrapper[4786]: I1209 08:45:47.834332 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:47Z","lastTransitionTime":"2025-12-09T08:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:47 crc kubenswrapper[4786]: I1209 08:45:47.937087 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:47 crc kubenswrapper[4786]: I1209 08:45:47.937145 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:47 crc kubenswrapper[4786]: I1209 08:45:47.937157 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:47 crc kubenswrapper[4786]: I1209 08:45:47.937174 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:47 crc kubenswrapper[4786]: I1209 08:45:47.937188 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:47Z","lastTransitionTime":"2025-12-09T08:45:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:48 crc kubenswrapper[4786]: I1209 08:45:48.039868 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:48 crc kubenswrapper[4786]: I1209 08:45:48.039928 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:48 crc kubenswrapper[4786]: I1209 08:45:48.039950 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:48 crc kubenswrapper[4786]: I1209 08:45:48.039979 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:48 crc kubenswrapper[4786]: I1209 08:45:48.039999 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:48Z","lastTransitionTime":"2025-12-09T08:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:48 crc kubenswrapper[4786]: I1209 08:45:48.143710 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:48 crc kubenswrapper[4786]: I1209 08:45:48.143991 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:48 crc kubenswrapper[4786]: I1209 08:45:48.144009 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:48 crc kubenswrapper[4786]: I1209 08:45:48.144032 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:48 crc kubenswrapper[4786]: I1209 08:45:48.144044 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:48Z","lastTransitionTime":"2025-12-09T08:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:48 crc kubenswrapper[4786]: I1209 08:45:48.188108 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:45:48 crc kubenswrapper[4786]: E1209 08:45:48.188331 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 08:45:48 crc kubenswrapper[4786]: I1209 08:45:48.247718 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:48 crc kubenswrapper[4786]: I1209 08:45:48.247787 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:48 crc kubenswrapper[4786]: I1209 08:45:48.247804 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:48 crc kubenswrapper[4786]: I1209 08:45:48.247830 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:48 crc kubenswrapper[4786]: I1209 08:45:48.247849 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:48Z","lastTransitionTime":"2025-12-09T08:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:48 crc kubenswrapper[4786]: I1209 08:45:48.350569 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:48 crc kubenswrapper[4786]: I1209 08:45:48.350612 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:48 crc kubenswrapper[4786]: I1209 08:45:48.350632 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:48 crc kubenswrapper[4786]: I1209 08:45:48.350653 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:48 crc kubenswrapper[4786]: I1209 08:45:48.350670 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:48Z","lastTransitionTime":"2025-12-09T08:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:48 crc kubenswrapper[4786]: I1209 08:45:48.454150 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:48 crc kubenswrapper[4786]: I1209 08:45:48.454200 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:48 crc kubenswrapper[4786]: I1209 08:45:48.454210 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:48 crc kubenswrapper[4786]: I1209 08:45:48.454229 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:48 crc kubenswrapper[4786]: I1209 08:45:48.454241 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:48Z","lastTransitionTime":"2025-12-09T08:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:48 crc kubenswrapper[4786]: I1209 08:45:48.556971 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:48 crc kubenswrapper[4786]: I1209 08:45:48.557022 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:48 crc kubenswrapper[4786]: I1209 08:45:48.557040 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:48 crc kubenswrapper[4786]: I1209 08:45:48.557056 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:48 crc kubenswrapper[4786]: I1209 08:45:48.557066 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:48Z","lastTransitionTime":"2025-12-09T08:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:48 crc kubenswrapper[4786]: I1209 08:45:48.659930 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:48 crc kubenswrapper[4786]: I1209 08:45:48.659970 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:48 crc kubenswrapper[4786]: I1209 08:45:48.659979 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:48 crc kubenswrapper[4786]: I1209 08:45:48.659995 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:48 crc kubenswrapper[4786]: I1209 08:45:48.660005 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:48Z","lastTransitionTime":"2025-12-09T08:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:48 crc kubenswrapper[4786]: I1209 08:45:48.762634 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:48 crc kubenswrapper[4786]: I1209 08:45:48.762676 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:48 crc kubenswrapper[4786]: I1209 08:45:48.762690 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:48 crc kubenswrapper[4786]: I1209 08:45:48.762710 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:48 crc kubenswrapper[4786]: I1209 08:45:48.762724 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:48Z","lastTransitionTime":"2025-12-09T08:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:48 crc kubenswrapper[4786]: I1209 08:45:48.864896 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:48 crc kubenswrapper[4786]: I1209 08:45:48.864960 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:48 crc kubenswrapper[4786]: I1209 08:45:48.864978 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:48 crc kubenswrapper[4786]: I1209 08:45:48.865003 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:48 crc kubenswrapper[4786]: I1209 08:45:48.865023 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:48Z","lastTransitionTime":"2025-12-09T08:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:48 crc kubenswrapper[4786]: I1209 08:45:48.968274 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:48 crc kubenswrapper[4786]: I1209 08:45:48.968341 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:48 crc kubenswrapper[4786]: I1209 08:45:48.968362 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:48 crc kubenswrapper[4786]: I1209 08:45:48.968392 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:48 crc kubenswrapper[4786]: I1209 08:45:48.968414 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:48Z","lastTransitionTime":"2025-12-09T08:45:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:49 crc kubenswrapper[4786]: I1209 08:45:49.072192 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:49 crc kubenswrapper[4786]: I1209 08:45:49.072256 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:49 crc kubenswrapper[4786]: I1209 08:45:49.072267 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:49 crc kubenswrapper[4786]: I1209 08:45:49.072285 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:49 crc kubenswrapper[4786]: I1209 08:45:49.072312 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:49Z","lastTransitionTime":"2025-12-09T08:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:49 crc kubenswrapper[4786]: I1209 08:45:49.174877 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:49 crc kubenswrapper[4786]: I1209 08:45:49.174936 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:49 crc kubenswrapper[4786]: I1209 08:45:49.174953 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:49 crc kubenswrapper[4786]: I1209 08:45:49.174977 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:49 crc kubenswrapper[4786]: I1209 08:45:49.174995 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:49Z","lastTransitionTime":"2025-12-09T08:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:49 crc kubenswrapper[4786]: I1209 08:45:49.187515 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v58s4" Dec 09 08:45:49 crc kubenswrapper[4786]: I1209 08:45:49.187533 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 08:45:49 crc kubenswrapper[4786]: E1209 08:45:49.187722 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-v58s4" podUID="e6f68306-ac39-4d61-8c27-12d69cc49a4f" Dec 09 08:45:49 crc kubenswrapper[4786]: E1209 08:45:49.187850 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 08:45:49 crc kubenswrapper[4786]: I1209 08:45:49.188063 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 08:45:49 crc kubenswrapper[4786]: E1209 08:45:49.188319 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 08:45:49 crc kubenswrapper[4786]: I1209 08:45:49.278300 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:49 crc kubenswrapper[4786]: I1209 08:45:49.278779 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:49 crc kubenswrapper[4786]: I1209 08:45:49.278994 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:49 crc kubenswrapper[4786]: I1209 08:45:49.279159 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:49 crc kubenswrapper[4786]: I1209 08:45:49.279620 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:49Z","lastTransitionTime":"2025-12-09T08:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:49 crc kubenswrapper[4786]: I1209 08:45:49.382782 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:49 crc kubenswrapper[4786]: I1209 08:45:49.382857 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:49 crc kubenswrapper[4786]: I1209 08:45:49.382874 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:49 crc kubenswrapper[4786]: I1209 08:45:49.382900 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:49 crc kubenswrapper[4786]: I1209 08:45:49.382919 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:49Z","lastTransitionTime":"2025-12-09T08:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:49 crc kubenswrapper[4786]: I1209 08:45:49.486665 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:49 crc kubenswrapper[4786]: I1209 08:45:49.486719 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:49 crc kubenswrapper[4786]: I1209 08:45:49.486736 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:49 crc kubenswrapper[4786]: I1209 08:45:49.486760 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:49 crc kubenswrapper[4786]: I1209 08:45:49.486778 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:49Z","lastTransitionTime":"2025-12-09T08:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:49 crc kubenswrapper[4786]: I1209 08:45:49.589702 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:49 crc kubenswrapper[4786]: I1209 08:45:49.589757 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:49 crc kubenswrapper[4786]: I1209 08:45:49.589770 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:49 crc kubenswrapper[4786]: I1209 08:45:49.589788 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:49 crc kubenswrapper[4786]: I1209 08:45:49.589799 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:49Z","lastTransitionTime":"2025-12-09T08:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:49 crc kubenswrapper[4786]: I1209 08:45:49.692722 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:49 crc kubenswrapper[4786]: I1209 08:45:49.692842 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:49 crc kubenswrapper[4786]: I1209 08:45:49.692866 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:49 crc kubenswrapper[4786]: I1209 08:45:49.692895 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:49 crc kubenswrapper[4786]: I1209 08:45:49.692914 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:49Z","lastTransitionTime":"2025-12-09T08:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:49 crc kubenswrapper[4786]: I1209 08:45:49.796186 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:49 crc kubenswrapper[4786]: I1209 08:45:49.796226 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:49 crc kubenswrapper[4786]: I1209 08:45:49.796237 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:49 crc kubenswrapper[4786]: I1209 08:45:49.796254 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:49 crc kubenswrapper[4786]: I1209 08:45:49.796269 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:49Z","lastTransitionTime":"2025-12-09T08:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:49 crc kubenswrapper[4786]: I1209 08:45:49.899131 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:49 crc kubenswrapper[4786]: I1209 08:45:49.899188 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:49 crc kubenswrapper[4786]: I1209 08:45:49.899201 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:49 crc kubenswrapper[4786]: I1209 08:45:49.899221 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:49 crc kubenswrapper[4786]: I1209 08:45:49.899236 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:49Z","lastTransitionTime":"2025-12-09T08:45:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:50 crc kubenswrapper[4786]: I1209 08:45:50.001463 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:50 crc kubenswrapper[4786]: I1209 08:45:50.001713 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:50 crc kubenswrapper[4786]: I1209 08:45:50.001802 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:50 crc kubenswrapper[4786]: I1209 08:45:50.001868 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:50 crc kubenswrapper[4786]: I1209 08:45:50.001953 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:50Z","lastTransitionTime":"2025-12-09T08:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 09 08:45:50 crc kubenswrapper[4786]: I1209 08:45:50.106267 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 09 08:45:50 crc kubenswrapper[4786]: I1209 08:45:50.106615 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 09 08:45:50 crc kubenswrapper[4786]: I1209 08:45:50.106854 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 09 08:45:50 crc kubenswrapper[4786]: I1209 08:45:50.107025 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 09 08:45:50 crc kubenswrapper[4786]: I1209 08:45:50.107143 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:50Z","lastTransitionTime":"2025-12-09T08:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 09 08:45:50 crc kubenswrapper[4786]: I1209 08:45:50.187396 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:45:50 crc kubenswrapper[4786]: E1209 08:45:50.187615 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 09 08:45:50 crc kubenswrapper[4786]: I1209 08:45:50.209807 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:45:50 crc kubenswrapper[4786]: I1209 08:45:50.209896 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:45:50 crc kubenswrapper[4786]: I1209 08:45:50.209910 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:45:50 crc kubenswrapper[4786]: I1209 08:45:50.209932 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:45:50 crc kubenswrapper[4786]: I1209 08:45:50.209944 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:50Z","lastTransitionTime":"2025-12-09T08:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:45:50 crc kubenswrapper[4786]: I1209 08:45:50.312589 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:45:50 crc kubenswrapper[4786]: I1209 08:45:50.312676 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:45:50 crc kubenswrapper[4786]: I1209 08:45:50.312696 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:45:50 crc kubenswrapper[4786]: I1209 08:45:50.312717 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:45:50 crc kubenswrapper[4786]: I1209 08:45:50.312730 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:50Z","lastTransitionTime":"2025-12-09T08:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:45:50 crc kubenswrapper[4786]: I1209 08:45:50.416085 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:45:50 crc kubenswrapper[4786]: I1209 08:45:50.416468 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:45:50 crc kubenswrapper[4786]: I1209 08:45:50.416676 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:45:50 crc kubenswrapper[4786]: I1209 08:45:50.416850 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:45:50 crc kubenswrapper[4786]: I1209 08:45:50.416984 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:50Z","lastTransitionTime":"2025-12-09T08:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:45:50 crc kubenswrapper[4786]: I1209 08:45:50.520225 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:45:50 crc kubenswrapper[4786]: I1209 08:45:50.520271 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:45:50 crc kubenswrapper[4786]: I1209 08:45:50.520289 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:45:50 crc kubenswrapper[4786]: I1209 08:45:50.520312 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:45:50 crc kubenswrapper[4786]: I1209 08:45:50.520328 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:50Z","lastTransitionTime":"2025-12-09T08:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:45:50 crc kubenswrapper[4786]: I1209 08:45:50.624492 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:45:50 crc kubenswrapper[4786]: I1209 08:45:50.624542 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:45:50 crc kubenswrapper[4786]: I1209 08:45:50.624558 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:45:50 crc kubenswrapper[4786]: I1209 08:45:50.624582 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:45:50 crc kubenswrapper[4786]: I1209 08:45:50.624599 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:50Z","lastTransitionTime":"2025-12-09T08:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:45:50 crc kubenswrapper[4786]: I1209 08:45:50.728070 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:45:50 crc kubenswrapper[4786]: I1209 08:45:50.728124 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:45:50 crc kubenswrapper[4786]: I1209 08:45:50.728141 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:45:50 crc kubenswrapper[4786]: I1209 08:45:50.728166 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:45:50 crc kubenswrapper[4786]: I1209 08:45:50.728184 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:50Z","lastTransitionTime":"2025-12-09T08:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:45:50 crc kubenswrapper[4786]: I1209 08:45:50.831897 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:45:50 crc kubenswrapper[4786]: I1209 08:45:50.831971 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:45:50 crc kubenswrapper[4786]: I1209 08:45:50.831995 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:45:50 crc kubenswrapper[4786]: I1209 08:45:50.832023 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:45:50 crc kubenswrapper[4786]: I1209 08:45:50.832042 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:50Z","lastTransitionTime":"2025-12-09T08:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:45:50 crc kubenswrapper[4786]: I1209 08:45:50.934898 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:45:50 crc kubenswrapper[4786]: I1209 08:45:50.934974 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:45:50 crc kubenswrapper[4786]: I1209 08:45:50.934998 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:45:50 crc kubenswrapper[4786]: I1209 08:45:50.935028 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:45:50 crc kubenswrapper[4786]: I1209 08:45:50.935048 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:50Z","lastTransitionTime":"2025-12-09T08:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:45:50 crc kubenswrapper[4786]: I1209 08:45:50.943349 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 09 08:45:50 crc kubenswrapper[4786]: I1209 08:45:50.943481 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 09 08:45:50 crc kubenswrapper[4786]: I1209 08:45:50.943497 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 09 08:45:50 crc kubenswrapper[4786]: I1209 08:45:50.943524 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 09 08:45:50 crc kubenswrapper[4786]: I1209 08:45:50.943543 4786 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-09T08:45:50Z","lastTransitionTime":"2025-12-09T08:45:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 09 08:45:51 crc kubenswrapper[4786]: I1209 08:45:51.002228 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4z6w"]
Dec 09 08:45:51 crc kubenswrapper[4786]: I1209 08:45:51.003025 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4z6w"
Dec 09 08:45:51 crc kubenswrapper[4786]: I1209 08:45:51.006254 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Dec 09 08:45:51 crc kubenswrapper[4786]: I1209 08:45:51.007109 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Dec 09 08:45:51 crc kubenswrapper[4786]: I1209 08:45:51.007164 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Dec 09 08:45:51 crc kubenswrapper[4786]: I1209 08:45:51.007418 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Dec 09 08:45:51 crc kubenswrapper[4786]: I1209 08:45:51.120015 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/451a0c0e-8cd7-4426-927b-ea2fedc7e2c0-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-x4z6w\" (UID: \"451a0c0e-8cd7-4426-927b-ea2fedc7e2c0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4z6w"
Dec 09 08:45:51 crc kubenswrapper[4786]: I1209 08:45:51.120134 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/451a0c0e-8cd7-4426-927b-ea2fedc7e2c0-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-x4z6w\" (UID: \"451a0c0e-8cd7-4426-927b-ea2fedc7e2c0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4z6w"
Dec 09 08:45:51 crc kubenswrapper[4786]: I1209 08:45:51.120180 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/451a0c0e-8cd7-4426-927b-ea2fedc7e2c0-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-x4z6w\" (UID: \"451a0c0e-8cd7-4426-927b-ea2fedc7e2c0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4z6w"
Dec 09 08:45:51 crc kubenswrapper[4786]: I1209 08:45:51.120215 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/451a0c0e-8cd7-4426-927b-ea2fedc7e2c0-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-x4z6w\" (UID: \"451a0c0e-8cd7-4426-927b-ea2fedc7e2c0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4z6w"
Dec 09 08:45:51 crc kubenswrapper[4786]: I1209 08:45:51.120263 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/451a0c0e-8cd7-4426-927b-ea2fedc7e2c0-service-ca\") pod \"cluster-version-operator-5c965bbfc6-x4z6w\" (UID: \"451a0c0e-8cd7-4426-927b-ea2fedc7e2c0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4z6w"
Dec 09 08:45:51 crc kubenswrapper[4786]: I1209 08:45:51.187576 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 08:45:51 crc kubenswrapper[4786]: I1209 08:45:51.187754 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 08:45:51 crc kubenswrapper[4786]: I1209 08:45:51.188306 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v58s4"
Dec 09 08:45:51 crc kubenswrapper[4786]: E1209 08:45:51.188704 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 09 08:45:51 crc kubenswrapper[4786]: E1209 08:45:51.188998 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v58s4" podUID="e6f68306-ac39-4d61-8c27-12d69cc49a4f"
Dec 09 08:45:51 crc kubenswrapper[4786]: E1209 08:45:51.189357 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 09 08:45:51 crc kubenswrapper[4786]: I1209 08:45:51.221863 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/451a0c0e-8cd7-4426-927b-ea2fedc7e2c0-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-x4z6w\" (UID: \"451a0c0e-8cd7-4426-927b-ea2fedc7e2c0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4z6w"
Dec 09 08:45:51 crc kubenswrapper[4786]: I1209 08:45:51.221930 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/451a0c0e-8cd7-4426-927b-ea2fedc7e2c0-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-x4z6w\" (UID: \"451a0c0e-8cd7-4426-927b-ea2fedc7e2c0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4z6w"
Dec 09 08:45:51 crc kubenswrapper[4786]: I1209 08:45:51.221958 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/451a0c0e-8cd7-4426-927b-ea2fedc7e2c0-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-x4z6w\" (UID: \"451a0c0e-8cd7-4426-927b-ea2fedc7e2c0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4z6w"
Dec 09 08:45:51 crc kubenswrapper[4786]: I1209 08:45:51.221995 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/451a0c0e-8cd7-4426-927b-ea2fedc7e2c0-service-ca\") pod \"cluster-version-operator-5c965bbfc6-x4z6w\" (UID: \"451a0c0e-8cd7-4426-927b-ea2fedc7e2c0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4z6w"
Dec 09 08:45:51 crc kubenswrapper[4786]: I1209 08:45:51.222037 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/451a0c0e-8cd7-4426-927b-ea2fedc7e2c0-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-x4z6w\" (UID: \"451a0c0e-8cd7-4426-927b-ea2fedc7e2c0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4z6w"
Dec 09 08:45:51 crc kubenswrapper[4786]: I1209 08:45:51.222054 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/451a0c0e-8cd7-4426-927b-ea2fedc7e2c0-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-x4z6w\" (UID: \"451a0c0e-8cd7-4426-927b-ea2fedc7e2c0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4z6w"
Dec 09 08:45:51 crc kubenswrapper[4786]: I1209 08:45:51.222144 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/451a0c0e-8cd7-4426-927b-ea2fedc7e2c0-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-x4z6w\" (UID: \"451a0c0e-8cd7-4426-927b-ea2fedc7e2c0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4z6w"
Dec 09 08:45:51 crc kubenswrapper[4786]: I1209 08:45:51.223400 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/451a0c0e-8cd7-4426-927b-ea2fedc7e2c0-service-ca\") pod \"cluster-version-operator-5c965bbfc6-x4z6w\" (UID: \"451a0c0e-8cd7-4426-927b-ea2fedc7e2c0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4z6w"
Dec 09 08:45:51 crc kubenswrapper[4786]: I1209 08:45:51.234995 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/451a0c0e-8cd7-4426-927b-ea2fedc7e2c0-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-x4z6w\" (UID: \"451a0c0e-8cd7-4426-927b-ea2fedc7e2c0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4z6w"
Dec 09 08:45:51 crc kubenswrapper[4786]: I1209 08:45:51.241049 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/451a0c0e-8cd7-4426-927b-ea2fedc7e2c0-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-x4z6w\" (UID: \"451a0c0e-8cd7-4426-927b-ea2fedc7e2c0\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4z6w"
Dec 09 08:45:51 crc kubenswrapper[4786]: I1209 08:45:51.325311 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4z6w"
Dec 09 08:45:52 crc kubenswrapper[4786]: I1209 08:45:52.180026 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4z6w" event={"ID":"451a0c0e-8cd7-4426-927b-ea2fedc7e2c0","Type":"ContainerStarted","Data":"1f16e01fba1936b32f946988880fceb95989c06e05d700da365d44cf09665f7b"}
Dec 09 08:45:52 crc kubenswrapper[4786]: I1209 08:45:52.180483 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4z6w" event={"ID":"451a0c0e-8cd7-4426-927b-ea2fedc7e2c0","Type":"ContainerStarted","Data":"1c723fbb835d39f0bf74a945927d82582776eca7bff97e75db630ae83fa0fa6b"}
Dec 09 08:45:52 crc kubenswrapper[4786]: I1209 08:45:52.187537 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 08:45:52 crc kubenswrapper[4786]: E1209 08:45:52.187794 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 09 08:45:52 crc kubenswrapper[4786]: I1209 08:45:52.201085 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-x4z6w" podStartSLOduration=95.201067828 podStartE2EDuration="1m35.201067828s" podCreationTimestamp="2025-12-09 08:44:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:45:52.197439768 +0000 UTC m=+118.081061024" watchObservedRunningTime="2025-12-09 08:45:52.201067828 +0000 UTC m=+118.084689064"
Dec 09 08:45:53 crc kubenswrapper[4786]: I1209 08:45:53.187970 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 08:45:53 crc kubenswrapper[4786]: I1209 08:45:53.188043 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 08:45:53 crc kubenswrapper[4786]: I1209 08:45:53.187971 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v58s4"
Dec 09 08:45:53 crc kubenswrapper[4786]: E1209 08:45:53.188122 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 09 08:45:53 crc kubenswrapper[4786]: E1209 08:45:53.188230 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v58s4" podUID="e6f68306-ac39-4d61-8c27-12d69cc49a4f"
Dec 09 08:45:53 crc kubenswrapper[4786]: E1209 08:45:53.188276 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 09 08:45:54 crc kubenswrapper[4786]: I1209 08:45:54.187548 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 08:45:54 crc kubenswrapper[4786]: E1209 08:45:54.187899 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 09 08:45:54 crc kubenswrapper[4786]: I1209 08:45:54.192151 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-27hfj_a0a865e2-8504-473d-a23f-fc682d053a9f/kube-multus/1.log"
Dec 09 08:45:54 crc kubenswrapper[4786]: I1209 08:45:54.192975 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-27hfj_a0a865e2-8504-473d-a23f-fc682d053a9f/kube-multus/0.log"
Dec 09 08:45:54 crc kubenswrapper[4786]: I1209 08:45:54.193072 4786 generic.go:334] "Generic (PLEG): container finished" podID="a0a865e2-8504-473d-a23f-fc682d053a9f" containerID="93a181a8bffa8c0699688d35605838622cbb0c39dc19d52a163484d5fddc2406" exitCode=1
Dec 09 08:45:54 crc kubenswrapper[4786]: I1209 08:45:54.193144 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-27hfj" event={"ID":"a0a865e2-8504-473d-a23f-fc682d053a9f","Type":"ContainerDied","Data":"93a181a8bffa8c0699688d35605838622cbb0c39dc19d52a163484d5fddc2406"}
Dec 09 08:45:54 crc kubenswrapper[4786]: I1209 08:45:54.193370 4786 scope.go:117] "RemoveContainer" containerID="2e7465beaa6a3df0b0ae7338d3035b58b92078b8d0b9646f5b59eccb66b4a505"
Dec 09 08:45:54 crc kubenswrapper[4786]: I1209 08:45:54.194181 4786 scope.go:117] "RemoveContainer" containerID="93a181a8bffa8c0699688d35605838622cbb0c39dc19d52a163484d5fddc2406"
Dec 09 08:45:54 crc kubenswrapper[4786]: E1209 08:45:54.194511 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-27hfj_openshift-multus(a0a865e2-8504-473d-a23f-fc682d053a9f)\"" pod="openshift-multus/multus-27hfj" podUID="a0a865e2-8504-473d-a23f-fc682d053a9f"
Dec 09 08:45:55 crc kubenswrapper[4786]: E1209 08:45:55.076200 4786 kubelet_node_status.go:497] "Node not becoming ready in time after startup"
Dec 09 08:45:55 crc kubenswrapper[4786]: I1209 08:45:55.188777 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 08:45:55 crc kubenswrapper[4786]: I1209 08:45:55.188806 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 08:45:55 crc kubenswrapper[4786]: E1209 08:45:55.193063 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 09 08:45:55 crc kubenswrapper[4786]: I1209 08:45:55.193145 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v58s4"
Dec 09 08:45:55 crc kubenswrapper[4786]: E1209 08:45:55.193292 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 09 08:45:55 crc kubenswrapper[4786]: E1209 08:45:55.193975 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v58s4" podUID="e6f68306-ac39-4d61-8c27-12d69cc49a4f"
Dec 09 08:45:55 crc kubenswrapper[4786]: I1209 08:45:55.198579 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-27hfj_a0a865e2-8504-473d-a23f-fc682d053a9f/kube-multus/1.log"
Dec 09 08:45:55 crc kubenswrapper[4786]: E1209 08:45:55.285703 4786 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Dec 09 08:45:56 crc kubenswrapper[4786]: I1209 08:45:56.188153 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 08:45:56 crc kubenswrapper[4786]: E1209 08:45:56.188363 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 09 08:45:57 crc kubenswrapper[4786]: I1209 08:45:57.187685 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 08:45:57 crc kubenswrapper[4786]: I1209 08:45:57.187729 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v58s4"
Dec 09 08:45:57 crc kubenswrapper[4786]: I1209 08:45:57.187836 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 08:45:57 crc kubenswrapper[4786]: E1209 08:45:57.187942 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 09 08:45:57 crc kubenswrapper[4786]: E1209 08:45:57.188239 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v58s4" podUID="e6f68306-ac39-4d61-8c27-12d69cc49a4f"
Dec 09 08:45:57 crc kubenswrapper[4786]: E1209 08:45:57.188315 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 09 08:45:58 crc kubenswrapper[4786]: I1209 08:45:58.188034 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 08:45:58 crc kubenswrapper[4786]: E1209 08:45:58.188227 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 09 08:45:59 crc kubenswrapper[4786]: I1209 08:45:59.187343 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 08:45:59 crc kubenswrapper[4786]: I1209 08:45:59.187469 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 08:45:59 crc kubenswrapper[4786]: I1209 08:45:59.187801 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v58s4"
Dec 09 08:45:59 crc kubenswrapper[4786]: E1209 08:45:59.187884 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 09 08:45:59 crc kubenswrapper[4786]: E1209 08:45:59.188002 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v58s4" podUID="e6f68306-ac39-4d61-8c27-12d69cc49a4f"
Dec 09 08:45:59 crc kubenswrapper[4786]: E1209 08:45:59.188118 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 09 08:46:00 crc kubenswrapper[4786]: I1209 08:46:00.187638 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 09 08:46:00 crc kubenswrapper[4786]: E1209 08:46:00.187840 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 09 08:46:00 crc kubenswrapper[4786]: E1209 08:46:00.288062 4786 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Dec 09 08:46:01 crc kubenswrapper[4786]: I1209 08:46:01.187843 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v58s4"
Dec 09 08:46:01 crc kubenswrapper[4786]: E1209 08:46:01.188051 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v58s4" podUID="e6f68306-ac39-4d61-8c27-12d69cc49a4f"
Dec 09 08:46:01 crc kubenswrapper[4786]: I1209 08:46:01.188377 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 09 08:46:01 crc kubenswrapper[4786]: E1209 08:46:01.188569 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 09 08:46:01 crc kubenswrapper[4786]: I1209 08:46:01.188750 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 09 08:46:01 crc kubenswrapper[4786]: E1209 08:46:01.189063 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 08:46:02 crc kubenswrapper[4786]: I1209 08:46:02.187960 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:46:02 crc kubenswrapper[4786]: E1209 08:46:02.188471 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 08:46:02 crc kubenswrapper[4786]: I1209 08:46:02.189469 4786 scope.go:117] "RemoveContainer" containerID="550395f05c51e2fe4115f3e461577ebaab689ef68502025a76ca1bbeef9d6c2e" Dec 09 08:46:03 crc kubenswrapper[4786]: I1209 08:46:03.188003 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 08:46:03 crc kubenswrapper[4786]: I1209 08:46:03.188003 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 08:46:03 crc kubenswrapper[4786]: I1209 08:46:03.188014 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v58s4" Dec 09 08:46:03 crc kubenswrapper[4786]: E1209 08:46:03.188170 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 08:46:03 crc kubenswrapper[4786]: E1209 08:46:03.188546 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v58s4" podUID="e6f68306-ac39-4d61-8c27-12d69cc49a4f" Dec 09 08:46:03 crc kubenswrapper[4786]: E1209 08:46:03.188623 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 08:46:03 crc kubenswrapper[4786]: I1209 08:46:03.230349 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7sr4q_c8ebe4be-af09-4f22-9dee-af5f7d34bccf/ovnkube-controller/3.log" Dec 09 08:46:03 crc kubenswrapper[4786]: I1209 08:46:03.233323 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" event={"ID":"c8ebe4be-af09-4f22-9dee-af5f7d34bccf","Type":"ContainerStarted","Data":"0a11ae322f07d05d6ba984a2a381442d220da025e318455ee37079149bdf2795"} Dec 09 08:46:03 crc kubenswrapper[4786]: I1209 08:46:03.234041 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:46:03 crc kubenswrapper[4786]: I1209 08:46:03.272034 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" 
podStartSLOduration=106.271999809 podStartE2EDuration="1m46.271999809s" podCreationTimestamp="2025-12-09 08:44:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:46:03.269920298 +0000 UTC m=+129.153541564" watchObservedRunningTime="2025-12-09 08:46:03.271999809 +0000 UTC m=+129.155621075" Dec 09 08:46:03 crc kubenswrapper[4786]: I1209 08:46:03.396525 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-v58s4"] Dec 09 08:46:03 crc kubenswrapper[4786]: I1209 08:46:03.396747 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v58s4" Dec 09 08:46:03 crc kubenswrapper[4786]: E1209 08:46:03.396871 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v58s4" podUID="e6f68306-ac39-4d61-8c27-12d69cc49a4f" Dec 09 08:46:04 crc kubenswrapper[4786]: I1209 08:46:04.187815 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:46:04 crc kubenswrapper[4786]: E1209 08:46:04.188379 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 08:46:05 crc kubenswrapper[4786]: I1209 08:46:05.187555 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 08:46:05 crc kubenswrapper[4786]: I1209 08:46:05.187578 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 08:46:05 crc kubenswrapper[4786]: I1209 08:46:05.187660 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v58s4" Dec 09 08:46:05 crc kubenswrapper[4786]: E1209 08:46:05.189059 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 08:46:05 crc kubenswrapper[4786]: E1209 08:46:05.189122 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 08:46:05 crc kubenswrapper[4786]: E1209 08:46:05.189209 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v58s4" podUID="e6f68306-ac39-4d61-8c27-12d69cc49a4f" Dec 09 08:46:05 crc kubenswrapper[4786]: E1209 08:46:05.288936 4786 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 09 08:46:06 crc kubenswrapper[4786]: I1209 08:46:06.187644 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:46:06 crc kubenswrapper[4786]: E1209 08:46:06.187989 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 08:46:06 crc kubenswrapper[4786]: I1209 08:46:06.188107 4786 scope.go:117] "RemoveContainer" containerID="93a181a8bffa8c0699688d35605838622cbb0c39dc19d52a163484d5fddc2406" Dec 09 08:46:07 crc kubenswrapper[4786]: I1209 08:46:07.188174 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 08:46:07 crc kubenswrapper[4786]: I1209 08:46:07.188174 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v58s4" Dec 09 08:46:07 crc kubenswrapper[4786]: E1209 08:46:07.189746 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 08:46:07 crc kubenswrapper[4786]: I1209 08:46:07.188273 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 08:46:07 crc kubenswrapper[4786]: E1209 08:46:07.194921 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v58s4" podUID="e6f68306-ac39-4d61-8c27-12d69cc49a4f" Dec 09 08:46:07 crc kubenswrapper[4786]: E1209 08:46:07.194999 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 08:46:07 crc kubenswrapper[4786]: I1209 08:46:07.253929 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-27hfj_a0a865e2-8504-473d-a23f-fc682d053a9f/kube-multus/1.log" Dec 09 08:46:07 crc kubenswrapper[4786]: I1209 08:46:07.254007 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-27hfj" event={"ID":"a0a865e2-8504-473d-a23f-fc682d053a9f","Type":"ContainerStarted","Data":"78b49bec42031aef55dfe0a03cdef74583598f5415193cadf25124687a40ac47"} Dec 09 08:46:08 crc kubenswrapper[4786]: I1209 08:46:08.186993 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:46:08 crc kubenswrapper[4786]: E1209 08:46:08.187138 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 08:46:09 crc kubenswrapper[4786]: I1209 08:46:09.187339 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 08:46:09 crc kubenswrapper[4786]: I1209 08:46:09.187419 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 08:46:09 crc kubenswrapper[4786]: E1209 08:46:09.188191 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 09 08:46:09 crc kubenswrapper[4786]: I1209 08:46:09.187462 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v58s4" Dec 09 08:46:09 crc kubenswrapper[4786]: E1209 08:46:09.188200 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 09 08:46:09 crc kubenswrapper[4786]: E1209 08:46:09.188656 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-v58s4" podUID="e6f68306-ac39-4d61-8c27-12d69cc49a4f" Dec 09 08:46:10 crc kubenswrapper[4786]: I1209 08:46:10.187197 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:46:10 crc kubenswrapper[4786]: E1209 08:46:10.187471 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.187117 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.187679 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-v58s4" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.188234 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.190035 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.192893 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.192912 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.194621 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.296630 4786 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.392145 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-jxwdv"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.394290 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6tprk"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.395037 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jxwdv" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.396183 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6tprk" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.396214 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dklgh"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.397483 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dklgh" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.397849 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-crld4"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.398439 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-crld4" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.398920 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-qz5pt"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.410828 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.410992 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.411092 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.411182 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.412187 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qz5pt" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.412829 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-bbqcc"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.413127 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-bbqcc" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.413956 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-2zrxb"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.414142 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.414288 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.414528 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.414602 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.414794 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.414961 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.415077 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 09 08:46:11 
crc kubenswrapper[4786]: I1209 08:46:11.415170 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.415287 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.415466 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2zrxb" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.415583 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.415979 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.417481 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hp2jj"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.417984 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-hp2jj" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.419668 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.419991 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.420267 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.424718 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.427794 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.427794 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.428608 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-sgqjs"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.429546 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-sgqjs" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.430069 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-qt47g"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.431041 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-qt47g" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.432300 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-6jxs9"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.433085 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-6jxs9" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.433188 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-kzf64"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.433798 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-kzf64" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.434841 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4zfn7"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.435162 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4zfn7" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.438170 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b678t"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.438790 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-q5r5p"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.439262 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.439863 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b678t" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.443395 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-5hpj2"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.444045 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-5hpj2" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.444491 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-lspbg"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.448410 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-hk4xf"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.449048 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-hk4xf" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.449054 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lspbg" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.450727 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.450876 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.451134 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.451264 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.451376 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.451402 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6tprk"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.451613 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.451790 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.451613 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.451883 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 
08:46:11.451984 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.452003 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.452052 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.452114 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.452153 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.452218 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.452253 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.452326 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.451985 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.452457 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.453380 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hp2jj"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.454530 4786 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.455523 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.455682 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.455816 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.456497 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.459877 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45771b0-7251-4e48-83dc-49322a76677c-config\") pod \"route-controller-manager-6576b87f9c-crld4\" (UID: \"d45771b0-7251-4e48-83dc-49322a76677c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-crld4" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.459917 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpvp4\" (UniqueName: \"kubernetes.io/projected/138bf803-28c5-4a55-a0e4-48b3b2069673-kube-api-access-lpvp4\") pod \"controller-manager-879f6c89f-dklgh\" (UID: \"138bf803-28c5-4a55-a0e4-48b3b2069673\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dklgh" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.459943 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eee2fbc3-8db7-4a8d-befa-2c692b6b41be-trusted-ca\") 
pod \"console-operator-58897d9998-bbqcc\" (UID: \"eee2fbc3-8db7-4a8d-befa-2c692b6b41be\") " pod="openshift-console-operator/console-operator-58897d9998-bbqcc" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.459970 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eee2fbc3-8db7-4a8d-befa-2c692b6b41be-serving-cert\") pod \"console-operator-58897d9998-bbqcc\" (UID: \"eee2fbc3-8db7-4a8d-befa-2c692b6b41be\") " pod="openshift-console-operator/console-operator-58897d9998-bbqcc" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.459994 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngqg6\" (UniqueName: \"kubernetes.io/projected/25aee36a-c18c-4faf-842e-9208e234600d-kube-api-access-ngqg6\") pod \"apiserver-7bbb656c7d-jxwdv\" (UID: \"25aee36a-c18c-4faf-842e-9208e234600d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jxwdv" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.460027 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/25aee36a-c18c-4faf-842e-9208e234600d-audit-dir\") pod \"apiserver-7bbb656c7d-jxwdv\" (UID: \"25aee36a-c18c-4faf-842e-9208e234600d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jxwdv" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.460044 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkzvx\" (UniqueName: \"kubernetes.io/projected/d45771b0-7251-4e48-83dc-49322a76677c-kube-api-access-fkzvx\") pod \"route-controller-manager-6576b87f9c-crld4\" (UID: \"d45771b0-7251-4e48-83dc-49322a76677c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-crld4" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.460062 4786 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f6cfc59b-4750-4237-b606-0525b0188e67-auth-proxy-config\") pod \"machine-approver-56656f9798-qz5pt\" (UID: \"f6cfc59b-4750-4237-b606-0525b0188e67\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qz5pt" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.460079 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eee2fbc3-8db7-4a8d-befa-2c692b6b41be-config\") pod \"console-operator-58897d9998-bbqcc\" (UID: \"eee2fbc3-8db7-4a8d-befa-2c692b6b41be\") " pod="openshift-console-operator/console-operator-58897d9998-bbqcc" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.460099 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/138bf803-28c5-4a55-a0e4-48b3b2069673-config\") pod \"controller-manager-879f6c89f-dklgh\" (UID: \"138bf803-28c5-4a55-a0e4-48b3b2069673\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dklgh" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.460130 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25aee36a-c18c-4faf-842e-9208e234600d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-jxwdv\" (UID: \"25aee36a-c18c-4faf-842e-9208e234600d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jxwdv" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.460151 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45771b0-7251-4e48-83dc-49322a76677c-serving-cert\") pod \"route-controller-manager-6576b87f9c-crld4\" (UID: \"d45771b0-7251-4e48-83dc-49322a76677c\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-crld4" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.460169 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/138bf803-28c5-4a55-a0e4-48b3b2069673-serving-cert\") pod \"controller-manager-879f6c89f-dklgh\" (UID: \"138bf803-28c5-4a55-a0e4-48b3b2069673\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dklgh" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.460201 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/25aee36a-c18c-4faf-842e-9208e234600d-etcd-client\") pod \"apiserver-7bbb656c7d-jxwdv\" (UID: \"25aee36a-c18c-4faf-842e-9208e234600d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jxwdv" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.460222 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/25aee36a-c18c-4faf-842e-9208e234600d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-jxwdv\" (UID: \"25aee36a-c18c-4faf-842e-9208e234600d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jxwdv" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.460311 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d45771b0-7251-4e48-83dc-49322a76677c-client-ca\") pod \"route-controller-manager-6576b87f9c-crld4\" (UID: \"d45771b0-7251-4e48-83dc-49322a76677c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-crld4" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.460337 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9db7w\" (UniqueName: 
\"kubernetes.io/projected/eee2fbc3-8db7-4a8d-befa-2c692b6b41be-kube-api-access-9db7w\") pod \"console-operator-58897d9998-bbqcc\" (UID: \"eee2fbc3-8db7-4a8d-befa-2c692b6b41be\") " pod="openshift-console-operator/console-operator-58897d9998-bbqcc" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.460370 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/681d6d06-4cbc-4fe3-8a14-860764cc25d4-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-6tprk\" (UID: \"681d6d06-4cbc-4fe3-8a14-860764cc25d4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6tprk" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.460392 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f6cfc59b-4750-4237-b606-0525b0188e67-machine-approver-tls\") pod \"machine-approver-56656f9798-qz5pt\" (UID: \"f6cfc59b-4750-4237-b606-0525b0188e67\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qz5pt" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.460408 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25aee36a-c18c-4faf-842e-9208e234600d-serving-cert\") pod \"apiserver-7bbb656c7d-jxwdv\" (UID: \"25aee36a-c18c-4faf-842e-9208e234600d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jxwdv" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.460440 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/25aee36a-c18c-4faf-842e-9208e234600d-encryption-config\") pod \"apiserver-7bbb656c7d-jxwdv\" (UID: \"25aee36a-c18c-4faf-842e-9208e234600d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jxwdv" Dec 09 
08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.460458 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccjtc\" (UniqueName: \"kubernetes.io/projected/681d6d06-4cbc-4fe3-8a14-860764cc25d4-kube-api-access-ccjtc\") pod \"openshift-apiserver-operator-796bbdcf4f-6tprk\" (UID: \"681d6d06-4cbc-4fe3-8a14-860764cc25d4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6tprk" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.460484 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/681d6d06-4cbc-4fe3-8a14-860764cc25d4-config\") pod \"openshift-apiserver-operator-796bbdcf4f-6tprk\" (UID: \"681d6d06-4cbc-4fe3-8a14-860764cc25d4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6tprk" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.460569 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/138bf803-28c5-4a55-a0e4-48b3b2069673-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-dklgh\" (UID: \"138bf803-28c5-4a55-a0e4-48b3b2069673\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dklgh" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.460590 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6cfc59b-4750-4237-b606-0525b0188e67-config\") pod \"machine-approver-56656f9798-qz5pt\" (UID: \"f6cfc59b-4750-4237-b606-0525b0188e67\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qz5pt" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.460637 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/138bf803-28c5-4a55-a0e4-48b3b2069673-client-ca\") pod \"controller-manager-879f6c89f-dklgh\" (UID: \"138bf803-28c5-4a55-a0e4-48b3b2069673\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dklgh" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.460676 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9bzd\" (UniqueName: \"kubernetes.io/projected/f6cfc59b-4750-4237-b606-0525b0188e67-kube-api-access-s9bzd\") pod \"machine-approver-56656f9798-qz5pt\" (UID: \"f6cfc59b-4750-4237-b606-0525b0188e67\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qz5pt" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.460697 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/25aee36a-c18c-4faf-842e-9208e234600d-audit-policies\") pod \"apiserver-7bbb656c7d-jxwdv\" (UID: \"25aee36a-c18c-4faf-842e-9208e234600d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jxwdv" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.461571 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.461644 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.461944 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.461987 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.462134 4786 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"kube-root-ca.crt" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.462138 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.462264 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.462294 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.462454 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.462466 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.462513 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.462563 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.462616 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.462454 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.462767 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.462771 4786 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.462864 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.462916 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.462940 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.462995 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.462943 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.463162 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.463254 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.469794 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.494190 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.494442 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 09 08:46:11 crc kubenswrapper[4786]: 
I1209 08:46:11.498623 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.499005 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.499275 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.499350 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.499823 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.500076 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.500132 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.500129 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.500242 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.500314 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.500544 4786 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"image-registry-tls" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.500850 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.501089 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.501374 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.501460 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.502215 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.503317 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.504321 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.504505 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-2zrxb"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.505117 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.508767 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 09 08:46:11 crc 
kubenswrapper[4786]: I1209 08:46:11.520641 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.536902 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-sgqjs"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.537099 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.538361 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-jxwdv"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.541029 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.550827 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-bbqcc"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.550910 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-kzf64"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.555910 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.558028 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.559944 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-qt47g"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.560856 4786 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rkhzg"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.560984 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.561305 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.561352 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45771b0-7251-4e48-83dc-49322a76677c-config\") pod \"route-controller-manager-6576b87f9c-crld4\" (UID: \"d45771b0-7251-4e48-83dc-49322a76677c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-crld4" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.561379 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpvp4\" (UniqueName: \"kubernetes.io/projected/138bf803-28c5-4a55-a0e4-48b3b2069673-kube-api-access-lpvp4\") pod \"controller-manager-879f6c89f-dklgh\" (UID: \"138bf803-28c5-4a55-a0e4-48b3b2069673\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dklgh" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.561438 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eee2fbc3-8db7-4a8d-befa-2c692b6b41be-serving-cert\") pod \"console-operator-58897d9998-bbqcc\" (UID: \"eee2fbc3-8db7-4a8d-befa-2c692b6b41be\") " pod="openshift-console-operator/console-operator-58897d9998-bbqcc" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.561461 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eee2fbc3-8db7-4a8d-befa-2c692b6b41be-trusted-ca\") pod 
\"console-operator-58897d9998-bbqcc\" (UID: \"eee2fbc3-8db7-4a8d-befa-2c692b6b41be\") " pod="openshift-console-operator/console-operator-58897d9998-bbqcc" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.561485 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f011e81c-b463-4190-9ae5-73703f390ae8-node-pullsecrets\") pod \"apiserver-76f77b778f-qt47g\" (UID: \"f011e81c-b463-4190-9ae5-73703f390ae8\") " pod="openshift-apiserver/apiserver-76f77b778f-qt47g" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.561506 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngqg6\" (UniqueName: \"kubernetes.io/projected/25aee36a-c18c-4faf-842e-9208e234600d-kube-api-access-ngqg6\") pod \"apiserver-7bbb656c7d-jxwdv\" (UID: \"25aee36a-c18c-4faf-842e-9208e234600d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jxwdv" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.561522 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f011e81c-b463-4190-9ae5-73703f390ae8-trusted-ca-bundle\") pod \"apiserver-76f77b778f-qt47g\" (UID: \"f011e81c-b463-4190-9ae5-73703f390ae8\") " pod="openshift-apiserver/apiserver-76f77b778f-qt47g" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.561547 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/25aee36a-c18c-4faf-842e-9208e234600d-audit-dir\") pod \"apiserver-7bbb656c7d-jxwdv\" (UID: \"25aee36a-c18c-4faf-842e-9208e234600d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jxwdv" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.561567 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkzvx\" (UniqueName: 
\"kubernetes.io/projected/d45771b0-7251-4e48-83dc-49322a76677c-kube-api-access-fkzvx\") pod \"route-controller-manager-6576b87f9c-crld4\" (UID: \"d45771b0-7251-4e48-83dc-49322a76677c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-crld4" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.561590 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/486f2347-35fe-4d75-ae30-5074d59b49a1-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-kzf64\" (UID: \"486f2347-35fe-4d75-ae30-5074d59b49a1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kzf64" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.561611 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f6cfc59b-4750-4237-b606-0525b0188e67-auth-proxy-config\") pod \"machine-approver-56656f9798-qz5pt\" (UID: \"f6cfc59b-4750-4237-b606-0525b0188e67\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qz5pt" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.561675 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eee2fbc3-8db7-4a8d-befa-2c692b6b41be-config\") pod \"console-operator-58897d9998-bbqcc\" (UID: \"eee2fbc3-8db7-4a8d-befa-2c692b6b41be\") " pod="openshift-console-operator/console-operator-58897d9998-bbqcc" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.561711 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/486f2347-35fe-4d75-ae30-5074d59b49a1-service-ca-bundle\") pod \"authentication-operator-69f744f599-kzf64\" (UID: \"486f2347-35fe-4d75-ae30-5074d59b49a1\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-kzf64" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.561744 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/138bf803-28c5-4a55-a0e4-48b3b2069673-config\") pod \"controller-manager-879f6c89f-dklgh\" (UID: \"138bf803-28c5-4a55-a0e4-48b3b2069673\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dklgh" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.561800 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25aee36a-c18c-4faf-842e-9208e234600d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-jxwdv\" (UID: \"25aee36a-c18c-4faf-842e-9208e234600d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jxwdv" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.561829 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f011e81c-b463-4190-9ae5-73703f390ae8-audit-dir\") pod \"apiserver-76f77b778f-qt47g\" (UID: \"f011e81c-b463-4190-9ae5-73703f390ae8\") " pod="openshift-apiserver/apiserver-76f77b778f-qt47g" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.561855 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f011e81c-b463-4190-9ae5-73703f390ae8-config\") pod \"apiserver-76f77b778f-qt47g\" (UID: \"f011e81c-b463-4190-9ae5-73703f390ae8\") " pod="openshift-apiserver/apiserver-76f77b778f-qt47g" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.561871 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mg4hh"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.562387 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f6cfc59b-4750-4237-b606-0525b0188e67-auth-proxy-config\") pod \"machine-approver-56656f9798-qz5pt\" (UID: \"f6cfc59b-4750-4237-b606-0525b0188e67\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qz5pt" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.562458 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-mg4hh" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.562612 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/25aee36a-c18c-4faf-842e-9208e234600d-audit-dir\") pod \"apiserver-7bbb656c7d-jxwdv\" (UID: \"25aee36a-c18c-4faf-842e-9208e234600d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jxwdv" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.562641 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8zhmp"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.562770 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rkhzg" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.561876 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f011e81c-b463-4190-9ae5-73703f390ae8-audit\") pod \"apiserver-76f77b778f-qt47g\" (UID: \"f011e81c-b463-4190-9ae5-73703f390ae8\") " pod="openshift-apiserver/apiserver-76f77b778f-qt47g" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.563000 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45771b0-7251-4e48-83dc-49322a76677c-serving-cert\") pod \"route-controller-manager-6576b87f9c-crld4\" (UID: \"d45771b0-7251-4e48-83dc-49322a76677c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-crld4" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.563023 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/138bf803-28c5-4a55-a0e4-48b3b2069673-serving-cert\") pod \"controller-manager-879f6c89f-dklgh\" (UID: \"138bf803-28c5-4a55-a0e4-48b3b2069673\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dklgh" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.563047 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f011e81c-b463-4190-9ae5-73703f390ae8-image-import-ca\") pod \"apiserver-76f77b778f-qt47g\" (UID: \"f011e81c-b463-4190-9ae5-73703f390ae8\") " pod="openshift-apiserver/apiserver-76f77b778f-qt47g" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.563091 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/25aee36a-c18c-4faf-842e-9208e234600d-etcd-client\") pod \"apiserver-7bbb656c7d-jxwdv\" (UID: \"25aee36a-c18c-4faf-842e-9208e234600d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jxwdv" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.563119 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8vxw\" (UniqueName: \"kubernetes.io/projected/4364d199-81a4-4500-990d-9f2bcdc66186-kube-api-access-f8vxw\") pod \"downloads-7954f5f757-6jxs9\" (UID: \"4364d199-81a4-4500-990d-9f2bcdc66186\") " pod="openshift-console/downloads-7954f5f757-6jxs9" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.563139 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f011e81c-b463-4190-9ae5-73703f390ae8-serving-cert\") pod \"apiserver-76f77b778f-qt47g\" (UID: \"f011e81c-b463-4190-9ae5-73703f390ae8\") " pod="openshift-apiserver/apiserver-76f77b778f-qt47g" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.563159 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9db7w\" (UniqueName: \"kubernetes.io/projected/eee2fbc3-8db7-4a8d-befa-2c692b6b41be-kube-api-access-9db7w\") pod \"console-operator-58897d9998-bbqcc\" (UID: \"eee2fbc3-8db7-4a8d-befa-2c692b6b41be\") " pod="openshift-console-operator/console-operator-58897d9998-bbqcc" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.563190 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/25aee36a-c18c-4faf-842e-9208e234600d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-jxwdv\" (UID: \"25aee36a-c18c-4faf-842e-9208e234600d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jxwdv" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.563215 4786 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d45771b0-7251-4e48-83dc-49322a76677c-client-ca\") pod \"route-controller-manager-6576b87f9c-crld4\" (UID: \"d45771b0-7251-4e48-83dc-49322a76677c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-crld4" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.563270 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8zhmp" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.563276 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f011e81c-b463-4190-9ae5-73703f390ae8-etcd-serving-ca\") pod \"apiserver-76f77b778f-qt47g\" (UID: \"f011e81c-b463-4190-9ae5-73703f390ae8\") " pod="openshift-apiserver/apiserver-76f77b778f-qt47g" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.563308 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrgg8\" (UniqueName: \"kubernetes.io/projected/f011e81c-b463-4190-9ae5-73703f390ae8-kube-api-access-vrgg8\") pod \"apiserver-76f77b778f-qt47g\" (UID: \"f011e81c-b463-4190-9ae5-73703f390ae8\") " pod="openshift-apiserver/apiserver-76f77b778f-qt47g" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.563339 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/681d6d06-4cbc-4fe3-8a14-860764cc25d4-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-6tprk\" (UID: \"681d6d06-4cbc-4fe3-8a14-860764cc25d4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6tprk" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.563368 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/25aee36a-c18c-4faf-842e-9208e234600d-serving-cert\") pod \"apiserver-7bbb656c7d-jxwdv\" (UID: \"25aee36a-c18c-4faf-842e-9208e234600d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jxwdv" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.563392 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/25aee36a-c18c-4faf-842e-9208e234600d-encryption-config\") pod \"apiserver-7bbb656c7d-jxwdv\" (UID: \"25aee36a-c18c-4faf-842e-9208e234600d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jxwdv" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.563412 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccjtc\" (UniqueName: \"kubernetes.io/projected/681d6d06-4cbc-4fe3-8a14-860764cc25d4-kube-api-access-ccjtc\") pod \"openshift-apiserver-operator-796bbdcf4f-6tprk\" (UID: \"681d6d06-4cbc-4fe3-8a14-860764cc25d4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6tprk" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.563457 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f6cfc59b-4750-4237-b606-0525b0188e67-machine-approver-tls\") pod \"machine-approver-56656f9798-qz5pt\" (UID: \"f6cfc59b-4750-4237-b606-0525b0188e67\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qz5pt" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.563469 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45771b0-7251-4e48-83dc-49322a76677c-config\") pod \"route-controller-manager-6576b87f9c-crld4\" (UID: \"d45771b0-7251-4e48-83dc-49322a76677c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-crld4" Dec 09 08:46:11 crc kubenswrapper[4786]: 
I1209 08:46:11.563483 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/681d6d06-4cbc-4fe3-8a14-860764cc25d4-config\") pod \"openshift-apiserver-operator-796bbdcf4f-6tprk\" (UID: \"681d6d06-4cbc-4fe3-8a14-860764cc25d4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6tprk" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.563502 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/486f2347-35fe-4d75-ae30-5074d59b49a1-config\") pod \"authentication-operator-69f744f599-kzf64\" (UID: \"486f2347-35fe-4d75-ae30-5074d59b49a1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kzf64" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.563517 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f011e81c-b463-4190-9ae5-73703f390ae8-etcd-client\") pod \"apiserver-76f77b778f-qt47g\" (UID: \"f011e81c-b463-4190-9ae5-73703f390ae8\") " pod="openshift-apiserver/apiserver-76f77b778f-qt47g" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.563536 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6cfc59b-4750-4237-b606-0525b0188e67-config\") pod \"machine-approver-56656f9798-qz5pt\" (UID: \"f6cfc59b-4750-4237-b606-0525b0188e67\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qz5pt" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.563577 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/138bf803-28c5-4a55-a0e4-48b3b2069673-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-dklgh\" (UID: \"138bf803-28c5-4a55-a0e4-48b3b2069673\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-dklgh" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.563608 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/486f2347-35fe-4d75-ae30-5074d59b49a1-serving-cert\") pod \"authentication-operator-69f744f599-kzf64\" (UID: \"486f2347-35fe-4d75-ae30-5074d59b49a1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kzf64" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.563636 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/138bf803-28c5-4a55-a0e4-48b3b2069673-client-ca\") pod \"controller-manager-879f6c89f-dklgh\" (UID: \"138bf803-28c5-4a55-a0e4-48b3b2069673\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dklgh" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.563660 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9bzd\" (UniqueName: \"kubernetes.io/projected/f6cfc59b-4750-4237-b606-0525b0188e67-kube-api-access-s9bzd\") pod \"machine-approver-56656f9798-qz5pt\" (UID: \"f6cfc59b-4750-4237-b606-0525b0188e67\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qz5pt" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.563680 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/25aee36a-c18c-4faf-842e-9208e234600d-audit-policies\") pod \"apiserver-7bbb656c7d-jxwdv\" (UID: \"25aee36a-c18c-4faf-842e-9208e234600d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jxwdv" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.563699 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbfh7\" (UniqueName: 
\"kubernetes.io/projected/486f2347-35fe-4d75-ae30-5074d59b49a1-kube-api-access-zbfh7\") pod \"authentication-operator-69f744f599-kzf64\" (UID: \"486f2347-35fe-4d75-ae30-5074d59b49a1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kzf64" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.563714 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f011e81c-b463-4190-9ae5-73703f390ae8-encryption-config\") pod \"apiserver-76f77b778f-qt47g\" (UID: \"f011e81c-b463-4190-9ae5-73703f390ae8\") " pod="openshift-apiserver/apiserver-76f77b778f-qt47g" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.563889 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.567038 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eee2fbc3-8db7-4a8d-befa-2c692b6b41be-config\") pod \"console-operator-58897d9998-bbqcc\" (UID: \"eee2fbc3-8db7-4a8d-befa-2c692b6b41be\") " pod="openshift-console-operator/console-operator-58897d9998-bbqcc" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.567441 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/25aee36a-c18c-4faf-842e-9208e234600d-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-jxwdv\" (UID: \"25aee36a-c18c-4faf-842e-9208e234600d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jxwdv" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.567728 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.569072 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/25aee36a-c18c-4faf-842e-9208e234600d-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-jxwdv\" (UID: \"25aee36a-c18c-4faf-842e-9208e234600d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jxwdv" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.569498 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/681d6d06-4cbc-4fe3-8a14-860764cc25d4-config\") pod \"openshift-apiserver-operator-796bbdcf4f-6tprk\" (UID: \"681d6d06-4cbc-4fe3-8a14-860764cc25d4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6tprk" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.569556 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d45771b0-7251-4e48-83dc-49322a76677c-client-ca\") pod \"route-controller-manager-6576b87f9c-crld4\" (UID: \"d45771b0-7251-4e48-83dc-49322a76677c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-crld4" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.570570 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.570969 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/25aee36a-c18c-4faf-842e-9208e234600d-audit-policies\") pod \"apiserver-7bbb656c7d-jxwdv\" (UID: \"25aee36a-c18c-4faf-842e-9208e234600d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jxwdv" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.571046 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-r5zvb"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.571586 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r5zvb" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.571801 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.571819 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/138bf803-28c5-4a55-a0e4-48b3b2069673-config\") pod \"controller-manager-879f6c89f-dklgh\" (UID: \"138bf803-28c5-4a55-a0e4-48b3b2069673\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dklgh" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.571987 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.572211 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6cfc59b-4750-4237-b606-0525b0188e67-config\") pod \"machine-approver-56656f9798-qz5pt\" (UID: \"f6cfc59b-4750-4237-b606-0525b0188e67\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qz5pt" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.573056 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/25aee36a-c18c-4faf-842e-9208e234600d-etcd-client\") pod \"apiserver-7bbb656c7d-jxwdv\" (UID: \"25aee36a-c18c-4faf-842e-9208e234600d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jxwdv" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.573659 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r9n44"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.574581 4786 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r9n44" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.575187 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/138bf803-28c5-4a55-a0e4-48b3b2069673-serving-cert\") pod \"controller-manager-879f6c89f-dklgh\" (UID: \"138bf803-28c5-4a55-a0e4-48b3b2069673\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dklgh" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.575652 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.575715 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eee2fbc3-8db7-4a8d-befa-2c692b6b41be-trusted-ca\") pod \"console-operator-58897d9998-bbqcc\" (UID: \"eee2fbc3-8db7-4a8d-befa-2c692b6b41be\") " pod="openshift-console-operator/console-operator-58897d9998-bbqcc" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.576376 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eee2fbc3-8db7-4a8d-befa-2c692b6b41be-serving-cert\") pod \"console-operator-58897d9998-bbqcc\" (UID: \"eee2fbc3-8db7-4a8d-befa-2c692b6b41be\") " pod="openshift-console-operator/console-operator-58897d9998-bbqcc" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.600178 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/138bf803-28c5-4a55-a0e4-48b3b2069673-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-dklgh\" (UID: \"138bf803-28c5-4a55-a0e4-48b3b2069673\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dklgh" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.600241 
4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45771b0-7251-4e48-83dc-49322a76677c-serving-cert\") pod \"route-controller-manager-6576b87f9c-crld4\" (UID: \"d45771b0-7251-4e48-83dc-49322a76677c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-crld4" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.600358 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.601040 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/681d6d06-4cbc-4fe3-8a14-860764cc25d4-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-6tprk\" (UID: \"681d6d06-4cbc-4fe3-8a14-860764cc25d4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6tprk" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.601349 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.601466 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25aee36a-c18c-4faf-842e-9208e234600d-serving-cert\") pod \"apiserver-7bbb656c7d-jxwdv\" (UID: \"25aee36a-c18c-4faf-842e-9208e234600d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jxwdv" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.601939 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/25aee36a-c18c-4faf-842e-9208e234600d-encryption-config\") pod \"apiserver-7bbb656c7d-jxwdv\" (UID: \"25aee36a-c18c-4faf-842e-9208e234600d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jxwdv" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.602023 4786 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/f6cfc59b-4750-4237-b606-0525b0188e67-machine-approver-tls\") pod \"machine-approver-56656f9798-qz5pt\" (UID: \"f6cfc59b-4750-4237-b606-0525b0188e67\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qz5pt" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.602088 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d4nsz"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.602185 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/138bf803-28c5-4a55-a0e4-48b3b2069673-client-ca\") pod \"controller-manager-879f6c89f-dklgh\" (UID: \"138bf803-28c5-4a55-a0e4-48b3b2069673\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dklgh" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.603590 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d4nsz" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.604058 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f277g"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.604298 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.604864 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f277g" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.606236 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-2mdnq"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.607852 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2mdnq" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.608284 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wnjtx"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.609320 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wnjtx" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.612558 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.614388 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjqvk"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.620818 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.621029 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6j88p"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.621233 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjqvk" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.622999 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6j88p" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.623491 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mqqhj"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.624055 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mqqhj" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.624903 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-thfng"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.625636 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-thfng" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.626266 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-976g7"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.627781 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-h4jkc"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.627831 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-976g7" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.628505 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-h4jkc" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.629178 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-vkn5t"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.633628 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421165-4fkk9"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.634087 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hprgt"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.634452 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-wp4v8"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.634628 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-vkn5t" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.634763 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hprgt" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.634855 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-c9v4k"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.634642 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421165-4fkk9" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.635199 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-wp4v8" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.635254 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-c9v4k" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.635731 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-z2gs4"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.636401 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-z2gs4" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.638618 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dklgh"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.638937 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.639736 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-crld4"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.640806 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-hk4xf"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.641860 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4zfn7"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.642891 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b678t"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.644639 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-5hpj2"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.645059 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-image-registry/image-registry-697d97f7c8-q5r5p"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.646306 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mg4hh"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.648169 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-r5zvb"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.649846 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f277g"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.651627 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-ml4fq"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.652648 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-hgb5b"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.653782 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hgb5b" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.653989 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-ml4fq" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.654406 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-2mdnq"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.655200 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-lspbg"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.656479 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wnjtx"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.657761 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-h4jkc"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.658262 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.660000 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-thfng"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.662913 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8zhmp"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.664255 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f011e81c-b463-4190-9ae5-73703f390ae8-audit-dir\") pod \"apiserver-76f77b778f-qt47g\" (UID: \"f011e81c-b463-4190-9ae5-73703f390ae8\") " pod="openshift-apiserver/apiserver-76f77b778f-qt47g" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.664297 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f011e81c-b463-4190-9ae5-73703f390ae8-config\") pod \"apiserver-76f77b778f-qt47g\" (UID: \"f011e81c-b463-4190-9ae5-73703f390ae8\") " pod="openshift-apiserver/apiserver-76f77b778f-qt47g" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.664369 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f011e81c-b463-4190-9ae5-73703f390ae8-audit-dir\") pod \"apiserver-76f77b778f-qt47g\" (UID: \"f011e81c-b463-4190-9ae5-73703f390ae8\") " pod="openshift-apiserver/apiserver-76f77b778f-qt47g" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.665017 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f011e81c-b463-4190-9ae5-73703f390ae8-config\") pod \"apiserver-76f77b778f-qt47g\" (UID: \"f011e81c-b463-4190-9ae5-73703f390ae8\") " pod="openshift-apiserver/apiserver-76f77b778f-qt47g" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.665059 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f011e81c-b463-4190-9ae5-73703f390ae8-audit\") pod \"apiserver-76f77b778f-qt47g\" (UID: \"f011e81c-b463-4190-9ae5-73703f390ae8\") " pod="openshift-apiserver/apiserver-76f77b778f-qt47g" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.665083 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f011e81c-b463-4190-9ae5-73703f390ae8-image-import-ca\") pod \"apiserver-76f77b778f-qt47g\" (UID: \"f011e81c-b463-4190-9ae5-73703f390ae8\") " pod="openshift-apiserver/apiserver-76f77b778f-qt47g" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.665114 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8vxw\" (UniqueName: 
\"kubernetes.io/projected/4364d199-81a4-4500-990d-9f2bcdc66186-kube-api-access-f8vxw\") pod \"downloads-7954f5f757-6jxs9\" (UID: \"4364d199-81a4-4500-990d-9f2bcdc66186\") " pod="openshift-console/downloads-7954f5f757-6jxs9" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.665132 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f011e81c-b463-4190-9ae5-73703f390ae8-serving-cert\") pod \"apiserver-76f77b778f-qt47g\" (UID: \"f011e81c-b463-4190-9ae5-73703f390ae8\") " pod="openshift-apiserver/apiserver-76f77b778f-qt47g" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.665178 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f011e81c-b463-4190-9ae5-73703f390ae8-etcd-serving-ca\") pod \"apiserver-76f77b778f-qt47g\" (UID: \"f011e81c-b463-4190-9ae5-73703f390ae8\") " pod="openshift-apiserver/apiserver-76f77b778f-qt47g" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.665209 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vrgg8\" (UniqueName: \"kubernetes.io/projected/f011e81c-b463-4190-9ae5-73703f390ae8-kube-api-access-vrgg8\") pod \"apiserver-76f77b778f-qt47g\" (UID: \"f011e81c-b463-4190-9ae5-73703f390ae8\") " pod="openshift-apiserver/apiserver-76f77b778f-qt47g" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.665249 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/486f2347-35fe-4d75-ae30-5074d59b49a1-config\") pod \"authentication-operator-69f744f599-kzf64\" (UID: \"486f2347-35fe-4d75-ae30-5074d59b49a1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kzf64" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.665269 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/486f2347-35fe-4d75-ae30-5074d59b49a1-serving-cert\") pod \"authentication-operator-69f744f599-kzf64\" (UID: \"486f2347-35fe-4d75-ae30-5074d59b49a1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kzf64" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.665285 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f011e81c-b463-4190-9ae5-73703f390ae8-etcd-client\") pod \"apiserver-76f77b778f-qt47g\" (UID: \"f011e81c-b463-4190-9ae5-73703f390ae8\") " pod="openshift-apiserver/apiserver-76f77b778f-qt47g" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.665315 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbfh7\" (UniqueName: \"kubernetes.io/projected/486f2347-35fe-4d75-ae30-5074d59b49a1-kube-api-access-zbfh7\") pod \"authentication-operator-69f744f599-kzf64\" (UID: \"486f2347-35fe-4d75-ae30-5074d59b49a1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kzf64" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.665334 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f011e81c-b463-4190-9ae5-73703f390ae8-encryption-config\") pod \"apiserver-76f77b778f-qt47g\" (UID: \"f011e81c-b463-4190-9ae5-73703f390ae8\") " pod="openshift-apiserver/apiserver-76f77b778f-qt47g" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.665366 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f011e81c-b463-4190-9ae5-73703f390ae8-node-pullsecrets\") pod \"apiserver-76f77b778f-qt47g\" (UID: \"f011e81c-b463-4190-9ae5-73703f390ae8\") " pod="openshift-apiserver/apiserver-76f77b778f-qt47g" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.665392 4786 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f011e81c-b463-4190-9ae5-73703f390ae8-trusted-ca-bundle\") pod \"apiserver-76f77b778f-qt47g\" (UID: \"f011e81c-b463-4190-9ae5-73703f390ae8\") " pod="openshift-apiserver/apiserver-76f77b778f-qt47g" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.665446 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/486f2347-35fe-4d75-ae30-5074d59b49a1-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-kzf64\" (UID: \"486f2347-35fe-4d75-ae30-5074d59b49a1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kzf64" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.665468 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/486f2347-35fe-4d75-ae30-5074d59b49a1-service-ca-bundle\") pod \"authentication-operator-69f744f599-kzf64\" (UID: \"486f2347-35fe-4d75-ae30-5074d59b49a1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kzf64" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.666327 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f011e81c-b463-4190-9ae5-73703f390ae8-node-pullsecrets\") pod \"apiserver-76f77b778f-qt47g\" (UID: \"f011e81c-b463-4190-9ae5-73703f390ae8\") " pod="openshift-apiserver/apiserver-76f77b778f-qt47g" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.666394 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f011e81c-b463-4190-9ae5-73703f390ae8-image-import-ca\") pod \"apiserver-76f77b778f-qt47g\" (UID: \"f011e81c-b463-4190-9ae5-73703f390ae8\") " pod="openshift-apiserver/apiserver-76f77b778f-qt47g" Dec 09 08:46:11 crc 
kubenswrapper[4786]: I1209 08:46:11.665081 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f011e81c-b463-4190-9ae5-73703f390ae8-audit\") pod \"apiserver-76f77b778f-qt47g\" (UID: \"f011e81c-b463-4190-9ae5-73703f390ae8\") " pod="openshift-apiserver/apiserver-76f77b778f-qt47g" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.667242 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/486f2347-35fe-4d75-ae30-5074d59b49a1-service-ca-bundle\") pod \"authentication-operator-69f744f599-kzf64\" (UID: \"486f2347-35fe-4d75-ae30-5074d59b49a1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kzf64" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.667269 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f011e81c-b463-4190-9ae5-73703f390ae8-trusted-ca-bundle\") pod \"apiserver-76f77b778f-qt47g\" (UID: \"f011e81c-b463-4190-9ae5-73703f390ae8\") " pod="openshift-apiserver/apiserver-76f77b778f-qt47g" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.667445 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f011e81c-b463-4190-9ae5-73703f390ae8-etcd-serving-ca\") pod \"apiserver-76f77b778f-qt47g\" (UID: \"f011e81c-b463-4190-9ae5-73703f390ae8\") " pod="openshift-apiserver/apiserver-76f77b778f-qt47g" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.667667 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/486f2347-35fe-4d75-ae30-5074d59b49a1-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-kzf64\" (UID: \"486f2347-35fe-4d75-ae30-5074d59b49a1\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-kzf64" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.668085 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/486f2347-35fe-4d75-ae30-5074d59b49a1-config\") pod \"authentication-operator-69f744f599-kzf64\" (UID: \"486f2347-35fe-4d75-ae30-5074d59b49a1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kzf64" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.668727 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r9n44"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.669960 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f011e81c-b463-4190-9ae5-73703f390ae8-etcd-client\") pod \"apiserver-76f77b778f-qt47g\" (UID: \"f011e81c-b463-4190-9ae5-73703f390ae8\") " pod="openshift-apiserver/apiserver-76f77b778f-qt47g" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.670587 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/486f2347-35fe-4d75-ae30-5074d59b49a1-serving-cert\") pod \"authentication-operator-69f744f599-kzf64\" (UID: \"486f2347-35fe-4d75-ae30-5074d59b49a1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kzf64" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.670736 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f011e81c-b463-4190-9ae5-73703f390ae8-serving-cert\") pod \"apiserver-76f77b778f-qt47g\" (UID: \"f011e81c-b463-4190-9ae5-73703f390ae8\") " pod="openshift-apiserver/apiserver-76f77b778f-qt47g" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.671466 4786 kubelet.go:2428] "SyncLoop 
UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rkhzg"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.673484 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mqqhj"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.675524 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6j88p"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.675577 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hprgt"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.676259 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d4nsz"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.677368 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-6jxs9"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.677480 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.679826 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-ml4fq"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.681333 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-hgb5b"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.681506 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjqvk"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.682505 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-c9v4k"] Dec 09 
08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.683043 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f011e81c-b463-4190-9ae5-73703f390ae8-encryption-config\") pod \"apiserver-76f77b778f-qt47g\" (UID: \"f011e81c-b463-4190-9ae5-73703f390ae8\") " pod="openshift-apiserver/apiserver-76f77b778f-qt47g" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.683515 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-976g7"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.689576 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421165-4fkk9"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.700309 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.700504 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-z2gs4"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.701097 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-6xmz9"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.702458 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-6xmz9" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.702577 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-6xmz9"] Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.729228 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.737955 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.758695 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.833829 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngqg6\" (UniqueName: \"kubernetes.io/projected/25aee36a-c18c-4faf-842e-9208e234600d-kube-api-access-ngqg6\") pod \"apiserver-7bbb656c7d-jxwdv\" (UID: \"25aee36a-c18c-4faf-842e-9208e234600d\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jxwdv" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.853289 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkzvx\" (UniqueName: \"kubernetes.io/projected/d45771b0-7251-4e48-83dc-49322a76677c-kube-api-access-fkzvx\") pod \"route-controller-manager-6576b87f9c-crld4\" (UID: \"d45771b0-7251-4e48-83dc-49322a76677c\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-crld4" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.858496 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.878220 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 
09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.898298 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.918672 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.938861 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.959159 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.978578 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 09 08:46:11 crc kubenswrapper[4786]: I1209 08:46:11.999173 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.019217 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.038994 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jxwdv" Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.039020 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.088900 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpvp4\" (UniqueName: \"kubernetes.io/projected/138bf803-28c5-4a55-a0e4-48b3b2069673-kube-api-access-lpvp4\") pod \"controller-manager-879f6c89f-dklgh\" (UID: \"138bf803-28c5-4a55-a0e4-48b3b2069673\") " pod="openshift-controller-manager/controller-manager-879f6c89f-dklgh" Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.098767 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.106902 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dklgh" Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.108595 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccjtc\" (UniqueName: \"kubernetes.io/projected/681d6d06-4cbc-4fe3-8a14-860764cc25d4-kube-api-access-ccjtc\") pod \"openshift-apiserver-operator-796bbdcf4f-6tprk\" (UID: \"681d6d06-4cbc-4fe3-8a14-860764cc25d4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6tprk" Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.118933 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.144217 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-crld4" Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.168542 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9db7w\" (UniqueName: \"kubernetes.io/projected/eee2fbc3-8db7-4a8d-befa-2c692b6b41be-kube-api-access-9db7w\") pod \"console-operator-58897d9998-bbqcc\" (UID: \"eee2fbc3-8db7-4a8d-befa-2c692b6b41be\") " pod="openshift-console-operator/console-operator-58897d9998-bbqcc" Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.170484 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-bbqcc" Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.174874 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9bzd\" (UniqueName: \"kubernetes.io/projected/f6cfc59b-4750-4237-b606-0525b0188e67-kube-api-access-s9bzd\") pod \"machine-approver-56656f9798-qz5pt\" (UID: \"f6cfc59b-4750-4237-b606-0525b0188e67\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qz5pt" Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.178872 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.187570 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.199767 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.218501 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.239828 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.259329 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.259486 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-jxwdv"] Dec 09 08:46:12 crc kubenswrapper[4786]: W1209 08:46:12.273215 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25aee36a_c18c_4faf_842e_9208e234600d.slice/crio-d14790a7aa21497ec4c6a69a717f1f914e3f151fafda8183f18b2dad3bfb4d90 WatchSource:0}: Error finding container d14790a7aa21497ec4c6a69a717f1f914e3f151fafda8183f18b2dad3bfb4d90: Status 404 returned error can't find the container with id d14790a7aa21497ec4c6a69a717f1f914e3f151fafda8183f18b2dad3bfb4d90 Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.282026 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.299068 4786 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.319272 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.340736 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dklgh"] Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.341516 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 09 08:46:12 crc kubenswrapper[4786]: W1209 08:46:12.346927 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod138bf803_28c5_4a55_a0e4_48b3b2069673.slice/crio-28d8beed1bf7be28f107b8e3c9b7e8a0a88547cd8e98e5abe3dc03ed9fc6765c WatchSource:0}: Error finding container 28d8beed1bf7be28f107b8e3c9b7e8a0a88547cd8e98e5abe3dc03ed9fc6765c: Status 404 returned error can't find the container with id 28d8beed1bf7be28f107b8e3c9b7e8a0a88547cd8e98e5abe3dc03ed9fc6765c Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.350324 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6tprk" Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.365044 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.368614 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-crld4"] Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.382206 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 09 08:46:12 crc kubenswrapper[4786]: W1209 08:46:12.388949 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd45771b0_7251_4e48_83dc_49322a76677c.slice/crio-237258b77eccb606d57ccddf65914afa8c3c96a5419530fb9f8dde9d10bf8830 WatchSource:0}: Error finding container 237258b77eccb606d57ccddf65914afa8c3c96a5419530fb9f8dde9d10bf8830: Status 404 returned error can't find the container with id 237258b77eccb606d57ccddf65914afa8c3c96a5419530fb9f8dde9d10bf8830 Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.397762 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.407157 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-bbqcc"] Dec 09 08:46:12 crc kubenswrapper[4786]: W1209 08:46:12.418313 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeee2fbc3_8db7_4a8d_befa_2c692b6b41be.slice/crio-80ccd1bf0fd0d5670095b09296e1375d036af5380003462b3881f33c557ab122 
WatchSource:0}: Error finding container 80ccd1bf0fd0d5670095b09296e1375d036af5380003462b3881f33c557ab122: Status 404 returned error can't find the container with id 80ccd1bf0fd0d5670095b09296e1375d036af5380003462b3881f33c557ab122 Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.420819 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.438268 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.456418 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qz5pt" Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.458165 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.478655 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.498301 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.519089 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.535147 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6tprk"] Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.538451 4786 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 09 08:46:12 crc kubenswrapper[4786]: W1209 08:46:12.544541 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod681d6d06_4cbc_4fe3_8a14_860764cc25d4.slice/crio-dc11713005807e954825b68b3514a2b84b628faaf64f20373294eb8b6489d748 WatchSource:0}: Error finding container dc11713005807e954825b68b3514a2b84b628faaf64f20373294eb8b6489d748: Status 404 returned error can't find the container with id dc11713005807e954825b68b3514a2b84b628faaf64f20373294eb8b6489d748 Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.557847 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.579194 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.601232 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.618926 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.636389 4786 request.go:700] Waited for 1.014625603s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/secrets?fieldSelector=metadata.name%3Dpprof-cert&limit=500&resourceVersion=0 Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.638792 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 
08:46:12.659393 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.678947 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.703030 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.718011 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.739211 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.764009 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.777906 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.798679 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.819132 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.845102 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.858417 4786 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.878761 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.898721 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.919178 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.937405 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.958870 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.978634 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 09 08:46:12 crc kubenswrapper[4786]: I1209 08:46:12.998991 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.018247 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.039023 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.058827 4786 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.077953 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.098088 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.139355 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.140231 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.158952 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.179238 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.197894 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.218871 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.237969 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.259719 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.278654 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6tprk" event={"ID":"681d6d06-4cbc-4fe3-8a14-860764cc25d4","Type":"ContainerStarted","Data":"91c9d769c7cbc5bfa99189f55553dfae826a376d05f48752aedf408c45b80f86"} Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.278728 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6tprk" event={"ID":"681d6d06-4cbc-4fe3-8a14-860764cc25d4","Type":"ContainerStarted","Data":"dc11713005807e954825b68b3514a2b84b628faaf64f20373294eb8b6489d748"} Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.278669 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.280826 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-bbqcc" event={"ID":"eee2fbc3-8db7-4a8d-befa-2c692b6b41be","Type":"ContainerStarted","Data":"454d497995424c4cf5420c22cde73ed389eeddd825cbd3c9f3f12e7b02497d9d"} Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.280860 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-bbqcc" event={"ID":"eee2fbc3-8db7-4a8d-befa-2c692b6b41be","Type":"ContainerStarted","Data":"80ccd1bf0fd0d5670095b09296e1375d036af5380003462b3881f33c557ab122"} Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.281445 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-bbqcc" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.283364 4786 patch_prober.go:28] interesting pod/console-operator-58897d9998-bbqcc container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Dec 09 08:46:13 
crc kubenswrapper[4786]: I1209 08:46:13.283616 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-bbqcc" podUID="eee2fbc3-8db7-4a8d-befa-2c692b6b41be" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.283816 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dklgh" event={"ID":"138bf803-28c5-4a55-a0e4-48b3b2069673","Type":"ContainerStarted","Data":"fb2249068f4f012418db748dfca5b35346a7eb6b2d90edb87654bce7a0fbcfe5"} Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.283862 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dklgh" event={"ID":"138bf803-28c5-4a55-a0e4-48b3b2069673","Type":"ContainerStarted","Data":"28d8beed1bf7be28f107b8e3c9b7e8a0a88547cd8e98e5abe3dc03ed9fc6765c"} Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.284058 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-dklgh" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.286815 4786 generic.go:334] "Generic (PLEG): container finished" podID="25aee36a-c18c-4faf-842e-9208e234600d" containerID="e2915be5b2b9e7394f532ccab459e9c329b6f1ede4d50ea299358e74b0192982" exitCode=0 Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.286922 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jxwdv" event={"ID":"25aee36a-c18c-4faf-842e-9208e234600d","Type":"ContainerDied","Data":"e2915be5b2b9e7394f532ccab459e9c329b6f1ede4d50ea299358e74b0192982"} Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.286957 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jxwdv" event={"ID":"25aee36a-c18c-4faf-842e-9208e234600d","Type":"ContainerStarted","Data":"d14790a7aa21497ec4c6a69a717f1f914e3f151fafda8183f18b2dad3bfb4d90"} Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.292311 4786 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-dklgh container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.292389 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-dklgh" podUID="138bf803-28c5-4a55-a0e4-48b3b2069673" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.293587 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qz5pt" event={"ID":"f6cfc59b-4750-4237-b606-0525b0188e67","Type":"ContainerStarted","Data":"ce5a8af0676ae2496f9de283a3047880281739951c813ada43ca2437c997491f"} Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.293648 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qz5pt" event={"ID":"f6cfc59b-4750-4237-b606-0525b0188e67","Type":"ContainerStarted","Data":"78a3bc7b29cad8f43b7a3fd287da5f632aa67acfb105dc66d6284a0d02d5a6d2"} Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.293846 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qz5pt" 
event={"ID":"f6cfc59b-4750-4237-b606-0525b0188e67","Type":"ContainerStarted","Data":"c5eaa161ad55fa67f5b44b0a38da5871b01c003b1ff735794b0a85aa8a506e19"} Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.295416 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-crld4" event={"ID":"d45771b0-7251-4e48-83dc-49322a76677c","Type":"ContainerStarted","Data":"7cdd09304ca3c4db9802c3c7d74719893a168749bea9674c5c7589574940050d"} Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.295479 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-crld4" event={"ID":"d45771b0-7251-4e48-83dc-49322a76677c","Type":"ContainerStarted","Data":"237258b77eccb606d57ccddf65914afa8c3c96a5419530fb9f8dde9d10bf8830"} Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.295798 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-crld4" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.298716 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.298850 4786 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-crld4 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.298925 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-crld4" podUID="d45771b0-7251-4e48-83dc-49322a76677c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.17:8443/healthz\": 
dial tcp 10.217.0.17:8443: connect: connection refused" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.319213 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.346168 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.358750 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.380951 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.402634 4786 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.418731 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.438713 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.458700 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.479087 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.498471 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.542633 4786 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-zbfh7\" (UniqueName: \"kubernetes.io/projected/486f2347-35fe-4d75-ae30-5074d59b49a1-kube-api-access-zbfh7\") pod \"authentication-operator-69f744f599-kzf64\" (UID: \"486f2347-35fe-4d75-ae30-5074d59b49a1\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-kzf64" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.564369 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vrgg8\" (UniqueName: \"kubernetes.io/projected/f011e81c-b463-4190-9ae5-73703f390ae8-kube-api-access-vrgg8\") pod \"apiserver-76f77b778f-qt47g\" (UID: \"f011e81c-b463-4190-9ae5-73703f390ae8\") " pod="openshift-apiserver/apiserver-76f77b778f-qt47g" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.596628 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.598644 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.608737 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8vxw\" (UniqueName: \"kubernetes.io/projected/4364d199-81a4-4500-990d-9f2bcdc66186-kube-api-access-f8vxw\") pod \"downloads-7954f5f757-6jxs9\" (UID: \"4364d199-81a4-4500-990d-9f2bcdc66186\") " pod="openshift-console/downloads-7954f5f757-6jxs9" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.620170 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.636938 4786 request.go:700] Waited for 1.856116141s due to client-side throttling, not priority and fairness, request: PATCH:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-config-operator/pods/openshift-config-operator-7777fb866f-2zrxb/status Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 
08:46:13.718788 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.718788 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.756744 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-qt47g" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.762188 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-6jxs9" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.772235 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-kzf64" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.806316 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/742a103a-06e2-4d52-8c04-54681052838d-bound-sa-token\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.806391 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lszt\" (UniqueName: \"kubernetes.io/projected/bbd9538f-43ff-4c20-80ab-dcf783b7a558-kube-api-access-7lszt\") pod \"console-f9d7485db-sgqjs\" (UID: \"bbd9538f-43ff-4c20-80ab-dcf783b7a558\") " pod="openshift-console/console-f9d7485db-sgqjs" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.806461 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/90e1876d-da75-48e1-b63a-a084de277f84-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-b678t\" (UID: \"90e1876d-da75-48e1-b63a-a084de277f84\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b678t" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.806514 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/742a103a-06e2-4d52-8c04-54681052838d-trusted-ca\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.806572 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8101c95a-1629-4b71-b12e-0fa374c9b09a-images\") pod \"machine-api-operator-5694c8668f-hp2jj\" (UID: \"8101c95a-1629-4b71-b12e-0fa374c9b09a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hp2jj" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.806598 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-hk4xf\" (UID: \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-hk4xf" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.806643 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-hk4xf\" (UID: \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-hk4xf" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.806795 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8101c95a-1629-4b71-b12e-0fa374c9b09a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hp2jj\" (UID: \"8101c95a-1629-4b71-b12e-0fa374c9b09a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hp2jj" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.807015 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-hk4xf\" (UID: \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-hk4xf" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.807055 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/904f5e29-a237-4ddc-b97a-ceb88f179f6b-metrics-tls\") pod \"ingress-operator-5b745b69d9-lspbg\" (UID: \"904f5e29-a237-4ddc-b97a-ceb88f179f6b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lspbg" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.807132 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bbd9538f-43ff-4c20-80ab-dcf783b7a558-console-oauth-config\") pod \"console-f9d7485db-sgqjs\" (UID: \"bbd9538f-43ff-4c20-80ab-dcf783b7a558\") " pod="openshift-console/console-f9d7485db-sgqjs" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.807214 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" 
(UniqueName: \"kubernetes.io/configmap/dfa8f63d-9ec9-466a-bd6e-1bd8cfbc2030-etcd-ca\") pod \"etcd-operator-b45778765-5hpj2\" (UID: \"dfa8f63d-9ec9-466a-bd6e-1bd8cfbc2030\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5hpj2" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.807310 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfa8f63d-9ec9-466a-bd6e-1bd8cfbc2030-config\") pod \"etcd-operator-b45778765-5hpj2\" (UID: \"dfa8f63d-9ec9-466a-bd6e-1bd8cfbc2030\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5hpj2" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.807340 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-hk4xf\" (UID: \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-hk4xf" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.807392 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/742a103a-06e2-4d52-8c04-54681052838d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.807449 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dfa8f63d-9ec9-466a-bd6e-1bd8cfbc2030-serving-cert\") pod \"etcd-operator-b45778765-5hpj2\" (UID: \"dfa8f63d-9ec9-466a-bd6e-1bd8cfbc2030\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5hpj2" Dec 09 08:46:13 crc kubenswrapper[4786]: 
I1209 08:46:13.807483 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-audit-policies\") pod \"oauth-openshift-558db77b4-hk4xf\" (UID: \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-hk4xf" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.807542 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bbd9538f-43ff-4c20-80ab-dcf783b7a558-console-config\") pod \"console-f9d7485db-sgqjs\" (UID: \"bbd9538f-43ff-4c20-80ab-dcf783b7a558\") " pod="openshift-console/console-f9d7485db-sgqjs" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.807593 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dfa8f63d-9ec9-466a-bd6e-1bd8cfbc2030-etcd-client\") pod \"etcd-operator-b45778765-5hpj2\" (UID: \"dfa8f63d-9ec9-466a-bd6e-1bd8cfbc2030\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5hpj2" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.807636 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-hk4xf\" (UID: \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-hk4xf" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.807676 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d905424-2cf5-4338-9d16-91487e12cad3-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-4zfn7\" (UID: 
\"0d905424-2cf5-4338-9d16-91487e12cad3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4zfn7" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.808074 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp8pd\" (UniqueName: \"kubernetes.io/projected/dfa8f63d-9ec9-466a-bd6e-1bd8cfbc2030-kube-api-access-dp8pd\") pod \"etcd-operator-b45778765-5hpj2\" (UID: \"dfa8f63d-9ec9-466a-bd6e-1bd8cfbc2030\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5hpj2" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.808605 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/742a103a-06e2-4d52-8c04-54681052838d-registry-certificates\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.808633 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hghgp\" (UniqueName: \"kubernetes.io/projected/8101c95a-1629-4b71-b12e-0fa374c9b09a-kube-api-access-hghgp\") pod \"machine-api-operator-5694c8668f-hp2jj\" (UID: \"8101c95a-1629-4b71-b12e-0fa374c9b09a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hp2jj" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.808658 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/dfa8f63d-9ec9-466a-bd6e-1bd8cfbc2030-etcd-service-ca\") pod \"etcd-operator-b45778765-5hpj2\" (UID: \"dfa8f63d-9ec9-466a-bd6e-1bd8cfbc2030\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5hpj2" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.808691 4786 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89dc04b8-a46f-4df1-afde-f3d4d4b169ef-serving-cert\") pod \"openshift-config-operator-7777fb866f-2zrxb\" (UID: \"89dc04b8-a46f-4df1-afde-f3d4d4b169ef\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2zrxb" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.808709 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/89dc04b8-a46f-4df1-afde-f3d4d4b169ef-available-featuregates\") pod \"openshift-config-operator-7777fb866f-2zrxb\" (UID: \"89dc04b8-a46f-4df1-afde-f3d4d4b169ef\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2zrxb" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.808744 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8775t\" (UniqueName: \"kubernetes.io/projected/742a103a-06e2-4d52-8c04-54681052838d-kube-api-access-8775t\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.808807 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bbd9538f-43ff-4c20-80ab-dcf783b7a558-service-ca\") pod \"console-f9d7485db-sgqjs\" (UID: \"bbd9538f-43ff-4c20-80ab-dcf783b7a558\") " pod="openshift-console/console-f9d7485db-sgqjs" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.808834 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/904f5e29-a237-4ddc-b97a-ceb88f179f6b-bound-sa-token\") pod \"ingress-operator-5b745b69d9-lspbg\" (UID: 
\"904f5e29-a237-4ddc-b97a-ceb88f179f6b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lspbg" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.808897 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/742a103a-06e2-4d52-8c04-54681052838d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.808930 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnqkr\" (UniqueName: \"kubernetes.io/projected/89dc04b8-a46f-4df1-afde-f3d4d4b169ef-kube-api-access-qnqkr\") pod \"openshift-config-operator-7777fb866f-2zrxb\" (UID: \"89dc04b8-a46f-4df1-afde-f3d4d4b169ef\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2zrxb" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.808952 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtw8z\" (UniqueName: \"kubernetes.io/projected/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-kube-api-access-rtw8z\") pod \"oauth-openshift-558db77b4-hk4xf\" (UID: \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-hk4xf" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.808976 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bbd9538f-43ff-4c20-80ab-dcf783b7a558-trusted-ca-bundle\") pod \"console-f9d7485db-sgqjs\" (UID: \"bbd9538f-43ff-4c20-80ab-dcf783b7a558\") " pod="openshift-console/console-f9d7485db-sgqjs" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.808999 4786 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-audit-dir\") pod \"oauth-openshift-558db77b4-hk4xf\" (UID: \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-hk4xf" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.809046 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-hk4xf\" (UID: \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-hk4xf" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.809084 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bmjm\" (UniqueName: \"kubernetes.io/projected/904f5e29-a237-4ddc-b97a-ceb88f179f6b-kube-api-access-8bmjm\") pod \"ingress-operator-5b745b69d9-lspbg\" (UID: \"904f5e29-a237-4ddc-b97a-ceb88f179f6b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lspbg" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.809143 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bbd9538f-43ff-4c20-80ab-dcf783b7a558-oauth-serving-cert\") pod \"console-f9d7485db-sgqjs\" (UID: \"bbd9538f-43ff-4c20-80ab-dcf783b7a558\") " pod="openshift-console/console-f9d7485db-sgqjs" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.809218 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/90e1876d-da75-48e1-b63a-a084de277f84-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-b678t\" (UID: 
\"90e1876d-da75-48e1-b63a-a084de277f84\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b678t" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.809244 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-hk4xf\" (UID: \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-hk4xf" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.809265 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-hk4xf\" (UID: \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-hk4xf" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.809391 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8101c95a-1629-4b71-b12e-0fa374c9b09a-config\") pod \"machine-api-operator-5694c8668f-hp2jj\" (UID: \"8101c95a-1629-4b71-b12e-0fa374c9b09a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hp2jj" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.810608 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.810712 4786 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d905424-2cf5-4338-9d16-91487e12cad3-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-4zfn7\" (UID: \"0d905424-2cf5-4338-9d16-91487e12cad3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4zfn7" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.810877 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-hk4xf\" (UID: \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-hk4xf" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.810922 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-hk4xf\" (UID: \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-hk4xf" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.810969 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bbd9538f-43ff-4c20-80ab-dcf783b7a558-console-serving-cert\") pod \"console-f9d7485db-sgqjs\" (UID: \"bbd9538f-43ff-4c20-80ab-dcf783b7a558\") " pod="openshift-console/console-f9d7485db-sgqjs" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.811000 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/90e1876d-da75-48e1-b63a-a084de277f84-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-b678t\" (UID: \"90e1876d-da75-48e1-b63a-a084de277f84\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b678t" Dec 09 08:46:13 crc kubenswrapper[4786]: E1209 08:46:13.811112 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 08:46:14.311088321 +0000 UTC m=+140.194709547 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.811147 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrj7h\" (UniqueName: \"kubernetes.io/projected/90e1876d-da75-48e1-b63a-a084de277f84-kube-api-access-mrj7h\") pod \"cluster-image-registry-operator-dc59b4c8b-b678t\" (UID: \"90e1876d-da75-48e1-b63a-a084de277f84\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b678t" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.811492 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-hk4xf\" (UID: \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-hk4xf" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.811538 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/904f5e29-a237-4ddc-b97a-ceb88f179f6b-trusted-ca\") pod \"ingress-operator-5b745b69d9-lspbg\" (UID: \"904f5e29-a237-4ddc-b97a-ceb88f179f6b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lspbg" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.811651 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/742a103a-06e2-4d52-8c04-54681052838d-registry-tls\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.811705 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvbfn\" (UniqueName: \"kubernetes.io/projected/0d905424-2cf5-4338-9d16-91487e12cad3-kube-api-access-nvbfn\") pod \"openshift-controller-manager-operator-756b6f6bc6-4zfn7\" (UID: \"0d905424-2cf5-4338-9d16-91487e12cad3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4zfn7" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.913236 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.913520 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" 
(UniqueName: \"kubernetes.io/configmap/6e47b7b3-37f6-4a49-8080-27304934e01d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-r5zvb\" (UID: \"6e47b7b3-37f6-4a49-8080-27304934e01d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r5zvb" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.913559 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvbfn\" (UniqueName: \"kubernetes.io/projected/0d905424-2cf5-4338-9d16-91487e12cad3-kube-api-access-nvbfn\") pod \"openshift-controller-manager-operator-756b6f6bc6-4zfn7\" (UID: \"0d905424-2cf5-4338-9d16-91487e12cad3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4zfn7" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.913582 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvg66\" (UniqueName: \"kubernetes.io/projected/2d491160-93fe-409b-b488-40f6dc0d4ba3-kube-api-access-rvg66\") pod \"service-ca-9c57cc56f-c9v4k\" (UID: \"2d491160-93fe-409b-b488-40f6dc0d4ba3\") " pod="openshift-service-ca/service-ca-9c57cc56f-c9v4k" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.913622 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/742a103a-06e2-4d52-8c04-54681052838d-registry-tls\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.913653 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/742a103a-06e2-4d52-8c04-54681052838d-bound-sa-token\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.913675 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/23576d5f-707f-48a1-8db4-cddfd1c0e754-proxy-tls\") pod \"machine-config-controller-84d6567774-2mdnq\" (UID: \"23576d5f-707f-48a1-8db4-cddfd1c0e754\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2mdnq" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.913700 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/2d491160-93fe-409b-b488-40f6dc0d4ba3-signing-cabundle\") pod \"service-ca-9c57cc56f-c9v4k\" (UID: \"2d491160-93fe-409b-b488-40f6dc0d4ba3\") " pod="openshift-service-ca/service-ca-9c57cc56f-c9v4k" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.913721 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6e47b7b3-37f6-4a49-8080-27304934e01d-proxy-tls\") pod \"machine-config-operator-74547568cd-r5zvb\" (UID: \"6e47b7b3-37f6-4a49-8080-27304934e01d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r5zvb" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.913743 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-hk4xf\" (UID: \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-hk4xf" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.913767 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/8101c95a-1629-4b71-b12e-0fa374c9b09a-images\") pod \"machine-api-operator-5694c8668f-hp2jj\" (UID: \"8101c95a-1629-4b71-b12e-0fa374c9b09a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hp2jj" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.913789 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf80da97-b492-4457-a51a-3b4474436625-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8zhmp\" (UID: \"bf80da97-b492-4457-a51a-3b4474436625\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8zhmp" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.913820 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/904f5e29-a237-4ddc-b97a-ceb88f179f6b-metrics-tls\") pod \"ingress-operator-5b745b69d9-lspbg\" (UID: \"904f5e29-a237-4ddc-b97a-ceb88f179f6b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lspbg" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.913840 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a602c774-b5c1-4a4e-9aa8-2952c932346e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-z2gs4\" (UID: \"a602c774-b5c1-4a4e-9aa8-2952c932346e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-z2gs4" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.913862 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b128968a-caa6-46be-be15-79971a310e5c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-thfng\" (UID: \"b128968a-caa6-46be-be15-79971a310e5c\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-thfng" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.913887 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/dfa8f63d-9ec9-466a-bd6e-1bd8cfbc2030-etcd-ca\") pod \"etcd-operator-b45778765-5hpj2\" (UID: \"dfa8f63d-9ec9-466a-bd6e-1bd8cfbc2030\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5hpj2" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.913911 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/073a1588-07ff-438c-9f39-468ee8606c52-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-d4nsz\" (UID: \"073a1588-07ff-438c-9f39-468ee8606c52\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d4nsz" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.913930 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/33c3f956-9e12-43ef-91b9-565bbec4ae50-certs\") pod \"machine-config-server-vkn5t\" (UID: \"33c3f956-9e12-43ef-91b9-565bbec4ae50\") " pod="openshift-machine-config-operator/machine-config-server-vkn5t" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.913952 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8z2n\" (UniqueName: \"kubernetes.io/projected/ee356cb2-0c94-4402-be8c-e6895f39de08-kube-api-access-t8z2n\") pod \"router-default-5444994796-wp4v8\" (UID: \"ee356cb2-0c94-4402-be8c-e6895f39de08\") " pod="openshift-ingress/router-default-5444994796-wp4v8" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.913973 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/924456b2-4aa3-4e7c-8d80-667783b96551-secret-volume\") pod \"collect-profiles-29421165-4fkk9\" (UID: \"924456b2-4aa3-4e7c-8d80-667783b96551\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421165-4fkk9" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.914005 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf80da97-b492-4457-a51a-3b4474436625-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8zhmp\" (UID: \"bf80da97-b492-4457-a51a-3b4474436625\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8zhmp" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.914028 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/891665dd-9904-4246-86c8-fabead4c8606-plugins-dir\") pod \"csi-hostpathplugin-hgb5b\" (UID: \"891665dd-9904-4246-86c8-fabead4c8606\") " pod="hostpath-provisioner/csi-hostpathplugin-hgb5b" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.914047 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b128968a-caa6-46be-be15-79971a310e5c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-thfng\" (UID: \"b128968a-caa6-46be-be15-79971a310e5c\") " pod="openshift-marketplace/marketplace-operator-79b997595-thfng" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.914070 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfa8f63d-9ec9-466a-bd6e-1bd8cfbc2030-config\") pod \"etcd-operator-b45778765-5hpj2\" (UID: \"dfa8f63d-9ec9-466a-bd6e-1bd8cfbc2030\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5hpj2" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 
08:46:13.914090 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/891665dd-9904-4246-86c8-fabead4c8606-registration-dir\") pod \"csi-hostpathplugin-hgb5b\" (UID: \"891665dd-9904-4246-86c8-fabead4c8606\") " pod="hostpath-provisioner/csi-hostpathplugin-hgb5b" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.914110 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dfa8f63d-9ec9-466a-bd6e-1bd8cfbc2030-serving-cert\") pod \"etcd-operator-b45778765-5hpj2\" (UID: \"dfa8f63d-9ec9-466a-bd6e-1bd8cfbc2030\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5hpj2" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.914133 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-audit-policies\") pod \"oauth-openshift-558db77b4-hk4xf\" (UID: \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-hk4xf" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.914156 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d83fbdd0-b9fa-4a1f-ae43-929109b5cb30-profile-collector-cert\") pod \"catalog-operator-68c6474976-hprgt\" (UID: \"d83fbdd0-b9fa-4a1f-ae43-929109b5cb30\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hprgt" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.914179 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff911481-36aa-4c42-9139-8fdb2e1e255f-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-f277g\" (UID: \"ff911481-36aa-4c42-9139-8fdb2e1e255f\") 
" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f277g" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.914213 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dp8pd\" (UniqueName: \"kubernetes.io/projected/dfa8f63d-9ec9-466a-bd6e-1bd8cfbc2030-kube-api-access-dp8pd\") pod \"etcd-operator-b45778765-5hpj2\" (UID: \"dfa8f63d-9ec9-466a-bd6e-1bd8cfbc2030\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5hpj2" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.914233 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/073a1588-07ff-438c-9f39-468ee8606c52-config\") pod \"kube-controller-manager-operator-78b949d7b-d4nsz\" (UID: \"073a1588-07ff-438c-9f39-468ee8606c52\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d4nsz" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.914254 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hln79\" (UniqueName: \"kubernetes.io/projected/ad0a258a-10ff-4783-87dc-0718a2b1e224-kube-api-access-hln79\") pod \"packageserver-d55dfcdfc-mqqhj\" (UID: \"ad0a258a-10ff-4783-87dc-0718a2b1e224\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mqqhj" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.914290 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hghgp\" (UniqueName: \"kubernetes.io/projected/8101c95a-1629-4b71-b12e-0fa374c9b09a-kube-api-access-hghgp\") pod \"machine-api-operator-5694c8668f-hp2jj\" (UID: \"8101c95a-1629-4b71-b12e-0fa374c9b09a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hp2jj" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.914319 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/dfa8f63d-9ec9-466a-bd6e-1bd8cfbc2030-etcd-service-ca\") pod \"etcd-operator-b45778765-5hpj2\" (UID: \"dfa8f63d-9ec9-466a-bd6e-1bd8cfbc2030\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5hpj2" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.914343 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qgz9\" (UniqueName: \"kubernetes.io/projected/3548b663-fb10-488a-bb92-02388996febd-kube-api-access-2qgz9\") pod \"cluster-samples-operator-665b6dd947-rkhzg\" (UID: \"3548b663-fb10-488a-bb92-02388996febd\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rkhzg" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.914366 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf80da97-b492-4457-a51a-3b4474436625-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8zhmp\" (UID: \"bf80da97-b492-4457-a51a-3b4474436625\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8zhmp" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.914388 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bbd9538f-43ff-4c20-80ab-dcf783b7a558-service-ca\") pod \"console-f9d7485db-sgqjs\" (UID: \"bbd9538f-43ff-4c20-80ab-dcf783b7a558\") " pod="openshift-console/console-f9d7485db-sgqjs" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.914410 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfdzn\" (UniqueName: \"kubernetes.io/projected/33c3f956-9e12-43ef-91b9-565bbec4ae50-kube-api-access-jfdzn\") pod \"machine-config-server-vkn5t\" (UID: \"33c3f956-9e12-43ef-91b9-565bbec4ae50\") " 
pod="openshift-machine-config-operator/machine-config-server-vkn5t" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.914464 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-audit-dir\") pod \"oauth-openshift-558db77b4-hk4xf\" (UID: \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-hk4xf" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.914506 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwmkh\" (UniqueName: \"kubernetes.io/projected/d4d6a434-97e8-469a-9e85-9bbc494e8918-kube-api-access-wwmkh\") pod \"service-ca-operator-777779d784-976g7\" (UID: \"d4d6a434-97e8-469a-9e85-9bbc494e8918\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-976g7" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.914526 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee356cb2-0c94-4402-be8c-e6895f39de08-service-ca-bundle\") pod \"router-default-5444994796-wp4v8\" (UID: \"ee356cb2-0c94-4402-be8c-e6895f39de08\") " pod="openshift-ingress/router-default-5444994796-wp4v8" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.914592 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bbd9538f-43ff-4c20-80ab-dcf783b7a558-oauth-serving-cert\") pod \"console-f9d7485db-sgqjs\" (UID: \"bbd9538f-43ff-4c20-80ab-dcf783b7a558\") " pod="openshift-console/console-f9d7485db-sgqjs" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.914647 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: 
\"kubernetes.io/host-path/891665dd-9904-4246-86c8-fabead4c8606-csi-data-dir\") pod \"csi-hostpathplugin-hgb5b\" (UID: \"891665dd-9904-4246-86c8-fabead4c8606\") " pod="hostpath-provisioner/csi-hostpathplugin-hgb5b" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.914672 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzhc2\" (UniqueName: \"kubernetes.io/projected/a602c774-b5c1-4a4e-9aa8-2952c932346e-kube-api-access-lzhc2\") pod \"multus-admission-controller-857f4d67dd-z2gs4\" (UID: \"a602c774-b5c1-4a4e-9aa8-2952c932346e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-z2gs4" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.914717 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqx82\" (UniqueName: \"kubernetes.io/projected/23055283-72c9-4aa6-8336-a77de105a7f6-kube-api-access-dqx82\") pod \"kube-storage-version-migrator-operator-b67b599dd-r9n44\" (UID: \"23055283-72c9-4aa6-8336-a77de105a7f6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r9n44" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.914750 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8101c95a-1629-4b71-b12e-0fa374c9b09a-config\") pod \"machine-api-operator-5694c8668f-hp2jj\" (UID: \"8101c95a-1629-4b71-b12e-0fa374c9b09a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hp2jj" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.914770 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23055283-72c9-4aa6-8336-a77de105a7f6-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-r9n44\" (UID: \"23055283-72c9-4aa6-8336-a77de105a7f6\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r9n44" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.914794 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ad0a258a-10ff-4783-87dc-0718a2b1e224-webhook-cert\") pod \"packageserver-d55dfcdfc-mqqhj\" (UID: \"ad0a258a-10ff-4783-87dc-0718a2b1e224\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mqqhj" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.914818 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kqsh\" (UniqueName: \"kubernetes.io/projected/f454d7d8-776e-4070-8f46-4d9b954fc5c1-kube-api-access-6kqsh\") pod \"dns-operator-744455d44c-mg4hh\" (UID: \"f454d7d8-776e-4070-8f46-4d9b954fc5c1\") " pod="openshift-dns-operator/dns-operator-744455d44c-mg4hh" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.914844 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-hk4xf\" (UID: \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-hk4xf" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.914876 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-hk4xf\" (UID: \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-hk4xf" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.914916 4786 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d83fbdd0-b9fa-4a1f-ae43-929109b5cb30-srv-cert\") pod \"catalog-operator-68c6474976-hprgt\" (UID: \"d83fbdd0-b9fa-4a1f-ae43-929109b5cb30\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hprgt" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.914946 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9dkm\" (UniqueName: \"kubernetes.io/projected/d83fbdd0-b9fa-4a1f-ae43-929109b5cb30-kube-api-access-s9dkm\") pod \"catalog-operator-68c6474976-hprgt\" (UID: \"d83fbdd0-b9fa-4a1f-ae43-929109b5cb30\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hprgt" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.914988 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl862\" (UniqueName: \"kubernetes.io/projected/ce7b8eb5-3cc7-4de8-921e-5249b393ec93-kube-api-access-fl862\") pod \"dns-default-6xmz9\" (UID: \"ce7b8eb5-3cc7-4de8-921e-5249b393ec93\") " pod="openshift-dns/dns-default-6xmz9" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.915048 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lszt\" (UniqueName: \"kubernetes.io/projected/bbd9538f-43ff-4c20-80ab-dcf783b7a558-kube-api-access-7lszt\") pod \"console-f9d7485db-sgqjs\" (UID: \"bbd9538f-43ff-4c20-80ab-dcf783b7a558\") " pod="openshift-console/console-f9d7485db-sgqjs" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.915081 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/90e1876d-da75-48e1-b63a-a084de277f84-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-b678t\" (UID: \"90e1876d-da75-48e1-b63a-a084de277f84\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b678t" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.915112 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ce7b8eb5-3cc7-4de8-921e-5249b393ec93-metrics-tls\") pod \"dns-default-6xmz9\" (UID: \"ce7b8eb5-3cc7-4de8-921e-5249b393ec93\") " pod="openshift-dns/dns-default-6xmz9" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.915141 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/742a103a-06e2-4d52-8c04-54681052838d-trusted-ca\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.915167 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-hk4xf\" (UID: \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-hk4xf" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.915189 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/42fb68ba-836b-45d9-b680-15b7757c08ec-srv-cert\") pod \"olm-operator-6b444d44fb-hjqvk\" (UID: \"42fb68ba-836b-45d9-b680-15b7757c08ec\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjqvk" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.915214 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hplp9\" (UniqueName: 
\"kubernetes.io/projected/4f053962-44b1-4d42-9452-aa5a80eb1e18-kube-api-access-hplp9\") pod \"package-server-manager-789f6589d5-6j88p\" (UID: \"4f053962-44b1-4d42-9452-aa5a80eb1e18\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6j88p" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.915252 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/42fb68ba-836b-45d9-b680-15b7757c08ec-profile-collector-cert\") pod \"olm-operator-6b444d44fb-hjqvk\" (UID: \"42fb68ba-836b-45d9-b680-15b7757c08ec\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjqvk" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.915277 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8101c95a-1629-4b71-b12e-0fa374c9b09a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hp2jj\" (UID: \"8101c95a-1629-4b71-b12e-0fa374c9b09a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hp2jj" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.915302 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-hk4xf\" (UID: \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-hk4xf" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.915338 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bbd9538f-43ff-4c20-80ab-dcf783b7a558-console-oauth-config\") pod \"console-f9d7485db-sgqjs\" (UID: \"bbd9538f-43ff-4c20-80ab-dcf783b7a558\") " pod="openshift-console/console-f9d7485db-sgqjs" Dec 09 
08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.915359 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ee356cb2-0c94-4402-be8c-e6895f39de08-stats-auth\") pod \"router-default-5444994796-wp4v8\" (UID: \"ee356cb2-0c94-4402-be8c-e6895f39de08\") " pod="openshift-ingress/router-default-5444994796-wp4v8" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.915380 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4f053962-44b1-4d42-9452-aa5a80eb1e18-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-6j88p\" (UID: \"4f053962-44b1-4d42-9452-aa5a80eb1e18\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6j88p" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.915402 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4d6a434-97e8-469a-9e85-9bbc494e8918-serving-cert\") pod \"service-ca-operator-777779d784-976g7\" (UID: \"d4d6a434-97e8-469a-9e85-9bbc494e8918\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-976g7" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.915449 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-hk4xf\" (UID: \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-hk4xf" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.915483 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/742a103a-06e2-4d52-8c04-54681052838d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.915505 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/891665dd-9904-4246-86c8-fabead4c8606-mountpoint-dir\") pod \"csi-hostpathplugin-hgb5b\" (UID: \"891665dd-9904-4246-86c8-fabead4c8606\") " pod="hostpath-provisioner/csi-hostpathplugin-hgb5b" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.915535 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q69gw\" (UniqueName: \"kubernetes.io/projected/b128968a-caa6-46be-be15-79971a310e5c-kube-api-access-q69gw\") pod \"marketplace-operator-79b997595-thfng\" (UID: \"b128968a-caa6-46be-be15-79971a310e5c\") " pod="openshift-marketplace/marketplace-operator-79b997595-thfng" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.915566 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bbd9538f-43ff-4c20-80ab-dcf783b7a558-console-config\") pod \"console-f9d7485db-sgqjs\" (UID: \"bbd9538f-43ff-4c20-80ab-dcf783b7a558\") " pod="openshift-console/console-f9d7485db-sgqjs" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.915594 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-hk4xf\" (UID: \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-hk4xf" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.915631 4786 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff911481-36aa-4c42-9139-8fdb2e1e255f-config\") pod \"kube-apiserver-operator-766d6c64bb-f277g\" (UID: \"ff911481-36aa-4c42-9139-8fdb2e1e255f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f277g" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.915657 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ad0a258a-10ff-4783-87dc-0718a2b1e224-tmpfs\") pod \"packageserver-d55dfcdfc-mqqhj\" (UID: \"ad0a258a-10ff-4783-87dc-0718a2b1e224\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mqqhj" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.915679 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dfa8f63d-9ec9-466a-bd6e-1bd8cfbc2030-etcd-client\") pod \"etcd-operator-b45778765-5hpj2\" (UID: \"dfa8f63d-9ec9-466a-bd6e-1bd8cfbc2030\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5hpj2" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.915712 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d905424-2cf5-4338-9d16-91487e12cad3-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-4zfn7\" (UID: \"0d905424-2cf5-4338-9d16-91487e12cad3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4zfn7" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.915733 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23055283-72c9-4aa6-8336-a77de105a7f6-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-r9n44\" (UID: 
\"23055283-72c9-4aa6-8336-a77de105a7f6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r9n44" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.915757 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw7kr\" (UniqueName: \"kubernetes.io/projected/23576d5f-707f-48a1-8db4-cddfd1c0e754-kube-api-access-mw7kr\") pod \"machine-config-controller-84d6567774-2mdnq\" (UID: \"23576d5f-707f-48a1-8db4-cddfd1c0e754\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2mdnq" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.915778 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/924456b2-4aa3-4e7c-8d80-667783b96551-config-volume\") pod \"collect-profiles-29421165-4fkk9\" (UID: \"924456b2-4aa3-4e7c-8d80-667783b96551\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421165-4fkk9" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.915800 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff911481-36aa-4c42-9139-8fdb2e1e255f-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-f277g\" (UID: \"ff911481-36aa-4c42-9139-8fdb2e1e255f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f277g" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.915820 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6e47b7b3-37f6-4a49-8080-27304934e01d-images\") pod \"machine-config-operator-74547568cd-r5zvb\" (UID: \"6e47b7b3-37f6-4a49-8080-27304934e01d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r5zvb" Dec 09 08:46:13 crc 
kubenswrapper[4786]: I1209 08:46:13.915866 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/33c3f956-9e12-43ef-91b9-565bbec4ae50-node-bootstrap-token\") pod \"machine-config-server-vkn5t\" (UID: \"33c3f956-9e12-43ef-91b9-565bbec4ae50\") " pod="openshift-machine-config-operator/machine-config-server-vkn5t" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.915888 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/2d491160-93fe-409b-b488-40f6dc0d4ba3-signing-key\") pod \"service-ca-9c57cc56f-c9v4k\" (UID: \"2d491160-93fe-409b-b488-40f6dc0d4ba3\") " pod="openshift-service-ca/service-ca-9c57cc56f-c9v4k" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.915913 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/742a103a-06e2-4d52-8c04-54681052838d-registry-certificates\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.915933 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/766e2ebd-e852-4eab-bbdc-3cfe8a5ceb88-cert\") pod \"ingress-canary-ml4fq\" (UID: \"766e2ebd-e852-4eab-bbdc-3cfe8a5ceb88\") " pod="openshift-ingress-canary/ingress-canary-ml4fq" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.915952 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ad0a258a-10ff-4783-87dc-0718a2b1e224-apiservice-cert\") pod \"packageserver-d55dfcdfc-mqqhj\" (UID: \"ad0a258a-10ff-4783-87dc-0718a2b1e224\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mqqhj" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.915975 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g4rk\" (UniqueName: \"kubernetes.io/projected/924456b2-4aa3-4e7c-8d80-667783b96551-kube-api-access-8g4rk\") pod \"collect-profiles-29421165-4fkk9\" (UID: \"924456b2-4aa3-4e7c-8d80-667783b96551\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421165-4fkk9" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.916275 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89dc04b8-a46f-4df1-afde-f3d4d4b169ef-serving-cert\") pod \"openshift-config-operator-7777fb866f-2zrxb\" (UID: \"89dc04b8-a46f-4df1-afde-f3d4d4b169ef\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2zrxb" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.916306 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/89dc04b8-a46f-4df1-afde-f3d4d4b169ef-available-featuregates\") pod \"openshift-config-operator-7777fb866f-2zrxb\" (UID: \"89dc04b8-a46f-4df1-afde-f3d4d4b169ef\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2zrxb" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.916331 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92zfd\" (UniqueName: \"kubernetes.io/projected/667ac238-96a3-4f57-b308-d4d5693d40f2-kube-api-access-92zfd\") pod \"control-plane-machine-set-operator-78cbb6b69f-wnjtx\" (UID: \"667ac238-96a3-4f57-b308-d4d5693d40f2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wnjtx" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.916356 4786 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-8775t\" (UniqueName: \"kubernetes.io/projected/742a103a-06e2-4d52-8c04-54681052838d-kube-api-access-8775t\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.916395 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/904f5e29-a237-4ddc-b97a-ceb88f179f6b-bound-sa-token\") pod \"ingress-operator-5b745b69d9-lspbg\" (UID: \"904f5e29-a237-4ddc-b97a-ceb88f179f6b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lspbg" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.916418 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f454d7d8-776e-4070-8f46-4d9b954fc5c1-metrics-tls\") pod \"dns-operator-744455d44c-mg4hh\" (UID: \"f454d7d8-776e-4070-8f46-4d9b954fc5c1\") " pod="openshift-dns-operator/dns-operator-744455d44c-mg4hh" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.916462 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/667ac238-96a3-4f57-b308-d4d5693d40f2-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-wnjtx\" (UID: \"667ac238-96a3-4f57-b308-d4d5693d40f2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wnjtx" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.916502 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/742a103a-06e2-4d52-8c04-54681052838d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: 
\"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.916525 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnqkr\" (UniqueName: \"kubernetes.io/projected/89dc04b8-a46f-4df1-afde-f3d4d4b169ef-kube-api-access-qnqkr\") pod \"openshift-config-operator-7777fb866f-2zrxb\" (UID: \"89dc04b8-a46f-4df1-afde-f3d4d4b169ef\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2zrxb" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.916559 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtw8z\" (UniqueName: \"kubernetes.io/projected/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-kube-api-access-rtw8z\") pod \"oauth-openshift-558db77b4-hk4xf\" (UID: \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-hk4xf" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.916582 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/073a1588-07ff-438c-9f39-468ee8606c52-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-d4nsz\" (UID: \"073a1588-07ff-438c-9f39-468ee8606c52\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d4nsz" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.916605 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccllw\" (UniqueName: \"kubernetes.io/projected/42fb68ba-836b-45d9-b680-15b7757c08ec-kube-api-access-ccllw\") pod \"olm-operator-6b444d44fb-hjqvk\" (UID: \"42fb68ba-836b-45d9-b680-15b7757c08ec\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjqvk" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.916630 4786 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ee356cb2-0c94-4402-be8c-e6895f39de08-default-certificate\") pod \"router-default-5444994796-wp4v8\" (UID: \"ee356cb2-0c94-4402-be8c-e6895f39de08\") " pod="openshift-ingress/router-default-5444994796-wp4v8" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.916653 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bbd9538f-43ff-4c20-80ab-dcf783b7a558-trusted-ca-bundle\") pod \"console-f9d7485db-sgqjs\" (UID: \"bbd9538f-43ff-4c20-80ab-dcf783b7a558\") " pod="openshift-console/console-f9d7485db-sgqjs" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.916676 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-hk4xf\" (UID: \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-hk4xf" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.916712 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bmjm\" (UniqueName: \"kubernetes.io/projected/904f5e29-a237-4ddc-b97a-ceb88f179f6b-kube-api-access-8bmjm\") pod \"ingress-operator-5b745b69d9-lspbg\" (UID: \"904f5e29-a237-4ddc-b97a-ceb88f179f6b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lspbg" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.916737 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4d6a434-97e8-469a-9e85-9bbc494e8918-config\") pod \"service-ca-operator-777779d784-976g7\" (UID: \"d4d6a434-97e8-469a-9e85-9bbc494e8918\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-976g7" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.916784 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/891665dd-9904-4246-86c8-fabead4c8606-socket-dir\") pod \"csi-hostpathplugin-hgb5b\" (UID: \"891665dd-9904-4246-86c8-fabead4c8606\") " pod="hostpath-provisioner/csi-hostpathplugin-hgb5b" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.916824 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-hk4xf\" (UID: \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-hk4xf" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.916846 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-hk4xf\" (UID: \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-hk4xf" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.916871 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/23576d5f-707f-48a1-8db4-cddfd1c0e754-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-2mdnq\" (UID: \"23576d5f-707f-48a1-8db4-cddfd1c0e754\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2mdnq" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.916895 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-g9btm\" (UniqueName: \"kubernetes.io/projected/8da3ea4c-9921-4dc1-b63f-474753db5eb0-kube-api-access-g9btm\") pod \"migrator-59844c95c7-h4jkc\" (UID: \"8da3ea4c-9921-4dc1-b63f-474753db5eb0\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-h4jkc" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.916931 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/90e1876d-da75-48e1-b63a-a084de277f84-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-b678t\" (UID: \"90e1876d-da75-48e1-b63a-a084de277f84\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b678t" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.916981 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gzz7\" (UniqueName: \"kubernetes.io/projected/891665dd-9904-4246-86c8-fabead4c8606-kube-api-access-2gzz7\") pod \"csi-hostpathplugin-hgb5b\" (UID: \"891665dd-9904-4246-86c8-fabead4c8606\") " pod="hostpath-provisioner/csi-hostpathplugin-hgb5b" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.917006 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce7b8eb5-3cc7-4de8-921e-5249b393ec93-config-volume\") pod \"dns-default-6xmz9\" (UID: \"ce7b8eb5-3cc7-4de8-921e-5249b393ec93\") " pod="openshift-dns/dns-default-6xmz9" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.917029 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/3548b663-fb10-488a-bb92-02388996febd-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-rkhzg\" (UID: \"3548b663-fb10-488a-bb92-02388996febd\") " 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rkhzg" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.917064 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ee356cb2-0c94-4402-be8c-e6895f39de08-metrics-certs\") pod \"router-default-5444994796-wp4v8\" (UID: \"ee356cb2-0c94-4402-be8c-e6895f39de08\") " pod="openshift-ingress/router-default-5444994796-wp4v8" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.917087 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm449\" (UniqueName: \"kubernetes.io/projected/6e47b7b3-37f6-4a49-8080-27304934e01d-kube-api-access-bm449\") pod \"machine-config-operator-74547568cd-r5zvb\" (UID: \"6e47b7b3-37f6-4a49-8080-27304934e01d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r5zvb" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.917110 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75smg\" (UniqueName: \"kubernetes.io/projected/766e2ebd-e852-4eab-bbdc-3cfe8a5ceb88-kube-api-access-75smg\") pod \"ingress-canary-ml4fq\" (UID: \"766e2ebd-e852-4eab-bbdc-3cfe8a5ceb88\") " pod="openshift-ingress-canary/ingress-canary-ml4fq" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.917135 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d905424-2cf5-4338-9d16-91487e12cad3-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-4zfn7\" (UID: \"0d905424-2cf5-4338-9d16-91487e12cad3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4zfn7" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.917171 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bbd9538f-43ff-4c20-80ab-dcf783b7a558-console-serving-cert\") pod \"console-f9d7485db-sgqjs\" (UID: \"bbd9538f-43ff-4c20-80ab-dcf783b7a558\") " pod="openshift-console/console-f9d7485db-sgqjs" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.917193 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/90e1876d-da75-48e1-b63a-a084de277f84-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-b678t\" (UID: \"90e1876d-da75-48e1-b63a-a084de277f84\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b678t" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.917216 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrj7h\" (UniqueName: \"kubernetes.io/projected/90e1876d-da75-48e1-b63a-a084de277f84-kube-api-access-mrj7h\") pod \"cluster-image-registry-operator-dc59b4c8b-b678t\" (UID: \"90e1876d-da75-48e1-b63a-a084de277f84\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b678t" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.917240 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-hk4xf\" (UID: \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-hk4xf" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.917276 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/904f5e29-a237-4ddc-b97a-ceb88f179f6b-trusted-ca\") pod \"ingress-operator-5b745b69d9-lspbg\" (UID: \"904f5e29-a237-4ddc-b97a-ceb88f179f6b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lspbg" Dec 09 
08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.918743 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/904f5e29-a237-4ddc-b97a-ceb88f179f6b-trusted-ca\") pod \"ingress-operator-5b745b69d9-lspbg\" (UID: \"904f5e29-a237-4ddc-b97a-ceb88f179f6b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lspbg" Dec 09 08:46:13 crc kubenswrapper[4786]: E1209 08:46:13.918839 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:14.418821488 +0000 UTC m=+140.302442724 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.927685 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bbd9538f-43ff-4c20-80ab-dcf783b7a558-service-ca\") pod \"console-f9d7485db-sgqjs\" (UID: \"bbd9538f-43ff-4c20-80ab-dcf783b7a558\") " pod="openshift-console/console-f9d7485db-sgqjs" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.927967 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-audit-dir\") pod \"oauth-openshift-558db77b4-hk4xf\" (UID: \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-hk4xf" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.928797 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/90e1876d-da75-48e1-b63a-a084de277f84-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-b678t\" (UID: \"90e1876d-da75-48e1-b63a-a084de277f84\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b678t" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.929296 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bbd9538f-43ff-4c20-80ab-dcf783b7a558-console-serving-cert\") pod \"console-f9d7485db-sgqjs\" (UID: \"bbd9538f-43ff-4c20-80ab-dcf783b7a558\") " pod="openshift-console/console-f9d7485db-sgqjs" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.929703 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bbd9538f-43ff-4c20-80ab-dcf783b7a558-oauth-serving-cert\") pod \"console-f9d7485db-sgqjs\" (UID: \"bbd9538f-43ff-4c20-80ab-dcf783b7a558\") " pod="openshift-console/console-f9d7485db-sgqjs" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.931891 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-hk4xf\" (UID: \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-hk4xf" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.933289 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/8101c95a-1629-4b71-b12e-0fa374c9b09a-images\") pod \"machine-api-operator-5694c8668f-hp2jj\" (UID: 
\"8101c95a-1629-4b71-b12e-0fa374c9b09a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hp2jj" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.933642 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-hk4xf\" (UID: \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-hk4xf" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.934303 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8101c95a-1629-4b71-b12e-0fa374c9b09a-config\") pod \"machine-api-operator-5694c8668f-hp2jj\" (UID: \"8101c95a-1629-4b71-b12e-0fa374c9b09a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hp2jj" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.941045 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/90e1876d-da75-48e1-b63a-a084de277f84-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-b678t\" (UID: \"90e1876d-da75-48e1-b63a-a084de277f84\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b678t" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.944326 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-hk4xf\" (UID: \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-hk4xf" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.944977 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" 
(UniqueName: \"kubernetes.io/secret/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-hk4xf\" (UID: \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-hk4xf" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.945996 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bbd9538f-43ff-4c20-80ab-dcf783b7a558-trusted-ca-bundle\") pod \"console-f9d7485db-sgqjs\" (UID: \"bbd9538f-43ff-4c20-80ab-dcf783b7a558\") " pod="openshift-console/console-f9d7485db-sgqjs" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.946791 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/742a103a-06e2-4d52-8c04-54681052838d-trusted-ca\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.947415 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-hk4xf\" (UID: \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-hk4xf" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.948771 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-audit-policies\") pod \"oauth-openshift-558db77b4-hk4xf\" (UID: \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-hk4xf" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.949877 4786 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/742a103a-06e2-4d52-8c04-54681052838d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.950674 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bbd9538f-43ff-4c20-80ab-dcf783b7a558-console-config\") pod \"console-f9d7485db-sgqjs\" (UID: \"bbd9538f-43ff-4c20-80ab-dcf783b7a558\") " pod="openshift-console/console-f9d7485db-sgqjs" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.953206 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d905424-2cf5-4338-9d16-91487e12cad3-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-4zfn7\" (UID: \"0d905424-2cf5-4338-9d16-91487e12cad3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4zfn7" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.954922 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/89dc04b8-a46f-4df1-afde-f3d4d4b169ef-available-featuregates\") pod \"openshift-config-operator-7777fb866f-2zrxb\" (UID: \"89dc04b8-a46f-4df1-afde-f3d4d4b169ef\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2zrxb" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.955543 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-hk4xf\" (UID: \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-hk4xf" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.956705 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89dc04b8-a46f-4df1-afde-f3d4d4b169ef-serving-cert\") pod \"openshift-config-operator-7777fb866f-2zrxb\" (UID: \"89dc04b8-a46f-4df1-afde-f3d4d4b169ef\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2zrxb" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.958023 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/742a103a-06e2-4d52-8c04-54681052838d-registry-certificates\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.959861 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvbfn\" (UniqueName: \"kubernetes.io/projected/0d905424-2cf5-4338-9d16-91487e12cad3-kube-api-access-nvbfn\") pod \"openshift-controller-manager-operator-756b6f6bc6-4zfn7\" (UID: \"0d905424-2cf5-4338-9d16-91487e12cad3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4zfn7" Dec 09 08:46:13 crc kubenswrapper[4786]: I1209 08:46:13.997207 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dfa8f63d-9ec9-466a-bd6e-1bd8cfbc2030-config\") pod \"etcd-operator-b45778765-5hpj2\" (UID: \"dfa8f63d-9ec9-466a-bd6e-1bd8cfbc2030\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5hpj2" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.025484 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hln79\" (UniqueName: 
\"kubernetes.io/projected/ad0a258a-10ff-4783-87dc-0718a2b1e224-kube-api-access-hln79\") pod \"packageserver-d55dfcdfc-mqqhj\" (UID: \"ad0a258a-10ff-4783-87dc-0718a2b1e224\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mqqhj" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.025577 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qgz9\" (UniqueName: \"kubernetes.io/projected/3548b663-fb10-488a-bb92-02388996febd-kube-api-access-2qgz9\") pod \"cluster-samples-operator-665b6dd947-rkhzg\" (UID: \"3548b663-fb10-488a-bb92-02388996febd\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rkhzg" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.025612 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf80da97-b492-4457-a51a-3b4474436625-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8zhmp\" (UID: \"bf80da97-b492-4457-a51a-3b4474436625\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8zhmp" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.025640 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfdzn\" (UniqueName: \"kubernetes.io/projected/33c3f956-9e12-43ef-91b9-565bbec4ae50-kube-api-access-jfdzn\") pod \"machine-config-server-vkn5t\" (UID: \"33c3f956-9e12-43ef-91b9-565bbec4ae50\") " pod="openshift-machine-config-operator/machine-config-server-vkn5t" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.025667 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwmkh\" (UniqueName: \"kubernetes.io/projected/d4d6a434-97e8-469a-9e85-9bbc494e8918-kube-api-access-wwmkh\") pod \"service-ca-operator-777779d784-976g7\" (UID: \"d4d6a434-97e8-469a-9e85-9bbc494e8918\") " 
pod="openshift-service-ca-operator/service-ca-operator-777779d784-976g7" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.025690 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee356cb2-0c94-4402-be8c-e6895f39de08-service-ca-bundle\") pod \"router-default-5444994796-wp4v8\" (UID: \"ee356cb2-0c94-4402-be8c-e6895f39de08\") " pod="openshift-ingress/router-default-5444994796-wp4v8" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.025714 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/891665dd-9904-4246-86c8-fabead4c8606-csi-data-dir\") pod \"csi-hostpathplugin-hgb5b\" (UID: \"891665dd-9904-4246-86c8-fabead4c8606\") " pod="hostpath-provisioner/csi-hostpathplugin-hgb5b" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.025736 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzhc2\" (UniqueName: \"kubernetes.io/projected/a602c774-b5c1-4a4e-9aa8-2952c932346e-kube-api-access-lzhc2\") pod \"multus-admission-controller-857f4d67dd-z2gs4\" (UID: \"a602c774-b5c1-4a4e-9aa8-2952c932346e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-z2gs4" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.025774 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqx82\" (UniqueName: \"kubernetes.io/projected/23055283-72c9-4aa6-8336-a77de105a7f6-kube-api-access-dqx82\") pod \"kube-storage-version-migrator-operator-b67b599dd-r9n44\" (UID: \"23055283-72c9-4aa6-8336-a77de105a7f6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r9n44" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.025793 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/ad0a258a-10ff-4783-87dc-0718a2b1e224-webhook-cert\") pod \"packageserver-d55dfcdfc-mqqhj\" (UID: \"ad0a258a-10ff-4783-87dc-0718a2b1e224\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mqqhj" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.025812 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23055283-72c9-4aa6-8336-a77de105a7f6-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-r9n44\" (UID: \"23055283-72c9-4aa6-8336-a77de105a7f6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r9n44" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.025831 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kqsh\" (UniqueName: \"kubernetes.io/projected/f454d7d8-776e-4070-8f46-4d9b954fc5c1-kube-api-access-6kqsh\") pod \"dns-operator-744455d44c-mg4hh\" (UID: \"f454d7d8-776e-4070-8f46-4d9b954fc5c1\") " pod="openshift-dns-operator/dns-operator-744455d44c-mg4hh" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.025861 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.025886 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d83fbdd0-b9fa-4a1f-ae43-929109b5cb30-srv-cert\") pod \"catalog-operator-68c6474976-hprgt\" (UID: \"d83fbdd0-b9fa-4a1f-ae43-929109b5cb30\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hprgt" Dec 09 
08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.025911 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9dkm\" (UniqueName: \"kubernetes.io/projected/d83fbdd0-b9fa-4a1f-ae43-929109b5cb30-kube-api-access-s9dkm\") pod \"catalog-operator-68c6474976-hprgt\" (UID: \"d83fbdd0-b9fa-4a1f-ae43-929109b5cb30\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hprgt" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.025935 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fl862\" (UniqueName: \"kubernetes.io/projected/ce7b8eb5-3cc7-4de8-921e-5249b393ec93-kube-api-access-fl862\") pod \"dns-default-6xmz9\" (UID: \"ce7b8eb5-3cc7-4de8-921e-5249b393ec93\") " pod="openshift-dns/dns-default-6xmz9" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.025958 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ce7b8eb5-3cc7-4de8-921e-5249b393ec93-metrics-tls\") pod \"dns-default-6xmz9\" (UID: \"ce7b8eb5-3cc7-4de8-921e-5249b393ec93\") " pod="openshift-dns/dns-default-6xmz9" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.025981 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/42fb68ba-836b-45d9-b680-15b7757c08ec-srv-cert\") pod \"olm-operator-6b444d44fb-hjqvk\" (UID: \"42fb68ba-836b-45d9-b680-15b7757c08ec\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjqvk" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.025997 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hplp9\" (UniqueName: \"kubernetes.io/projected/4f053962-44b1-4d42-9452-aa5a80eb1e18-kube-api-access-hplp9\") pod \"package-server-manager-789f6589d5-6j88p\" (UID: \"4f053962-44b1-4d42-9452-aa5a80eb1e18\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6j88p" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.026023 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/42fb68ba-836b-45d9-b680-15b7757c08ec-profile-collector-cert\") pod \"olm-operator-6b444d44fb-hjqvk\" (UID: \"42fb68ba-836b-45d9-b680-15b7757c08ec\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjqvk" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.026048 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4f053962-44b1-4d42-9452-aa5a80eb1e18-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-6j88p\" (UID: \"4f053962-44b1-4d42-9452-aa5a80eb1e18\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6j88p" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.026066 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4d6a434-97e8-469a-9e85-9bbc494e8918-serving-cert\") pod \"service-ca-operator-777779d784-976g7\" (UID: \"d4d6a434-97e8-469a-9e85-9bbc494e8918\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-976g7" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.026082 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ee356cb2-0c94-4402-be8c-e6895f39de08-stats-auth\") pod \"router-default-5444994796-wp4v8\" (UID: \"ee356cb2-0c94-4402-be8c-e6895f39de08\") " pod="openshift-ingress/router-default-5444994796-wp4v8" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.026131 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q69gw\" (UniqueName: 
\"kubernetes.io/projected/b128968a-caa6-46be-be15-79971a310e5c-kube-api-access-q69gw\") pod \"marketplace-operator-79b997595-thfng\" (UID: \"b128968a-caa6-46be-be15-79971a310e5c\") " pod="openshift-marketplace/marketplace-operator-79b997595-thfng" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.026153 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/891665dd-9904-4246-86c8-fabead4c8606-mountpoint-dir\") pod \"csi-hostpathplugin-hgb5b\" (UID: \"891665dd-9904-4246-86c8-fabead4c8606\") " pod="hostpath-provisioner/csi-hostpathplugin-hgb5b" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.026175 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff911481-36aa-4c42-9139-8fdb2e1e255f-config\") pod \"kube-apiserver-operator-766d6c64bb-f277g\" (UID: \"ff911481-36aa-4c42-9139-8fdb2e1e255f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f277g" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.026191 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ad0a258a-10ff-4783-87dc-0718a2b1e224-tmpfs\") pod \"packageserver-d55dfcdfc-mqqhj\" (UID: \"ad0a258a-10ff-4783-87dc-0718a2b1e224\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mqqhj" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.026206 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23055283-72c9-4aa6-8336-a77de105a7f6-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-r9n44\" (UID: \"23055283-72c9-4aa6-8336-a77de105a7f6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r9n44" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.026226 4786 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mw7kr\" (UniqueName: \"kubernetes.io/projected/23576d5f-707f-48a1-8db4-cddfd1c0e754-kube-api-access-mw7kr\") pod \"machine-config-controller-84d6567774-2mdnq\" (UID: \"23576d5f-707f-48a1-8db4-cddfd1c0e754\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2mdnq" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.026247 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/924456b2-4aa3-4e7c-8d80-667783b96551-config-volume\") pod \"collect-profiles-29421165-4fkk9\" (UID: \"924456b2-4aa3-4e7c-8d80-667783b96551\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421165-4fkk9" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.026267 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6e47b7b3-37f6-4a49-8080-27304934e01d-images\") pod \"machine-config-operator-74547568cd-r5zvb\" (UID: \"6e47b7b3-37f6-4a49-8080-27304934e01d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r5zvb" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.026285 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff911481-36aa-4c42-9139-8fdb2e1e255f-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-f277g\" (UID: \"ff911481-36aa-4c42-9139-8fdb2e1e255f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f277g" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.026303 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/33c3f956-9e12-43ef-91b9-565bbec4ae50-node-bootstrap-token\") pod \"machine-config-server-vkn5t\" (UID: 
\"33c3f956-9e12-43ef-91b9-565bbec4ae50\") " pod="openshift-machine-config-operator/machine-config-server-vkn5t" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.026318 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/2d491160-93fe-409b-b488-40f6dc0d4ba3-signing-key\") pod \"service-ca-9c57cc56f-c9v4k\" (UID: \"2d491160-93fe-409b-b488-40f6dc0d4ba3\") " pod="openshift-service-ca/service-ca-9c57cc56f-c9v4k" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.026345 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/766e2ebd-e852-4eab-bbdc-3cfe8a5ceb88-cert\") pod \"ingress-canary-ml4fq\" (UID: \"766e2ebd-e852-4eab-bbdc-3cfe8a5ceb88\") " pod="openshift-ingress-canary/ingress-canary-ml4fq" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.026359 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ad0a258a-10ff-4783-87dc-0718a2b1e224-apiservice-cert\") pod \"packageserver-d55dfcdfc-mqqhj\" (UID: \"ad0a258a-10ff-4783-87dc-0718a2b1e224\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mqqhj" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.026388 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8g4rk\" (UniqueName: \"kubernetes.io/projected/924456b2-4aa3-4e7c-8d80-667783b96551-kube-api-access-8g4rk\") pod \"collect-profiles-29421165-4fkk9\" (UID: \"924456b2-4aa3-4e7c-8d80-667783b96551\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421165-4fkk9" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.028894 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92zfd\" (UniqueName: \"kubernetes.io/projected/667ac238-96a3-4f57-b308-d4d5693d40f2-kube-api-access-92zfd\") pod 
\"control-plane-machine-set-operator-78cbb6b69f-wnjtx\" (UID: \"667ac238-96a3-4f57-b308-d4d5693d40f2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wnjtx" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.029001 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f454d7d8-776e-4070-8f46-4d9b954fc5c1-metrics-tls\") pod \"dns-operator-744455d44c-mg4hh\" (UID: \"f454d7d8-776e-4070-8f46-4d9b954fc5c1\") " pod="openshift-dns-operator/dns-operator-744455d44c-mg4hh" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.029079 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bbd9538f-43ff-4c20-80ab-dcf783b7a558-console-oauth-config\") pod \"console-f9d7485db-sgqjs\" (UID: \"bbd9538f-43ff-4c20-80ab-dcf783b7a558\") " pod="openshift-console/console-f9d7485db-sgqjs" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.029080 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/667ac238-96a3-4f57-b308-d4d5693d40f2-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-wnjtx\" (UID: \"667ac238-96a3-4f57-b308-d4d5693d40f2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wnjtx" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.029185 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ee356cb2-0c94-4402-be8c-e6895f39de08-default-certificate\") pod \"router-default-5444994796-wp4v8\" (UID: \"ee356cb2-0c94-4402-be8c-e6895f39de08\") " pod="openshift-ingress/router-default-5444994796-wp4v8" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.029219 4786 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/073a1588-07ff-438c-9f39-468ee8606c52-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-d4nsz\" (UID: \"073a1588-07ff-438c-9f39-468ee8606c52\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d4nsz" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.029237 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccllw\" (UniqueName: \"kubernetes.io/projected/42fb68ba-836b-45d9-b680-15b7757c08ec-kube-api-access-ccllw\") pod \"olm-operator-6b444d44fb-hjqvk\" (UID: \"42fb68ba-836b-45d9-b680-15b7757c08ec\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjqvk" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.029254 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4d6a434-97e8-469a-9e85-9bbc494e8918-config\") pod \"service-ca-operator-777779d784-976g7\" (UID: \"d4d6a434-97e8-469a-9e85-9bbc494e8918\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-976g7" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.029281 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/891665dd-9904-4246-86c8-fabead4c8606-socket-dir\") pod \"csi-hostpathplugin-hgb5b\" (UID: \"891665dd-9904-4246-86c8-fabead4c8606\") " pod="hostpath-provisioner/csi-hostpathplugin-hgb5b" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.029314 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9btm\" (UniqueName: \"kubernetes.io/projected/8da3ea4c-9921-4dc1-b63f-474753db5eb0-kube-api-access-g9btm\") pod \"migrator-59844c95c7-h4jkc\" (UID: \"8da3ea4c-9921-4dc1-b63f-474753db5eb0\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-h4jkc" Dec 09 08:46:14 
crc kubenswrapper[4786]: I1209 08:46:14.029345 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/23576d5f-707f-48a1-8db4-cddfd1c0e754-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-2mdnq\" (UID: \"23576d5f-707f-48a1-8db4-cddfd1c0e754\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2mdnq" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.029378 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gzz7\" (UniqueName: \"kubernetes.io/projected/891665dd-9904-4246-86c8-fabead4c8606-kube-api-access-2gzz7\") pod \"csi-hostpathplugin-hgb5b\" (UID: \"891665dd-9904-4246-86c8-fabead4c8606\") " pod="hostpath-provisioner/csi-hostpathplugin-hgb5b" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.029405 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce7b8eb5-3cc7-4de8-921e-5249b393ec93-config-volume\") pod \"dns-default-6xmz9\" (UID: \"ce7b8eb5-3cc7-4de8-921e-5249b393ec93\") " pod="openshift-dns/dns-default-6xmz9" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.029457 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/3548b663-fb10-488a-bb92-02388996febd-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-rkhzg\" (UID: \"3548b663-fb10-488a-bb92-02388996febd\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rkhzg" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.029481 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ee356cb2-0c94-4402-be8c-e6895f39de08-metrics-certs\") pod \"router-default-5444994796-wp4v8\" (UID: 
\"ee356cb2-0c94-4402-be8c-e6895f39de08\") " pod="openshift-ingress/router-default-5444994796-wp4v8" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.029500 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bm449\" (UniqueName: \"kubernetes.io/projected/6e47b7b3-37f6-4a49-8080-27304934e01d-kube-api-access-bm449\") pod \"machine-config-operator-74547568cd-r5zvb\" (UID: \"6e47b7b3-37f6-4a49-8080-27304934e01d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r5zvb" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.029518 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75smg\" (UniqueName: \"kubernetes.io/projected/766e2ebd-e852-4eab-bbdc-3cfe8a5ceb88-kube-api-access-75smg\") pod \"ingress-canary-ml4fq\" (UID: \"766e2ebd-e852-4eab-bbdc-3cfe8a5ceb88\") " pod="openshift-ingress-canary/ingress-canary-ml4fq" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.029556 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6e47b7b3-37f6-4a49-8080-27304934e01d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-r5zvb\" (UID: \"6e47b7b3-37f6-4a49-8080-27304934e01d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r5zvb" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.029579 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvg66\" (UniqueName: \"kubernetes.io/projected/2d491160-93fe-409b-b488-40f6dc0d4ba3-kube-api-access-rvg66\") pod \"service-ca-9c57cc56f-c9v4k\" (UID: \"2d491160-93fe-409b-b488-40f6dc0d4ba3\") " pod="openshift-service-ca/service-ca-9c57cc56f-c9v4k" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.029594 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/2d491160-93fe-409b-b488-40f6dc0d4ba3-signing-cabundle\") pod \"service-ca-9c57cc56f-c9v4k\" (UID: \"2d491160-93fe-409b-b488-40f6dc0d4ba3\") " pod="openshift-service-ca/service-ca-9c57cc56f-c9v4k" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.029618 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/23576d5f-707f-48a1-8db4-cddfd1c0e754-proxy-tls\") pod \"machine-config-controller-84d6567774-2mdnq\" (UID: \"23576d5f-707f-48a1-8db4-cddfd1c0e754\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2mdnq" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.029634 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6e47b7b3-37f6-4a49-8080-27304934e01d-proxy-tls\") pod \"machine-config-operator-74547568cd-r5zvb\" (UID: \"6e47b7b3-37f6-4a49-8080-27304934e01d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r5zvb" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.029657 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf80da97-b492-4457-a51a-3b4474436625-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8zhmp\" (UID: \"bf80da97-b492-4457-a51a-3b4474436625\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8zhmp" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.029680 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b128968a-caa6-46be-be15-79971a310e5c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-thfng\" (UID: \"b128968a-caa6-46be-be15-79971a310e5c\") " pod="openshift-marketplace/marketplace-operator-79b997595-thfng" Dec 09 08:46:14 crc 
kubenswrapper[4786]: I1209 08:46:14.029710 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a602c774-b5c1-4a4e-9aa8-2952c932346e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-z2gs4\" (UID: \"a602c774-b5c1-4a4e-9aa8-2952c932346e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-z2gs4" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.029731 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8z2n\" (UniqueName: \"kubernetes.io/projected/ee356cb2-0c94-4402-be8c-e6895f39de08-kube-api-access-t8z2n\") pod \"router-default-5444994796-wp4v8\" (UID: \"ee356cb2-0c94-4402-be8c-e6895f39de08\") " pod="openshift-ingress/router-default-5444994796-wp4v8" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.029758 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/073a1588-07ff-438c-9f39-468ee8606c52-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-d4nsz\" (UID: \"073a1588-07ff-438c-9f39-468ee8606c52\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d4nsz" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.029777 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/33c3f956-9e12-43ef-91b9-565bbec4ae50-certs\") pod \"machine-config-server-vkn5t\" (UID: \"33c3f956-9e12-43ef-91b9-565bbec4ae50\") " pod="openshift-machine-config-operator/machine-config-server-vkn5t" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.029797 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf80da97-b492-4457-a51a-3b4474436625-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8zhmp\" (UID: 
\"bf80da97-b492-4457-a51a-3b4474436625\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8zhmp" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.029828 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/924456b2-4aa3-4e7c-8d80-667783b96551-secret-volume\") pod \"collect-profiles-29421165-4fkk9\" (UID: \"924456b2-4aa3-4e7c-8d80-667783b96551\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421165-4fkk9" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.029845 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/891665dd-9904-4246-86c8-fabead4c8606-plugins-dir\") pod \"csi-hostpathplugin-hgb5b\" (UID: \"891665dd-9904-4246-86c8-fabead4c8606\") " pod="hostpath-provisioner/csi-hostpathplugin-hgb5b" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.029861 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b128968a-caa6-46be-be15-79971a310e5c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-thfng\" (UID: \"b128968a-caa6-46be-be15-79971a310e5c\") " pod="openshift-marketplace/marketplace-operator-79b997595-thfng" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.029887 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/891665dd-9904-4246-86c8-fabead4c8606-registration-dir\") pod \"csi-hostpathplugin-hgb5b\" (UID: \"891665dd-9904-4246-86c8-fabead4c8606\") " pod="hostpath-provisioner/csi-hostpathplugin-hgb5b" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.029904 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/ff911481-36aa-4c42-9139-8fdb2e1e255f-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-f277g\" (UID: \"ff911481-36aa-4c42-9139-8fdb2e1e255f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f277g" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.029930 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d83fbdd0-b9fa-4a1f-ae43-929109b5cb30-profile-collector-cert\") pod \"catalog-operator-68c6474976-hprgt\" (UID: \"d83fbdd0-b9fa-4a1f-ae43-929109b5cb30\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hprgt" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.029969 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/073a1588-07ff-438c-9f39-468ee8606c52-config\") pod \"kube-controller-manager-operator-78b949d7b-d4nsz\" (UID: \"073a1588-07ff-438c-9f39-468ee8606c52\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d4nsz" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.030788 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/073a1588-07ff-438c-9f39-468ee8606c52-config\") pod \"kube-controller-manager-operator-78b949d7b-d4nsz\" (UID: \"073a1588-07ff-438c-9f39-468ee8606c52\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d4nsz" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.034178 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee356cb2-0c94-4402-be8c-e6895f39de08-service-ca-bundle\") pod \"router-default-5444994796-wp4v8\" (UID: \"ee356cb2-0c94-4402-be8c-e6895f39de08\") " pod="openshift-ingress/router-default-5444994796-wp4v8" Dec 09 08:46:14 crc 
kubenswrapper[4786]: I1209 08:46:14.034663 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d905424-2cf5-4338-9d16-91487e12cad3-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-4zfn7\" (UID: \"0d905424-2cf5-4338-9d16-91487e12cad3\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4zfn7" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.034867 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dfa8f63d-9ec9-466a-bd6e-1bd8cfbc2030-serving-cert\") pod \"etcd-operator-b45778765-5hpj2\" (UID: \"dfa8f63d-9ec9-466a-bd6e-1bd8cfbc2030\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5hpj2" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.035413 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/667ac238-96a3-4f57-b308-d4d5693d40f2-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-wnjtx\" (UID: \"667ac238-96a3-4f57-b308-d4d5693d40f2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wnjtx" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.035643 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d4d6a434-97e8-469a-9e85-9bbc494e8918-config\") pod \"service-ca-operator-777779d784-976g7\" (UID: \"d4d6a434-97e8-469a-9e85-9bbc494e8918\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-976g7" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.036109 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/ee356cb2-0c94-4402-be8c-e6895f39de08-default-certificate\") pod 
\"router-default-5444994796-wp4v8\" (UID: \"ee356cb2-0c94-4402-be8c-e6895f39de08\") " pod="openshift-ingress/router-default-5444994796-wp4v8" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.036497 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/891665dd-9904-4246-86c8-fabead4c8606-socket-dir\") pod \"csi-hostpathplugin-hgb5b\" (UID: \"891665dd-9904-4246-86c8-fabead4c8606\") " pod="hostpath-provisioner/csi-hostpathplugin-hgb5b" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.037755 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/073a1588-07ff-438c-9f39-468ee8606c52-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-d4nsz\" (UID: \"073a1588-07ff-438c-9f39-468ee8606c52\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d4nsz" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.038617 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6e47b7b3-37f6-4a49-8080-27304934e01d-proxy-tls\") pod \"machine-config-operator-74547568cd-r5zvb\" (UID: \"6e47b7b3-37f6-4a49-8080-27304934e01d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r5zvb" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.039267 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6e47b7b3-37f6-4a49-8080-27304934e01d-auth-proxy-config\") pod \"machine-config-operator-74547568cd-r5zvb\" (UID: \"6e47b7b3-37f6-4a49-8080-27304934e01d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r5zvb" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.041770 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/bf80da97-b492-4457-a51a-3b4474436625-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8zhmp\" (UID: \"bf80da97-b492-4457-a51a-3b4474436625\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8zhmp" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.045469 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/23576d5f-707f-48a1-8db4-cddfd1c0e754-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-2mdnq\" (UID: \"23576d5f-707f-48a1-8db4-cddfd1c0e754\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2mdnq" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.046274 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce7b8eb5-3cc7-4de8-921e-5249b393ec93-config-volume\") pod \"dns-default-6xmz9\" (UID: \"ce7b8eb5-3cc7-4de8-921e-5249b393ec93\") " pod="openshift-dns/dns-default-6xmz9" Dec 09 08:46:14 crc kubenswrapper[4786]: E1209 08:46:14.047751 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 08:46:14.547729859 +0000 UTC m=+140.431351085 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.048290 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-hk4xf\" (UID: \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-hk4xf" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.048310 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/891665dd-9904-4246-86c8-fabead4c8606-csi-data-dir\") pod \"csi-hostpathplugin-hgb5b\" (UID: \"891665dd-9904-4246-86c8-fabead4c8606\") " pod="hostpath-provisioner/csi-hostpathplugin-hgb5b" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.049647 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/891665dd-9904-4246-86c8-fabead4c8606-plugins-dir\") pod \"csi-hostpathplugin-hgb5b\" (UID: \"891665dd-9904-4246-86c8-fabead4c8606\") " pod="hostpath-provisioner/csi-hostpathplugin-hgb5b" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.109360 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4zfn7" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.110230 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/924456b2-4aa3-4e7c-8d80-667783b96551-secret-volume\") pod \"collect-profiles-29421165-4fkk9\" (UID: \"924456b2-4aa3-4e7c-8d80-667783b96551\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421165-4fkk9" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.110905 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/8101c95a-1629-4b71-b12e-0fa374c9b09a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hp2jj\" (UID: \"8101c95a-1629-4b71-b12e-0fa374c9b09a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hp2jj" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.111366 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf80da97-b492-4457-a51a-3b4474436625-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8zhmp\" (UID: \"bf80da97-b492-4457-a51a-3b4474436625\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8zhmp" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.111888 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/dfa8f63d-9ec9-466a-bd6e-1bd8cfbc2030-etcd-ca\") pod \"etcd-operator-b45778765-5hpj2\" (UID: \"dfa8f63d-9ec9-466a-bd6e-1bd8cfbc2030\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5hpj2" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.112826 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/742a103a-06e2-4d52-8c04-54681052838d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.113722 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/dfa8f63d-9ec9-466a-bd6e-1bd8cfbc2030-etcd-service-ca\") pod \"etcd-operator-b45778765-5hpj2\" (UID: \"dfa8f63d-9ec9-466a-bd6e-1bd8cfbc2030\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5hpj2" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.114442 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ee356cb2-0c94-4402-be8c-e6895f39de08-metrics-certs\") pod \"router-default-5444994796-wp4v8\" (UID: \"ee356cb2-0c94-4402-be8c-e6895f39de08\") " pod="openshift-ingress/router-default-5444994796-wp4v8" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.115318 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/4f053962-44b1-4d42-9452-aa5a80eb1e18-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-6j88p\" (UID: \"4f053962-44b1-4d42-9452-aa5a80eb1e18\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6j88p" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.115363 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23055283-72c9-4aa6-8336-a77de105a7f6-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-r9n44\" (UID: \"23055283-72c9-4aa6-8336-a77de105a7f6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r9n44" Dec 09 08:46:14 crc 
kubenswrapper[4786]: I1209 08:46:14.116279 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/d83fbdd0-b9fa-4a1f-ae43-929109b5cb30-srv-cert\") pod \"catalog-operator-68c6474976-hprgt\" (UID: \"d83fbdd0-b9fa-4a1f-ae43-929109b5cb30\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hprgt" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.116610 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/33c3f956-9e12-43ef-91b9-565bbec4ae50-certs\") pod \"machine-config-server-vkn5t\" (UID: \"33c3f956-9e12-43ef-91b9-565bbec4ae50\") " pod="openshift-machine-config-operator/machine-config-server-vkn5t" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.119144 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dfa8f63d-9ec9-466a-bd6e-1bd8cfbc2030-etcd-client\") pod \"etcd-operator-b45778765-5hpj2\" (UID: \"dfa8f63d-9ec9-466a-bd6e-1bd8cfbc2030\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5hpj2" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.120170 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ad0a258a-10ff-4783-87dc-0718a2b1e224-webhook-cert\") pod \"packageserver-d55dfcdfc-mqqhj\" (UID: \"ad0a258a-10ff-4783-87dc-0718a2b1e224\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mqqhj" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.040132 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/2d491160-93fe-409b-b488-40f6dc0d4ba3-signing-cabundle\") pod \"service-ca-9c57cc56f-c9v4k\" (UID: \"2d491160-93fe-409b-b488-40f6dc0d4ba3\") " pod="openshift-service-ca/service-ca-9c57cc56f-c9v4k" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 
08:46:14.121080 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/742a103a-06e2-4d52-8c04-54681052838d-registry-tls\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.121366 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-hk4xf\" (UID: \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-hk4xf" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.122030 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-hk4xf\" (UID: \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-hk4xf" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.123818 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/3548b663-fb10-488a-bb92-02388996febd-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-rkhzg\" (UID: \"3548b663-fb10-488a-bb92-02388996febd\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rkhzg" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.124494 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/23576d5f-707f-48a1-8db4-cddfd1c0e754-proxy-tls\") pod \"machine-config-controller-84d6567774-2mdnq\" (UID: 
\"23576d5f-707f-48a1-8db4-cddfd1c0e754\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2mdnq" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.124929 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/ce7b8eb5-3cc7-4de8-921e-5249b393ec93-metrics-tls\") pod \"dns-default-6xmz9\" (UID: \"ce7b8eb5-3cc7-4de8-921e-5249b393ec93\") " pod="openshift-dns/dns-default-6xmz9" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.125931 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/891665dd-9904-4246-86c8-fabead4c8606-registration-dir\") pod \"csi-hostpathplugin-hgb5b\" (UID: \"891665dd-9904-4246-86c8-fabead4c8606\") " pod="hostpath-provisioner/csi-hostpathplugin-hgb5b" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.126148 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/891665dd-9904-4246-86c8-fabead4c8606-mountpoint-dir\") pod \"csi-hostpathplugin-hgb5b\" (UID: \"891665dd-9904-4246-86c8-fabead4c8606\") " pod="hostpath-provisioner/csi-hostpathplugin-hgb5b" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.127266 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff911481-36aa-4c42-9139-8fdb2e1e255f-config\") pod \"kube-apiserver-operator-766d6c64bb-f277g\" (UID: \"ff911481-36aa-4c42-9139-8fdb2e1e255f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f277g" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.127563 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b128968a-caa6-46be-be15-79971a310e5c-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-thfng\" (UID: 
\"b128968a-caa6-46be-be15-79971a310e5c\") " pod="openshift-marketplace/marketplace-operator-79b997595-thfng" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.127872 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ad0a258a-10ff-4783-87dc-0718a2b1e224-tmpfs\") pod \"packageserver-d55dfcdfc-mqqhj\" (UID: \"ad0a258a-10ff-4783-87dc-0718a2b1e224\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mqqhj" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.128785 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23055283-72c9-4aa6-8336-a77de105a7f6-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-r9n44\" (UID: \"23055283-72c9-4aa6-8336-a77de105a7f6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r9n44" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.129627 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ff911481-36aa-4c42-9139-8fdb2e1e255f-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-f277g\" (UID: \"ff911481-36aa-4c42-9139-8fdb2e1e255f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f277g" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.133685 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f454d7d8-776e-4070-8f46-4d9b954fc5c1-metrics-tls\") pod \"dns-operator-744455d44c-mg4hh\" (UID: \"f454d7d8-776e-4070-8f46-4d9b954fc5c1\") " pod="openshift-dns-operator/dns-operator-744455d44c-mg4hh" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.139194 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bmjm\" (UniqueName: 
\"kubernetes.io/projected/904f5e29-a237-4ddc-b97a-ceb88f179f6b-kube-api-access-8bmjm\") pod \"ingress-operator-5b745b69d9-lspbg\" (UID: \"904f5e29-a237-4ddc-b97a-ceb88f179f6b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lspbg" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.140921 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/924456b2-4aa3-4e7c-8d80-667783b96551-config-volume\") pod \"collect-profiles-29421165-4fkk9\" (UID: \"924456b2-4aa3-4e7c-8d80-667783b96551\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421165-4fkk9" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.143561 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/904f5e29-a237-4ddc-b97a-ceb88f179f6b-metrics-tls\") pod \"ingress-operator-5b745b69d9-lspbg\" (UID: \"904f5e29-a237-4ddc-b97a-ceb88f179f6b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lspbg" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.144941 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.145530 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b128968a-caa6-46be-be15-79971a310e5c-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-thfng\" (UID: \"b128968a-caa6-46be-be15-79971a310e5c\") " pod="openshift-marketplace/marketplace-operator-79b997595-thfng" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.145966 4786 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/a602c774-b5c1-4a4e-9aa8-2952c932346e-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-z2gs4\" (UID: \"a602c774-b5c1-4a4e-9aa8-2952c932346e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-z2gs4" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.146166 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-hk4xf\" (UID: \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-hk4xf" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.146294 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/766e2ebd-e852-4eab-bbdc-3cfe8a5ceb88-cert\") pod \"ingress-canary-ml4fq\" (UID: \"766e2ebd-e852-4eab-bbdc-3cfe8a5ceb88\") " pod="openshift-ingress-canary/ingress-canary-ml4fq" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.146895 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrj7h\" (UniqueName: \"kubernetes.io/projected/90e1876d-da75-48e1-b63a-a084de277f84-kube-api-access-mrj7h\") pod \"cluster-image-registry-operator-dc59b4c8b-b678t\" (UID: \"90e1876d-da75-48e1-b63a-a084de277f84\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b678t" Dec 09 08:46:14 crc kubenswrapper[4786]: E1209 08:46:14.147303 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:14.647265323 +0000 UTC m=+140.530886549 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.147809 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:14 crc kubenswrapper[4786]: E1209 08:46:14.149030 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 08:46:14.649017468 +0000 UTC m=+140.532638704 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.158043 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/33c3f956-9e12-43ef-91b9-565bbec4ae50-node-bootstrap-token\") pod \"machine-config-server-vkn5t\" (UID: \"33c3f956-9e12-43ef-91b9-565bbec4ae50\") " pod="openshift-machine-config-operator/machine-config-server-vkn5t" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.159682 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lszt\" (UniqueName: \"kubernetes.io/projected/bbd9538f-43ff-4c20-80ab-dcf783b7a558-kube-api-access-7lszt\") pod \"console-f9d7485db-sgqjs\" (UID: \"bbd9538f-43ff-4c20-80ab-dcf783b7a558\") " pod="openshift-console/console-f9d7485db-sgqjs" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.159955 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/ee356cb2-0c94-4402-be8c-e6895f39de08-stats-auth\") pod \"router-default-5444994796-wp4v8\" (UID: \"ee356cb2-0c94-4402-be8c-e6895f39de08\") " pod="openshift-ingress/router-default-5444994796-wp4v8" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.160332 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/90e1876d-da75-48e1-b63a-a084de277f84-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-b678t\" (UID: \"90e1876d-da75-48e1-b63a-a084de277f84\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b678t" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.160357 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtw8z\" (UniqueName: \"kubernetes.io/projected/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-kube-api-access-rtw8z\") pod \"oauth-openshift-558db77b4-hk4xf\" (UID: \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-hk4xf" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.161670 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hghgp\" (UniqueName: \"kubernetes.io/projected/8101c95a-1629-4b71-b12e-0fa374c9b09a-kube-api-access-hghgp\") pod \"machine-api-operator-5694c8668f-hp2jj\" (UID: \"8101c95a-1629-4b71-b12e-0fa374c9b09a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hp2jj" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.162824 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/2d491160-93fe-409b-b488-40f6dc0d4ba3-signing-key\") pod \"service-ca-9c57cc56f-c9v4k\" (UID: \"2d491160-93fe-409b-b488-40f6dc0d4ba3\") " pod="openshift-service-ca/service-ca-9c57cc56f-c9v4k" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.163465 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/904f5e29-a237-4ddc-b97a-ceb88f179f6b-bound-sa-token\") pod \"ingress-operator-5b745b69d9-lspbg\" (UID: \"904f5e29-a237-4ddc-b97a-ceb88f179f6b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lspbg" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.163616 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/42fb68ba-836b-45d9-b680-15b7757c08ec-srv-cert\") pod \"olm-operator-6b444d44fb-hjqvk\" (UID: 
\"42fb68ba-836b-45d9-b680-15b7757c08ec\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjqvk" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.163806 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d4d6a434-97e8-469a-9e85-9bbc494e8918-serving-cert\") pod \"service-ca-operator-777779d784-976g7\" (UID: \"d4d6a434-97e8-469a-9e85-9bbc494e8918\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-976g7" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.164167 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/742a103a-06e2-4d52-8c04-54681052838d-bound-sa-token\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.173304 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/d83fbdd0-b9fa-4a1f-ae43-929109b5cb30-profile-collector-cert\") pod \"catalog-operator-68c6474976-hprgt\" (UID: \"d83fbdd0-b9fa-4a1f-ae43-929109b5cb30\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hprgt" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.176819 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-hk4xf\" (UID: \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\") " pod="openshift-authentication/oauth-openshift-558db77b4-hk4xf" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.184356 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dp8pd\" (UniqueName: 
\"kubernetes.io/projected/dfa8f63d-9ec9-466a-bd6e-1bd8cfbc2030-kube-api-access-dp8pd\") pod \"etcd-operator-b45778765-5hpj2\" (UID: \"dfa8f63d-9ec9-466a-bd6e-1bd8cfbc2030\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5hpj2" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.184984 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/42fb68ba-836b-45d9-b680-15b7757c08ec-profile-collector-cert\") pod \"olm-operator-6b444d44fb-hjqvk\" (UID: \"42fb68ba-836b-45d9-b680-15b7757c08ec\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjqvk" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.185798 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8775t\" (UniqueName: \"kubernetes.io/projected/742a103a-06e2-4d52-8c04-54681052838d-kube-api-access-8775t\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.200712 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnqkr\" (UniqueName: \"kubernetes.io/projected/89dc04b8-a46f-4df1-afde-f3d4d4b169ef-kube-api-access-qnqkr\") pod \"openshift-config-operator-7777fb866f-2zrxb\" (UID: \"89dc04b8-a46f-4df1-afde-f3d4d4b169ef\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-2zrxb" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.209776 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hln79\" (UniqueName: \"kubernetes.io/projected/ad0a258a-10ff-4783-87dc-0718a2b1e224-kube-api-access-hln79\") pod \"packageserver-d55dfcdfc-mqqhj\" (UID: \"ad0a258a-10ff-4783-87dc-0718a2b1e224\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mqqhj" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 
08:46:14.211645 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6e47b7b3-37f6-4a49-8080-27304934e01d-images\") pod \"machine-config-operator-74547568cd-r5zvb\" (UID: \"6e47b7b3-37f6-4a49-8080-27304934e01d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r5zvb" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.214885 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ad0a258a-10ff-4783-87dc-0718a2b1e224-apiservice-cert\") pod \"packageserver-d55dfcdfc-mqqhj\" (UID: \"ad0a258a-10ff-4783-87dc-0718a2b1e224\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mqqhj" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.217336 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qgz9\" (UniqueName: \"kubernetes.io/projected/3548b663-fb10-488a-bb92-02388996febd-kube-api-access-2qgz9\") pod \"cluster-samples-operator-665b6dd947-rkhzg\" (UID: \"3548b663-fb10-488a-bb92-02388996febd\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rkhzg" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.242118 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mqqhj" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.249769 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bf80da97-b492-4457-a51a-3b4474436625-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-8zhmp\" (UID: \"bf80da97-b492-4457-a51a-3b4474436625\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8zhmp" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.250708 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:14 crc kubenswrapper[4786]: E1209 08:46:14.250914 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:14.750881143 +0000 UTC m=+140.634502389 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.252140 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:14 crc kubenswrapper[4786]: E1209 08:46:14.252731 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 08:46:14.75271604 +0000 UTC m=+140.636337446 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.265350 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jfdzn\" (UniqueName: \"kubernetes.io/projected/33c3f956-9e12-43ef-91b9-565bbec4ae50-kube-api-access-jfdzn\") pod \"machine-config-server-vkn5t\" (UID: \"33c3f956-9e12-43ef-91b9-565bbec4ae50\") " pod="openshift-machine-config-operator/machine-config-server-vkn5t" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.277163 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwmkh\" (UniqueName: \"kubernetes.io/projected/d4d6a434-97e8-469a-9e85-9bbc494e8918-kube-api-access-wwmkh\") pod \"service-ca-operator-777779d784-976g7\" (UID: \"d4d6a434-97e8-469a-9e85-9bbc494e8918\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-976g7" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.305866 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jxwdv" event={"ID":"25aee36a-c18c-4faf-842e-9208e234600d","Type":"ContainerStarted","Data":"f1288e5878c48f4dd5cbfb0da8d2cb577e5cfd8380040b99376210d456924591"} Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.308615 4786 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-dklgh container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Dec 09 
08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.308658 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-dklgh" podUID="138bf803-28c5-4a55-a0e4-48b3b2069673" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.308808 4786 patch_prober.go:28] interesting pod/console-operator-58897d9998-bbqcc container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.308898 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-bbqcc" podUID="eee2fbc3-8db7-4a8d-befa-2c692b6b41be" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.309019 4786 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-crld4 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.309038 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-crld4" podUID="d45771b0-7251-4e48-83dc-49322a76677c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.314103 4786 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92zfd\" (UniqueName: \"kubernetes.io/projected/667ac238-96a3-4f57-b308-d4d5693d40f2-kube-api-access-92zfd\") pod \"control-plane-machine-set-operator-78cbb6b69f-wnjtx\" (UID: \"667ac238-96a3-4f57-b308-d4d5693d40f2\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wnjtx" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.319395 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2zrxb" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.325236 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8g4rk\" (UniqueName: \"kubernetes.io/projected/924456b2-4aa3-4e7c-8d80-667783b96551-kube-api-access-8g4rk\") pod \"collect-profiles-29421165-4fkk9\" (UID: \"924456b2-4aa3-4e7c-8d80-667783b96551\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421165-4fkk9" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.334017 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccllw\" (UniqueName: \"kubernetes.io/projected/42fb68ba-836b-45d9-b680-15b7757c08ec-kube-api-access-ccllw\") pod \"olm-operator-6b444d44fb-hjqvk\" (UID: \"42fb68ba-836b-45d9-b680-15b7757c08ec\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjqvk" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.334235 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-hp2jj" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.344689 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-sgqjs" Dec 09 08:46:14 crc kubenswrapper[4786]: E1209 08:46:14.354233 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:14.854195133 +0000 UTC m=+140.737816359 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.354330 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.355100 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:14 crc kubenswrapper[4786]: E1209 08:46:14.356397 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2025-12-09 08:46:14.856380381 +0000 UTC m=+140.740001767 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.364192 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm449\" (UniqueName: \"kubernetes.io/projected/6e47b7b3-37f6-4a49-8080-27304934e01d-kube-api-access-bm449\") pod \"machine-config-operator-74547568cd-r5zvb\" (UID: \"6e47b7b3-37f6-4a49-8080-27304934e01d\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r5zvb" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.396001 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75smg\" (UniqueName: \"kubernetes.io/projected/766e2ebd-e852-4eab-bbdc-3cfe8a5ceb88-kube-api-access-75smg\") pod \"ingress-canary-ml4fq\" (UID: \"766e2ebd-e852-4eab-bbdc-3cfe8a5ceb88\") " pod="openshift-ingress-canary/ingress-canary-ml4fq" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.404038 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b678t" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.405945 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvg66\" (UniqueName: \"kubernetes.io/projected/2d491160-93fe-409b-b488-40f6dc0d4ba3-kube-api-access-rvg66\") pod \"service-ca-9c57cc56f-c9v4k\" (UID: \"2d491160-93fe-409b-b488-40f6dc0d4ba3\") " pod="openshift-service-ca/service-ca-9c57cc56f-c9v4k" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.413182 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-5hpj2" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.421557 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9btm\" (UniqueName: \"kubernetes.io/projected/8da3ea4c-9921-4dc1-b63f-474753db5eb0-kube-api-access-g9btm\") pod \"migrator-59844c95c7-h4jkc\" (UID: \"8da3ea4c-9921-4dc1-b63f-474753db5eb0\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-h4jkc" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.435144 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gzz7\" (UniqueName: \"kubernetes.io/projected/891665dd-9904-4246-86c8-fabead4c8606-kube-api-access-2gzz7\") pod \"csi-hostpathplugin-hgb5b\" (UID: \"891665dd-9904-4246-86c8-fabead4c8606\") " pod="hostpath-provisioner/csi-hostpathplugin-hgb5b" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.436330 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-hk4xf" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.436650 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lspbg" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.444136 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rkhzg" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.459351 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8zhmp" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.459846 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.460530 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r5zvb" Dec 09 08:46:14 crc kubenswrapper[4786]: E1209 08:46:14.460704 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:14.960672421 +0000 UTC m=+140.844293647 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.502860 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wnjtx" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.510957 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjqvk" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.513794 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqx82\" (UniqueName: \"kubernetes.io/projected/23055283-72c9-4aa6-8336-a77de105a7f6-kube-api-access-dqx82\") pod \"kube-storage-version-migrator-operator-b67b599dd-r9n44\" (UID: \"23055283-72c9-4aa6-8336-a77de105a7f6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r9n44" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.518951 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl862\" (UniqueName: \"kubernetes.io/projected/ce7b8eb5-3cc7-4de8-921e-5249b393ec93-kube-api-access-fl862\") pod \"dns-default-6xmz9\" (UID: \"ce7b8eb5-3cc7-4de8-921e-5249b393ec93\") " pod="openshift-dns/dns-default-6xmz9" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.519013 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9dkm\" (UniqueName: 
\"kubernetes.io/projected/d83fbdd0-b9fa-4a1f-ae43-929109b5cb30-kube-api-access-s9dkm\") pod \"catalog-operator-68c6474976-hprgt\" (UID: \"d83fbdd0-b9fa-4a1f-ae43-929109b5cb30\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hprgt" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.535696 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzhc2\" (UniqueName: \"kubernetes.io/projected/a602c774-b5c1-4a4e-9aa8-2952c932346e-kube-api-access-lzhc2\") pod \"multus-admission-controller-857f4d67dd-z2gs4\" (UID: \"a602c774-b5c1-4a4e-9aa8-2952c932346e\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-z2gs4" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.546957 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-976g7" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.554744 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-h4jkc" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.554959 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kqsh\" (UniqueName: \"kubernetes.io/projected/f454d7d8-776e-4070-8f46-4d9b954fc5c1-kube-api-access-6kqsh\") pod \"dns-operator-744455d44c-mg4hh\" (UID: \"f454d7d8-776e-4070-8f46-4d9b954fc5c1\") " pod="openshift-dns-operator/dns-operator-744455d44c-mg4hh" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.562891 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:14 crc kubenswrapper[4786]: E1209 08:46:14.563353 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 08:46:15.063336062 +0000 UTC m=+140.946957288 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.569620 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8z2n\" (UniqueName: \"kubernetes.io/projected/ee356cb2-0c94-4402-be8c-e6895f39de08-kube-api-access-t8z2n\") pod \"router-default-5444994796-wp4v8\" (UID: \"ee356cb2-0c94-4402-be8c-e6895f39de08\") " pod="openshift-ingress/router-default-5444994796-wp4v8" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.582904 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421165-4fkk9" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.583637 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-vkn5t" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.583929 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hprgt" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.585925 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-wp4v8" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.586322 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/073a1588-07ff-438c-9f39-468ee8606c52-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-d4nsz\" (UID: \"073a1588-07ff-438c-9f39-468ee8606c52\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d4nsz" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.592864 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-c9v4k" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.601841 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-z2gs4" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.609054 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-ml4fq" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.613976 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q69gw\" (UniqueName: \"kubernetes.io/projected/b128968a-caa6-46be-be15-79971a310e5c-kube-api-access-q69gw\") pod \"marketplace-operator-79b997595-thfng\" (UID: \"b128968a-caa6-46be-be15-79971a310e5c\") " pod="openshift-marketplace/marketplace-operator-79b997595-thfng" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.630046 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-hgb5b" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.636736 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-6xmz9" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.664192 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:14 crc kubenswrapper[4786]: E1209 08:46:14.664514 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:15.164465895 +0000 UTC m=+141.048087121 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.664593 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:14 crc kubenswrapper[4786]: E1209 08:46:14.665126 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" 
failed. No retries permitted until 2025-12-09 08:46:15.165116835 +0000 UTC m=+141.048738061 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.680477 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mw7kr\" (UniqueName: \"kubernetes.io/projected/23576d5f-707f-48a1-8db4-cddfd1c0e754-kube-api-access-mw7kr\") pod \"machine-config-controller-84d6567774-2mdnq\" (UID: \"23576d5f-707f-48a1-8db4-cddfd1c0e754\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2mdnq" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.689053 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ff911481-36aa-4c42-9139-8fdb2e1e255f-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-f277g\" (UID: \"ff911481-36aa-4c42-9139-8fdb2e1e255f\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f277g" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.690253 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hplp9\" (UniqueName: \"kubernetes.io/projected/4f053962-44b1-4d42-9452-aa5a80eb1e18-kube-api-access-hplp9\") pod \"package-server-manager-789f6589d5-6j88p\" (UID: \"4f053962-44b1-4d42-9452-aa5a80eb1e18\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6j88p" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.755835 4786 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-mg4hh" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.781865 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r9n44" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.783222 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:14 crc kubenswrapper[4786]: E1209 08:46:14.784102 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:15.284076489 +0000 UTC m=+141.167697715 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.785355 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d4nsz" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.786096 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f277g" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.794400 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2mdnq" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.825311 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6j88p" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.838921 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-thfng" Dec 09 08:46:14 crc kubenswrapper[4786]: I1209 08:46:14.910539 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:14 crc kubenswrapper[4786]: E1209 08:46:14.911258 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 08:46:15.411221576 +0000 UTC m=+141.294842802 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:15 crc kubenswrapper[4786]: I1209 08:46:15.006989 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-6jxs9"] Dec 09 08:46:15 crc kubenswrapper[4786]: I1209 08:46:15.053819 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:15 crc kubenswrapper[4786]: E1209 08:46:15.054051 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:15.554024335 +0000 UTC m=+141.437645561 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:15 crc kubenswrapper[4786]: I1209 08:46:15.054168 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:15 crc kubenswrapper[4786]: E1209 08:46:15.054941 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 08:46:15.554931104 +0000 UTC m=+141.438552330 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:15 crc kubenswrapper[4786]: I1209 08:46:15.155555 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:15 crc kubenswrapper[4786]: E1209 08:46:15.156161 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:15.656118188 +0000 UTC m=+141.539739414 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:15 crc kubenswrapper[4786]: I1209 08:46:15.257947 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:15 crc kubenswrapper[4786]: E1209 08:46:15.258489 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 08:46:15.758464039 +0000 UTC m=+141.642085275 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:15 crc kubenswrapper[4786]: I1209 08:46:15.313848 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-bbqcc" podStartSLOduration=118.313820249 podStartE2EDuration="1m58.313820249s" podCreationTimestamp="2025-12-09 08:44:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:46:15.298293939 +0000 UTC m=+141.181915185" watchObservedRunningTime="2025-12-09 08:46:15.313820249 +0000 UTC m=+141.197441485" Dec 09 08:46:15 crc kubenswrapper[4786]: I1209 08:46:15.359207 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:15 crc kubenswrapper[4786]: E1209 08:46:15.359500 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:15.859467098 +0000 UTC m=+141.743088324 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:15 crc kubenswrapper[4786]: I1209 08:46:15.359655 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:15 crc kubenswrapper[4786]: E1209 08:46:15.360203 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 08:46:15.86018349 +0000 UTC m=+141.743804716 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:15 crc kubenswrapper[4786]: I1209 08:46:15.502581 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:15 crc kubenswrapper[4786]: E1209 08:46:15.504411 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:16.004386484 +0000 UTC m=+141.888007870 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:15 crc kubenswrapper[4786]: I1209 08:46:15.597227 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-dklgh" podStartSLOduration=118.59719991 podStartE2EDuration="1m58.59719991s" podCreationTimestamp="2025-12-09 08:44:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:46:15.595108715 +0000 UTC m=+141.478729941" watchObservedRunningTime="2025-12-09 08:46:15.59719991 +0000 UTC m=+141.480821146" Dec 09 08:46:15 crc kubenswrapper[4786]: I1209 08:46:15.606817 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:15 crc kubenswrapper[4786]: E1209 08:46:15.607343 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 08:46:16.107321252 +0000 UTC m=+141.990942478 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:15 crc kubenswrapper[4786]: I1209 08:46:15.778164 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:15 crc kubenswrapper[4786]: E1209 08:46:15.778626 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:16.278604672 +0000 UTC m=+142.162225898 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:15 crc kubenswrapper[4786]: W1209 08:46:15.825687 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4364d199_81a4_4500_990d_9f2bcdc66186.slice/crio-6d67d05441499414177110365f49106b8fdc01012c3d1ca345021a0381f8bf8f WatchSource:0}: Error finding container 6d67d05441499414177110365f49106b8fdc01012c3d1ca345021a0381f8bf8f: Status 404 returned error can't find the container with id 6d67d05441499414177110365f49106b8fdc01012c3d1ca345021a0381f8bf8f Dec 09 08:46:15 crc kubenswrapper[4786]: I1209 08:46:15.880505 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:15 crc kubenswrapper[4786]: E1209 08:46:15.881387 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 08:46:16.381362425 +0000 UTC m=+142.264983651 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:15 crc kubenswrapper[4786]: I1209 08:46:15.883086 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-crld4" podStartSLOduration=117.883063847 podStartE2EDuration="1m57.883063847s" podCreationTimestamp="2025-12-09 08:44:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:46:15.861746569 +0000 UTC m=+141.745367795" watchObservedRunningTime="2025-12-09 08:46:15.883063847 +0000 UTC m=+141.766685073" Dec 09 08:46:15 crc kubenswrapper[4786]: I1209 08:46:15.948766 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-6tprk" podStartSLOduration=119.948740226 podStartE2EDuration="1m59.948740226s" podCreationTimestamp="2025-12-09 08:44:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:46:15.944516965 +0000 UTC m=+141.828138191" watchObservedRunningTime="2025-12-09 08:46:15.948740226 +0000 UTC m=+141.832361462" Dec 09 08:46:16 crc kubenswrapper[4786]: I1209 08:46:16.000150 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:16 crc kubenswrapper[4786]: E1209 08:46:16.000938 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:16.500912027 +0000 UTC m=+142.384533273 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:16 crc kubenswrapper[4786]: I1209 08:46:16.107495 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:16 crc kubenswrapper[4786]: E1209 08:46:16.109153 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 08:46:16.609121089 +0000 UTC m=+142.492742495 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:16 crc kubenswrapper[4786]: I1209 08:46:16.209571 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:16 crc kubenswrapper[4786]: E1209 08:46:16.210209 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:16.71019363 +0000 UTC m=+142.593814856 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:16 crc kubenswrapper[4786]: I1209 08:46:16.311373 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:16 crc kubenswrapper[4786]: E1209 08:46:16.311765 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 08:46:16.811752286 +0000 UTC m=+142.695373512 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:16 crc kubenswrapper[4786]: I1209 08:46:16.313865 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qz5pt" podStartSLOduration=120.313853791 podStartE2EDuration="2m0.313853791s" podCreationTimestamp="2025-12-09 08:44:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:46:16.312762847 +0000 UTC m=+142.196384063" watchObservedRunningTime="2025-12-09 08:46:16.313853791 +0000 UTC m=+142.197475017" Dec 09 08:46:16 crc kubenswrapper[4786]: I1209 08:46:16.404277 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jxwdv" podStartSLOduration=118.404253762 podStartE2EDuration="1m58.404253762s" podCreationTimestamp="2025-12-09 08:44:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:46:16.379482588 +0000 UTC m=+142.263103814" watchObservedRunningTime="2025-12-09 08:46:16.404253762 +0000 UTC m=+142.287874988" Dec 09 08:46:16 crc kubenswrapper[4786]: I1209 08:46:16.412375 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:16 crc kubenswrapper[4786]: I1209 08:46:16.412923 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-wp4v8" event={"ID":"ee356cb2-0c94-4402-be8c-e6895f39de08","Type":"ContainerStarted","Data":"b9d33c8048525c2e3a528fb45517db9d3245e2c3c3fdd11b472033d48470b76e"} Dec 09 08:46:16 crc kubenswrapper[4786]: I1209 08:46:16.412975 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-wp4v8" event={"ID":"ee356cb2-0c94-4402-be8c-e6895f39de08","Type":"ContainerStarted","Data":"30caf2113c74052ab7625da7889a839046da0c74214f8ae079072c81f4f5e84d"} Dec 09 08:46:16 crc kubenswrapper[4786]: E1209 08:46:16.413484 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:16.913437356 +0000 UTC m=+142.797058592 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:16 crc kubenswrapper[4786]: I1209 08:46:16.433580 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-vkn5t" event={"ID":"33c3f956-9e12-43ef-91b9-565bbec4ae50","Type":"ContainerStarted","Data":"99e162aa3312fc7d5fda628c5e45f130794a974885d1bb1367f159ca38702613"} Dec 09 08:46:16 crc kubenswrapper[4786]: I1209 08:46:16.433635 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-vkn5t" event={"ID":"33c3f956-9e12-43ef-91b9-565bbec4ae50","Type":"ContainerStarted","Data":"a4fed6a2c180fee8382e23d69365a9116261bc0c623754ccd81116384a18eca2"} Dec 09 08:46:16 crc kubenswrapper[4786]: I1209 08:46:16.446798 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-6jxs9" event={"ID":"4364d199-81a4-4500-990d-9f2bcdc66186","Type":"ContainerStarted","Data":"18e9b914851d3e00756c2bd7e13599bf2f7c2db252fd21a0e7bb4fea6c61ce65"} Dec 09 08:46:16 crc kubenswrapper[4786]: I1209 08:46:16.446890 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-6jxs9" event={"ID":"4364d199-81a4-4500-990d-9f2bcdc66186","Type":"ContainerStarted","Data":"6d67d05441499414177110365f49106b8fdc01012c3d1ca345021a0381f8bf8f"} Dec 09 08:46:16 crc kubenswrapper[4786]: I1209 08:46:16.448503 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-6jxs9" Dec 09 08:46:16 crc kubenswrapper[4786]: I1209 
08:46:16.456011 4786 patch_prober.go:28] interesting pod/downloads-7954f5f757-6jxs9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Dec 09 08:46:16 crc kubenswrapper[4786]: I1209 08:46:16.456094 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6jxs9" podUID="4364d199-81a4-4500-990d-9f2bcdc66186" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" Dec 09 08:46:16 crc kubenswrapper[4786]: I1209 08:46:16.460577 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-wp4v8" podStartSLOduration=118.460556052 podStartE2EDuration="1m58.460556052s" podCreationTimestamp="2025-12-09 08:44:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:46:16.459794928 +0000 UTC m=+142.343416154" watchObservedRunningTime="2025-12-09 08:46:16.460556052 +0000 UTC m=+142.344177288" Dec 09 08:46:16 crc kubenswrapper[4786]: I1209 08:46:16.486168 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-6jxs9" podStartSLOduration=119.486143962 podStartE2EDuration="1m59.486143962s" podCreationTimestamp="2025-12-09 08:44:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:46:16.484839581 +0000 UTC m=+142.368460807" watchObservedRunningTime="2025-12-09 08:46:16.486143962 +0000 UTC m=+142.369765188" Dec 09 08:46:16 crc kubenswrapper[4786]: I1209 08:46:16.517931 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:16 crc kubenswrapper[4786]: E1209 08:46:16.518543 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 08:46:17.018524912 +0000 UTC m=+142.902146138 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:16 crc kubenswrapper[4786]: I1209 08:46:16.519040 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-vkn5t" podStartSLOduration=5.519010026 podStartE2EDuration="5.519010026s" podCreationTimestamp="2025-12-09 08:46:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:46:16.516945203 +0000 UTC m=+142.400566439" watchObservedRunningTime="2025-12-09 08:46:16.519010026 +0000 UTC m=+142.402631252" Dec 09 08:46:16 crc kubenswrapper[4786]: I1209 08:46:16.591194 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-wp4v8" Dec 09 08:46:16 crc kubenswrapper[4786]: I1209 08:46:16.624773 4786 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:16 crc kubenswrapper[4786]: E1209 08:46:16.626243 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:17.126222118 +0000 UTC m=+143.009843334 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:16 crc kubenswrapper[4786]: I1209 08:46:16.727977 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:16 crc kubenswrapper[4786]: E1209 08:46:16.728451 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 08:46:17.228413763 +0000 UTC m=+143.112034989 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:16 crc kubenswrapper[4786]: I1209 08:46:16.829032 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:16 crc kubenswrapper[4786]: E1209 08:46:16.829256 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:17.329214536 +0000 UTC m=+143.212835762 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:16 crc kubenswrapper[4786]: I1209 08:46:16.829504 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:16 crc kubenswrapper[4786]: E1209 08:46:16.829935 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 08:46:17.329927448 +0000 UTC m=+143.213548674 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:16 crc kubenswrapper[4786]: I1209 08:46:16.928736 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4zfn7"] Dec 09 08:46:16 crc kubenswrapper[4786]: I1209 08:46:16.931501 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:16 crc kubenswrapper[4786]: E1209 08:46:16.932156 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:17.432107273 +0000 UTC m=+143.315728509 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:16 crc kubenswrapper[4786]: I1209 08:46:16.939561 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mqqhj"] Dec 09 08:46:16 crc kubenswrapper[4786]: I1209 08:46:16.942590 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-qt47g"] Dec 09 08:46:16 crc kubenswrapper[4786]: I1209 08:46:16.966166 4786 patch_prober.go:28] interesting pod/router-default-5444994796-wp4v8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 08:46:16 crc kubenswrapper[4786]: [-]has-synced failed: reason withheld Dec 09 08:46:16 crc kubenswrapper[4786]: [+]process-running ok Dec 09 08:46:16 crc kubenswrapper[4786]: healthz check failed Dec 09 08:46:16 crc kubenswrapper[4786]: I1209 08:46:16.966226 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wp4v8" podUID="ee356cb2-0c94-4402-be8c-e6895f39de08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 08:46:17 crc kubenswrapper[4786]: I1209 08:46:17.039597 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jxwdv" Dec 09 08:46:17 crc kubenswrapper[4786]: I1209 08:46:17.039641 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:17 crc kubenswrapper[4786]: I1209 08:46:17.039681 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jxwdv" Dec 09 08:46:17 crc kubenswrapper[4786]: E1209 08:46:17.040312 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 08:46:17.540287205 +0000 UTC m=+143.423908431 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:17 crc kubenswrapper[4786]: I1209 08:46:17.079336 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-2zrxb"] Dec 09 08:46:17 crc kubenswrapper[4786]: I1209 08:46:17.140882 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:17 crc kubenswrapper[4786]: E1209 08:46:17.141107 4786 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:17.641039676 +0000 UTC m=+143.524660902 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:17 crc kubenswrapper[4786]: I1209 08:46:17.141293 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:17 crc kubenswrapper[4786]: E1209 08:46:17.142272 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 08:46:17.642262744 +0000 UTC m=+143.525883970 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:17 crc kubenswrapper[4786]: I1209 08:46:17.241502 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jxwdv" Dec 09 08:46:17 crc kubenswrapper[4786]: I1209 08:46:17.241982 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:17 crc kubenswrapper[4786]: E1209 08:46:17.242250 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:17.74220948 +0000 UTC m=+143.625830716 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:17 crc kubenswrapper[4786]: I1209 08:46:17.242383 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:17 crc kubenswrapper[4786]: E1209 08:46:17.242838 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 08:46:17.742826429 +0000 UTC m=+143.626447675 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:17 crc kubenswrapper[4786]: I1209 08:46:17.344224 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:17 crc kubenswrapper[4786]: E1209 08:46:17.344558 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:17.84453554 +0000 UTC m=+143.728156766 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:17 crc kubenswrapper[4786]: I1209 08:46:17.447217 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:17 crc kubenswrapper[4786]: E1209 08:46:17.447828 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 08:46:17.94780831 +0000 UTC m=+143.831429536 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:17 crc kubenswrapper[4786]: I1209 08:46:17.459021 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mqqhj" event={"ID":"ad0a258a-10ff-4783-87dc-0718a2b1e224","Type":"ContainerStarted","Data":"41ecb8414d3cb5e935b393a26250b89cb756f83700d72954cc68baef891f822f"} Dec 09 08:46:17 crc kubenswrapper[4786]: I1209 08:46:17.468838 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4zfn7" event={"ID":"0d905424-2cf5-4338-9d16-91487e12cad3","Type":"ContainerStarted","Data":"b168987663d514cdee7fbc1edfada522a0b16b89c511cfb15e8535d44b9a3cb9"} Dec 09 08:46:17 crc kubenswrapper[4786]: I1209 08:46:17.477041 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-qt47g" event={"ID":"f011e81c-b463-4190-9ae5-73703f390ae8","Type":"ContainerStarted","Data":"7efbd8bcf52684392218db70f16b4dd31af11b299dd88a7f79dd5828052049f6"} Dec 09 08:46:17 crc kubenswrapper[4786]: I1209 08:46:17.478554 4786 patch_prober.go:28] interesting pod/downloads-7954f5f757-6jxs9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Dec 09 08:46:17 crc kubenswrapper[4786]: I1209 08:46:17.478613 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6jxs9" 
podUID="4364d199-81a4-4500-990d-9f2bcdc66186" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" Dec 09 08:46:17 crc kubenswrapper[4786]: I1209 08:46:17.486098 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-jxwdv" Dec 09 08:46:17 crc kubenswrapper[4786]: I1209 08:46:17.548742 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:17 crc kubenswrapper[4786]: E1209 08:46:17.550059 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:18.050037497 +0000 UTC m=+143.933658733 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:17 crc kubenswrapper[4786]: I1209 08:46:17.651248 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:17 crc kubenswrapper[4786]: E1209 08:46:17.652654 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 08:46:18.152631845 +0000 UTC m=+144.036253071 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:17 crc kubenswrapper[4786]: I1209 08:46:17.762918 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:17 crc kubenswrapper[4786]: E1209 08:46:17.763154 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:18.263116907 +0000 UTC m=+144.146738143 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:17 crc kubenswrapper[4786]: I1209 08:46:17.763321 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:17 crc kubenswrapper[4786]: E1209 08:46:17.763669 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 08:46:18.263656453 +0000 UTC m=+144.147277679 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:17 crc kubenswrapper[4786]: I1209 08:46:17.867125 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:17 crc kubenswrapper[4786]: E1209 08:46:17.868067 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:18.368045717 +0000 UTC m=+144.251666943 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:17 crc kubenswrapper[4786]: I1209 08:46:17.925033 4786 patch_prober.go:28] interesting pod/router-default-5444994796-wp4v8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 08:46:17 crc kubenswrapper[4786]: [-]has-synced failed: reason withheld Dec 09 08:46:17 crc kubenswrapper[4786]: [+]process-running ok Dec 09 08:46:17 crc kubenswrapper[4786]: healthz check failed Dec 09 08:46:17 crc kubenswrapper[4786]: I1209 08:46:17.925118 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wp4v8" podUID="ee356cb2-0c94-4402-be8c-e6895f39de08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 08:46:17 crc kubenswrapper[4786]: I1209 08:46:17.970134 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:17 crc kubenswrapper[4786]: E1209 08:46:17.970739 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-09 08:46:18.470717557 +0000 UTC m=+144.354338823 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:18 crc kubenswrapper[4786]: I1209 08:46:18.071355 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:18 crc kubenswrapper[4786]: E1209 08:46:18.071837 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:18.571816369 +0000 UTC m=+144.455437595 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:18 crc kubenswrapper[4786]: I1209 08:46:18.142523 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-sgqjs"] Dec 09 08:46:18 crc kubenswrapper[4786]: I1209 08:46:18.173440 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:18 crc kubenswrapper[4786]: E1209 08:46:18.174009 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 08:46:18.673987995 +0000 UTC m=+144.557609221 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:18 crc kubenswrapper[4786]: I1209 08:46:18.240872 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-5hpj2"] Dec 09 08:46:18 crc kubenswrapper[4786]: I1209 08:46:18.255066 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-kzf64"] Dec 09 08:46:18 crc kubenswrapper[4786]: I1209 08:46:18.273864 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-hk4xf"] Dec 09 08:46:18 crc kubenswrapper[4786]: I1209 08:46:18.277298 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:18 crc kubenswrapper[4786]: E1209 08:46:18.277651 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:18.777618075 +0000 UTC m=+144.661239321 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:18 crc kubenswrapper[4786]: I1209 08:46:18.277839 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:18 crc kubenswrapper[4786]: E1209 08:46:18.279977 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 08:46:18.779959377 +0000 UTC m=+144.663580603 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:18 crc kubenswrapper[4786]: I1209 08:46:18.381371 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:18 crc kubenswrapper[4786]: E1209 08:46:18.382081 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:18.882062291 +0000 UTC m=+144.765683527 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:18 crc kubenswrapper[4786]: I1209 08:46:18.382326 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:18 crc kubenswrapper[4786]: E1209 08:46:18.382734 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 08:46:18.882725441 +0000 UTC m=+144.766346667 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:18 crc kubenswrapper[4786]: I1209 08:46:18.483461 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:18 crc kubenswrapper[4786]: E1209 08:46:18.483940 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:18.983916846 +0000 UTC m=+144.867538072 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:18 crc kubenswrapper[4786]: I1209 08:46:18.486465 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-sgqjs" event={"ID":"bbd9538f-43ff-4c20-80ab-dcf783b7a558","Type":"ContainerStarted","Data":"d14c4112ededad85a35113947d00e909e22e93f8df07a36216b374a2300b2fc4"} Dec 09 08:46:18 crc kubenswrapper[4786]: I1209 08:46:18.490644 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4zfn7" event={"ID":"0d905424-2cf5-4338-9d16-91487e12cad3","Type":"ContainerStarted","Data":"92218f7550ad3f8c2c26597da5dfc109b9410313307b521a14c13ad2e4f71526"} Dec 09 08:46:18 crc kubenswrapper[4786]: I1209 08:46:18.492961 4786 generic.go:334] "Generic (PLEG): container finished" podID="f011e81c-b463-4190-9ae5-73703f390ae8" containerID="802db9f888eeb9553a2eefea51aa836a4a98855d14b9a49ca53aca1657ae70b0" exitCode=0 Dec 09 08:46:18 crc kubenswrapper[4786]: I1209 08:46:18.493194 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-qt47g" event={"ID":"f011e81c-b463-4190-9ae5-73703f390ae8","Type":"ContainerDied","Data":"802db9f888eeb9553a2eefea51aa836a4a98855d14b9a49ca53aca1657ae70b0"} Dec 09 08:46:18 crc kubenswrapper[4786]: I1209 08:46:18.499438 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-5hpj2" 
event={"ID":"dfa8f63d-9ec9-466a-bd6e-1bd8cfbc2030","Type":"ContainerStarted","Data":"c2e18de307567e35cf1760287440232e9f127b358a7300703809c95899de7d7b"} Dec 09 08:46:18 crc kubenswrapper[4786]: I1209 08:46:18.500632 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-kzf64" event={"ID":"486f2347-35fe-4d75-ae30-5074d59b49a1","Type":"ContainerStarted","Data":"3d5c7699aed5805bbe0e398abe57bda97aa40961202491366af60fa1bcaf6d25"} Dec 09 08:46:18 crc kubenswrapper[4786]: I1209 08:46:18.501590 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-hk4xf" event={"ID":"b6302642-cdce-43da-b8fe-60bbcf3a4eaf","Type":"ContainerStarted","Data":"2b118a9b851ed957462cc4dc06158674b2f966586a2d7ba049b559a35f2baaa5"} Dec 09 08:46:18 crc kubenswrapper[4786]: I1209 08:46:18.515140 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2zrxb" event={"ID":"89dc04b8-a46f-4df1-afde-f3d4d4b169ef","Type":"ContainerStarted","Data":"b0fd4454ebcdb1dfe51f24ff3b1b9ced5407895336ae09dc3a2362d99e6b18c6"} Dec 09 08:46:18 crc kubenswrapper[4786]: I1209 08:46:18.520739 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-4zfn7" podStartSLOduration=121.520702612 podStartE2EDuration="2m1.520702612s" podCreationTimestamp="2025-12-09 08:44:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:46:18.518858205 +0000 UTC m=+144.402479441" watchObservedRunningTime="2025-12-09 08:46:18.520702612 +0000 UTC m=+144.404323838" Dec 09 08:46:18 crc kubenswrapper[4786]: I1209 08:46:18.539875 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mqqhj" 
event={"ID":"ad0a258a-10ff-4783-87dc-0718a2b1e224","Type":"ContainerStarted","Data":"a960bc4edccabaebadd8d3a90a31bf7b8ee3677f2ad8cc036da9edfcb892bc9e"} Dec 09 08:46:18 crc kubenswrapper[4786]: I1209 08:46:18.590912 4786 patch_prober.go:28] interesting pod/router-default-5444994796-wp4v8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 08:46:18 crc kubenswrapper[4786]: [-]has-synced failed: reason withheld Dec 09 08:46:18 crc kubenswrapper[4786]: [+]process-running ok Dec 09 08:46:18 crc kubenswrapper[4786]: healthz check failed Dec 09 08:46:18 crc kubenswrapper[4786]: I1209 08:46:18.591517 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wp4v8" podUID="ee356cb2-0c94-4402-be8c-e6895f39de08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 08:46:18 crc kubenswrapper[4786]: I1209 08:46:18.600967 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:18 crc kubenswrapper[4786]: E1209 08:46:18.607239 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 08:46:19.107223384 +0000 UTC m=+144.990844610 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:18 crc kubenswrapper[4786]: I1209 08:46:18.701471 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mqqhj" podStartSLOduration=120.701452634 podStartE2EDuration="2m0.701452634s" podCreationTimestamp="2025-12-09 08:44:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:46:18.628230223 +0000 UTC m=+144.511851449" watchObservedRunningTime="2025-12-09 08:46:18.701452634 +0000 UTC m=+144.585073860" Dec 09 08:46:18 crc kubenswrapper[4786]: I1209 08:46:18.702277 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:18 crc kubenswrapper[4786]: E1209 08:46:18.702660 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:19.202633681 +0000 UTC m=+145.086254907 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:18 crc kubenswrapper[4786]: I1209 08:46:18.702816 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:18 crc kubenswrapper[4786]: E1209 08:46:18.703178 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 08:46:19.203171167 +0000 UTC m=+145.086792393 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:18 crc kubenswrapper[4786]: I1209 08:46:18.733733 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d4nsz"] Dec 09 08:46:18 crc kubenswrapper[4786]: I1209 08:46:18.733927 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8zhmp"] Dec 09 08:46:18 crc kubenswrapper[4786]: I1209 08:46:18.750052 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rkhzg"] Dec 09 08:46:18 crc kubenswrapper[4786]: I1209 08:46:18.755610 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-c9v4k"] Dec 09 08:46:18 crc kubenswrapper[4786]: I1209 08:46:18.761203 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hp2jj"] Dec 09 08:46:18 crc kubenswrapper[4786]: I1209 08:46:18.761247 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-z2gs4"] Dec 09 08:46:18 crc kubenswrapper[4786]: I1209 08:46:18.804645 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:18 crc kubenswrapper[4786]: E1209 08:46:18.805076 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:19.305054773 +0000 UTC m=+145.188675989 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:18 crc kubenswrapper[4786]: I1209 08:46:18.810522 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-ml4fq"] Dec 09 08:46:18 crc kubenswrapper[4786]: I1209 08:46:18.814269 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mg4hh"] Dec 09 08:46:18 crc kubenswrapper[4786]: I1209 08:46:18.815657 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-6xmz9"] Dec 09 08:46:18 crc kubenswrapper[4786]: W1209 08:46:18.821194 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod073a1588_07ff_438c_9f39_468ee8606c52.slice/crio-4eea3b659b17ca1d64474233f94fe4d54b7a12af06b9c7d71ecfbbb3d9104923 WatchSource:0}: Error finding container 4eea3b659b17ca1d64474233f94fe4d54b7a12af06b9c7d71ecfbbb3d9104923: Status 404 returned error can't find the container with id 4eea3b659b17ca1d64474233f94fe4d54b7a12af06b9c7d71ecfbbb3d9104923 Dec 09 08:46:18 crc kubenswrapper[4786]: I1209 08:46:18.827853 
4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6j88p"] Dec 09 08:46:18 crc kubenswrapper[4786]: I1209 08:46:18.855980 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjqvk"] Dec 09 08:46:18 crc kubenswrapper[4786]: I1209 08:46:18.859096 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-r5zvb"] Dec 09 08:46:18 crc kubenswrapper[4786]: I1209 08:46:18.861622 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-976g7"] Dec 09 08:46:18 crc kubenswrapper[4786]: I1209 08:46:18.862926 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f277g"] Dec 09 08:46:18 crc kubenswrapper[4786]: I1209 08:46:18.868234 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b678t"] Dec 09 08:46:18 crc kubenswrapper[4786]: I1209 08:46:18.871397 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-hgb5b"] Dec 09 08:46:18 crc kubenswrapper[4786]: I1209 08:46:18.875835 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-thfng"] Dec 09 08:46:18 crc kubenswrapper[4786]: I1209 08:46:18.877646 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-h4jkc"] Dec 09 08:46:18 crc kubenswrapper[4786]: I1209 08:46:18.879660 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hprgt"] Dec 09 08:46:18 crc kubenswrapper[4786]: I1209 08:46:18.883151 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-2mdnq"] Dec 09 08:46:18 crc kubenswrapper[4786]: I1209 08:46:18.883204 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wnjtx"] Dec 09 08:46:18 crc kubenswrapper[4786]: I1209 08:46:18.884766 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-lspbg"] Dec 09 08:46:18 crc kubenswrapper[4786]: I1209 08:46:18.887813 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r9n44"] Dec 09 08:46:18 crc kubenswrapper[4786]: I1209 08:46:18.890200 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421165-4fkk9"] Dec 09 08:46:18 crc kubenswrapper[4786]: I1209 08:46:18.907210 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:18 crc kubenswrapper[4786]: E1209 08:46:18.908156 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 08:46:19.408138277 +0000 UTC m=+145.291759503 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:19 crc kubenswrapper[4786]: I1209 08:46:19.008594 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:19 crc kubenswrapper[4786]: E1209 08:46:19.008997 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:19.508980931 +0000 UTC m=+145.392602157 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:19 crc kubenswrapper[4786]: W1209 08:46:19.033672 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42fb68ba_836b_45d9_b680_15b7757c08ec.slice/crio-b07e02f6083f644a1d09890e3fd6e11a8f05fe8d45117873b03f2a8e911a2784 WatchSource:0}: Error finding container b07e02f6083f644a1d09890e3fd6e11a8f05fe8d45117873b03f2a8e911a2784: Status 404 returned error can't find the container with id b07e02f6083f644a1d09890e3fd6e11a8f05fe8d45117873b03f2a8e911a2784 Dec 09 08:46:19 crc kubenswrapper[4786]: W1209 08:46:19.088832 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod924456b2_4aa3_4e7c_8d80_667783b96551.slice/crio-6f526d42a498c46fb8a651ad406832cb93d134fccc4e2b5801e68a9435838842 WatchSource:0}: Error finding container 6f526d42a498c46fb8a651ad406832cb93d134fccc4e2b5801e68a9435838842: Status 404 returned error can't find the container with id 6f526d42a498c46fb8a651ad406832cb93d134fccc4e2b5801e68a9435838842 Dec 09 08:46:19 crc kubenswrapper[4786]: I1209 08:46:19.109949 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:19 crc 
kubenswrapper[4786]: E1209 08:46:19.110355 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 08:46:19.610340381 +0000 UTC m=+145.493961607 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:19 crc kubenswrapper[4786]: I1209 08:46:19.210905 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:19 crc kubenswrapper[4786]: E1209 08:46:19.211669 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:19.711643299 +0000 UTC m=+145.595264525 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:19 crc kubenswrapper[4786]: I1209 08:46:19.211808 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:19 crc kubenswrapper[4786]: E1209 08:46:19.216562 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 08:46:19.71650421 +0000 UTC m=+145.600125436 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:19 crc kubenswrapper[4786]: I1209 08:46:19.313405 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:19 crc kubenswrapper[4786]: E1209 08:46:19.313886 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:19.813860966 +0000 UTC m=+145.697482192 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:19 crc kubenswrapper[4786]: I1209 08:46:19.416240 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:19 crc kubenswrapper[4786]: E1209 08:46:19.417012 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 08:46:19.916984111 +0000 UTC m=+145.800605337 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:19 crc kubenswrapper[4786]: I1209 08:46:19.519989 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:19 crc kubenswrapper[4786]: E1209 08:46:19.520411 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:20.020386244 +0000 UTC m=+145.904007470 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:19 crc kubenswrapper[4786]: I1209 08:46:19.592591 4786 patch_prober.go:28] interesting pod/router-default-5444994796-wp4v8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 08:46:19 crc kubenswrapper[4786]: [-]has-synced failed: reason withheld Dec 09 08:46:19 crc kubenswrapper[4786]: [+]process-running ok Dec 09 08:46:19 crc kubenswrapper[4786]: healthz check failed Dec 09 08:46:19 crc kubenswrapper[4786]: I1209 08:46:19.592662 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wp4v8" podUID="ee356cb2-0c94-4402-be8c-e6895f39de08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 08:46:19 crc kubenswrapper[4786]: I1209 08:46:19.611351 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f277g" event={"ID":"ff911481-36aa-4c42-9139-8fdb2e1e255f","Type":"ContainerStarted","Data":"6a1e3d268334c6ef3c9d9ac307be90a3b62065743cfe88cae88077f6772bbaac"} Dec 09 08:46:19 crc kubenswrapper[4786]: I1209 08:46:19.622904 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: 
\"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:19 crc kubenswrapper[4786]: I1209 08:46:19.623621 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421165-4fkk9" event={"ID":"924456b2-4aa3-4e7c-8d80-667783b96551","Type":"ContainerStarted","Data":"6f526d42a498c46fb8a651ad406832cb93d134fccc4e2b5801e68a9435838842"} Dec 09 08:46:19 crc kubenswrapper[4786]: E1209 08:46:19.624481 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 08:46:20.124392116 +0000 UTC m=+146.008013352 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:19 crc kubenswrapper[4786]: I1209 08:46:19.634183 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b678t" event={"ID":"90e1876d-da75-48e1-b63a-a084de277f84","Type":"ContainerStarted","Data":"b45519152d1b59b2d2505ed150e677307792ac557e9126a601ed296ad7353c59"} Dec 09 08:46:19 crc kubenswrapper[4786]: I1209 08:46:19.654055 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-qt47g" event={"ID":"f011e81c-b463-4190-9ae5-73703f390ae8","Type":"ContainerStarted","Data":"fe8b462a7834c48cbd383590bef18df0315a43eb11350d5622c5e4a8b216ae0d"} Dec 09 08:46:19 crc kubenswrapper[4786]: I1209 
08:46:19.673976 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r9n44" event={"ID":"23055283-72c9-4aa6-8336-a77de105a7f6","Type":"ContainerStarted","Data":"90e96eb979f89c223ce9cc747a970029f47148fae7896ba519105e1176a6c0d8"} Dec 09 08:46:19 crc kubenswrapper[4786]: I1209 08:46:19.676763 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hp2jj" event={"ID":"8101c95a-1629-4b71-b12e-0fa374c9b09a","Type":"ContainerStarted","Data":"5218b73ac89e683b1178fabba3fd61e00fdf7304aeb6cfd78adf69b2ac3316c3"} Dec 09 08:46:19 crc kubenswrapper[4786]: I1209 08:46:19.680002 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-5hpj2" event={"ID":"dfa8f63d-9ec9-466a-bd6e-1bd8cfbc2030","Type":"ContainerStarted","Data":"c4c57729fa5482470053537f9a5403e53466642c19d9ef2aa49b52b869a8044c"} Dec 09 08:46:19 crc kubenswrapper[4786]: I1209 08:46:19.681845 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6j88p" event={"ID":"4f053962-44b1-4d42-9452-aa5a80eb1e18","Type":"ContainerStarted","Data":"086f75d0eca554897428253bb2b16f1f153fb3a6252019a2f2fa00a9531cad90"} Dec 09 08:46:19 crc kubenswrapper[4786]: I1209 08:46:19.681885 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6j88p" event={"ID":"4f053962-44b1-4d42-9452-aa5a80eb1e18","Type":"ContainerStarted","Data":"5f18210f0efbff4d46e15b57b255a7aee61dbd922a05d5bdfee4c39ceda488fd"} Dec 09 08:46:19 crc kubenswrapper[4786]: I1209 08:46:19.683580 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-976g7" 
event={"ID":"d4d6a434-97e8-469a-9e85-9bbc494e8918","Type":"ContainerStarted","Data":"469ef937ed5cd383722b11486293bc6938e1a497e126c3b6b689bffb0534f160"} Dec 09 08:46:19 crc kubenswrapper[4786]: I1209 08:46:19.686216 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2mdnq" event={"ID":"23576d5f-707f-48a1-8db4-cddfd1c0e754","Type":"ContainerStarted","Data":"ea80aa3b35a437a4242fccbbdfdcb64a8edb708e800f153a815457e2d36b19b5"} Dec 09 08:46:19 crc kubenswrapper[4786]: I1209 08:46:19.690865 4786 generic.go:334] "Generic (PLEG): container finished" podID="89dc04b8-a46f-4df1-afde-f3d4d4b169ef" containerID="76ff13cff1c718db7be2393b4ae911fa6cc61e253ecbf654a83c1a1bd3b3715b" exitCode=0 Dec 09 08:46:19 crc kubenswrapper[4786]: I1209 08:46:19.690951 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2zrxb" event={"ID":"89dc04b8-a46f-4df1-afde-f3d4d4b169ef","Type":"ContainerDied","Data":"76ff13cff1c718db7be2393b4ae911fa6cc61e253ecbf654a83c1a1bd3b3715b"} Dec 09 08:46:19 crc kubenswrapper[4786]: I1209 08:46:19.692569 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-ml4fq" event={"ID":"766e2ebd-e852-4eab-bbdc-3cfe8a5ceb88","Type":"ContainerStarted","Data":"3e85cc46232e820fefcf50443982452a422a6c9dbf54b7cc27f7a5ca43188a6b"} Dec 09 08:46:19 crc kubenswrapper[4786]: I1209 08:46:19.695175 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-6xmz9" event={"ID":"ce7b8eb5-3cc7-4de8-921e-5249b393ec93","Type":"ContainerStarted","Data":"e48bb9efceb9d7487bfacff233f08f1d08b9d247bb0ef84704de7f1c4fde38b0"} Dec 09 08:46:19 crc kubenswrapper[4786]: I1209 08:46:19.705046 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjqvk" 
event={"ID":"42fb68ba-836b-45d9-b680-15b7757c08ec","Type":"ContainerStarted","Data":"b07e02f6083f644a1d09890e3fd6e11a8f05fe8d45117873b03f2a8e911a2784"} Dec 09 08:46:19 crc kubenswrapper[4786]: I1209 08:46:19.706670 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-5hpj2" podStartSLOduration=122.706644556 podStartE2EDuration="2m2.706644556s" podCreationTimestamp="2025-12-09 08:44:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:46:19.705457889 +0000 UTC m=+145.589079135" watchObservedRunningTime="2025-12-09 08:46:19.706644556 +0000 UTC m=+145.590265782" Dec 09 08:46:19 crc kubenswrapper[4786]: I1209 08:46:19.720202 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8zhmp" event={"ID":"bf80da97-b492-4457-a51a-3b4474436625","Type":"ContainerStarted","Data":"7701b142cc3d52469859365befb76f61509d71ecf6ea3f96e556660d03ff31d4"} Dec 09 08:46:19 crc kubenswrapper[4786]: I1209 08:46:19.724142 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-thfng" event={"ID":"b128968a-caa6-46be-be15-79971a310e5c","Type":"ContainerStarted","Data":"171cb8d8b7827cbcac3ebae06ce7d5b91499de4ff6d7eeae91e33786722bf012"} Dec 09 08:46:19 crc kubenswrapper[4786]: I1209 08:46:19.726207 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:19 crc kubenswrapper[4786]: E1209 08:46:19.726532 4786 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:20.226470088 +0000 UTC m=+146.110091324 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:19 crc kubenswrapper[4786]: I1209 08:46:19.726786 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:19 crc kubenswrapper[4786]: E1209 08:46:19.727301 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 08:46:20.227280023 +0000 UTC m=+146.110901329 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:19 crc kubenswrapper[4786]: I1209 08:46:19.737341 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d4nsz" event={"ID":"073a1588-07ff-438c-9f39-468ee8606c52","Type":"ContainerStarted","Data":"4eea3b659b17ca1d64474233f94fe4d54b7a12af06b9c7d71ecfbbb3d9104923"} Dec 09 08:46:19 crc kubenswrapper[4786]: I1209 08:46:19.743229 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-c9v4k" event={"ID":"2d491160-93fe-409b-b488-40f6dc0d4ba3","Type":"ContainerStarted","Data":"266101fd8631fefdbd06cda66bb26aa42eef4b50505e35e4a1bc697618c467df"} Dec 09 08:46:19 crc kubenswrapper[4786]: I1209 08:46:19.743284 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-c9v4k" event={"ID":"2d491160-93fe-409b-b488-40f6dc0d4ba3","Type":"ContainerStarted","Data":"8b4fdb36cb671effde222c09588332661d956c41acbcfef20b2240b59e3b969b"} Dec 09 08:46:19 crc kubenswrapper[4786]: I1209 08:46:19.747727 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-h4jkc" event={"ID":"8da3ea4c-9921-4dc1-b63f-474753db5eb0","Type":"ContainerStarted","Data":"681ad3d2220e67432cd68b208121ef5b9df597e77f0ef044fbc79eb526061715"} Dec 09 08:46:19 crc kubenswrapper[4786]: I1209 08:46:19.753527 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wnjtx" event={"ID":"667ac238-96a3-4f57-b308-d4d5693d40f2","Type":"ContainerStarted","Data":"a949360f10ee92d671a718a50bc68d168957d0ecc55c0e6ec499f0b7aee5cffd"} Dec 09 08:46:19 crc kubenswrapper[4786]: I1209 08:46:19.761768 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-c9v4k" podStartSLOduration=121.761744537 podStartE2EDuration="2m1.761744537s" podCreationTimestamp="2025-12-09 08:44:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:46:19.757715333 +0000 UTC m=+145.641336569" watchObservedRunningTime="2025-12-09 08:46:19.761744537 +0000 UTC m=+145.645365763" Dec 09 08:46:19 crc kubenswrapper[4786]: I1209 08:46:19.771891 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rkhzg" event={"ID":"3548b663-fb10-488a-bb92-02388996febd","Type":"ContainerStarted","Data":"9271971a5208feca3c967fe1ce084aba477d42a6ff126f4c633352976402a060"} Dec 09 08:46:19 crc kubenswrapper[4786]: I1209 08:46:19.787789 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hprgt" event={"ID":"d83fbdd0-b9fa-4a1f-ae43-929109b5cb30","Type":"ContainerStarted","Data":"0dd80447e4431448b7ed5daae5dc98466ec36f537ad2de7bfa32d8d1a48e0f84"} Dec 09 08:46:19 crc kubenswrapper[4786]: I1209 08:46:19.828141 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:19 crc kubenswrapper[4786]: E1209 08:46:19.830491 4786 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:20.330457509 +0000 UTC m=+146.214078735 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:19 crc kubenswrapper[4786]: I1209 08:46:19.842614 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-sgqjs" event={"ID":"bbd9538f-43ff-4c20-80ab-dcf783b7a558","Type":"ContainerStarted","Data":"5a36f343cd769a9079c6fb621a90c0193d00871395604c68e734bebcc98a448b"} Dec 09 08:46:19 crc kubenswrapper[4786]: I1209 08:46:19.863235 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-sgqjs" podStartSLOduration=122.863201291 podStartE2EDuration="2m2.863201291s" podCreationTimestamp="2025-12-09 08:44:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:46:19.861491038 +0000 UTC m=+145.745112284" watchObservedRunningTime="2025-12-09 08:46:19.863201291 +0000 UTC m=+145.746822517" Dec 09 08:46:19 crc kubenswrapper[4786]: I1209 08:46:19.894601 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lspbg" 
event={"ID":"904f5e29-a237-4ddc-b97a-ceb88f179f6b","Type":"ContainerStarted","Data":"7b474bf97b5c5564c3024b3182c17f86ed8007473bdf9303c44f4300ac680a67"} Dec 09 08:46:19 crc kubenswrapper[4786]: I1209 08:46:19.900450 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hgb5b" event={"ID":"891665dd-9904-4246-86c8-fabead4c8606","Type":"ContainerStarted","Data":"38d0a9d46645663cb9ad480f69b326a7a7f7f0f439476c3d7add96cd00e75f09"} Dec 09 08:46:19 crc kubenswrapper[4786]: I1209 08:46:19.909988 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mg4hh" event={"ID":"f454d7d8-776e-4070-8f46-4d9b954fc5c1","Type":"ContainerStarted","Data":"4ac2fe3d1325cd6fff1b6e1a6b94617897e1bd07f6acc1a3942770389ceabdd6"} Dec 09 08:46:19 crc kubenswrapper[4786]: I1209 08:46:19.928008 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-kzf64" event={"ID":"486f2347-35fe-4d75-ae30-5074d59b49a1","Type":"ContainerStarted","Data":"8eb1c1e0d95276886555aee4b124ae9bafb4000c6449381eb4d2b78e5833d913"} Dec 09 08:46:19 crc kubenswrapper[4786]: I1209 08:46:19.932200 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:19 crc kubenswrapper[4786]: E1209 08:46:19.934090 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 08:46:20.434075659 +0000 UTC m=+146.317696885 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:19 crc kubenswrapper[4786]: I1209 08:46:19.935484 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-hk4xf" event={"ID":"b6302642-cdce-43da-b8fe-60bbcf3a4eaf","Type":"ContainerStarted","Data":"396c1a5ca9530279841f74c25041b29da131c496ee02e3fdf6e160a54eef8d65"} Dec 09 08:46:19 crc kubenswrapper[4786]: I1209 08:46:19.935859 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-hk4xf" Dec 09 08:46:19 crc kubenswrapper[4786]: I1209 08:46:19.938061 4786 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-hk4xf container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" start-of-body= Dec 09 08:46:19 crc kubenswrapper[4786]: I1209 08:46:19.938091 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-hk4xf" podUID="b6302642-cdce-43da-b8fe-60bbcf3a4eaf" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" Dec 09 08:46:19 crc kubenswrapper[4786]: I1209 08:46:19.941804 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r5zvb" 
event={"ID":"6e47b7b3-37f6-4a49-8080-27304934e01d","Type":"ContainerStarted","Data":"bc985e760947c63dea235b05be4832cd32ab53e6878c0ac316fd4806f65cfdf4"} Dec 09 08:46:19 crc kubenswrapper[4786]: I1209 08:46:19.947103 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-z2gs4" event={"ID":"a602c774-b5c1-4a4e-9aa8-2952c932346e","Type":"ContainerStarted","Data":"d7279fbeadf4886eb9ebcb3c4014257d59633ecc29f19e9bd6afeabe5518dfa7"} Dec 09 08:46:19 crc kubenswrapper[4786]: I1209 08:46:19.948066 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mqqhj" Dec 09 08:46:19 crc kubenswrapper[4786]: I1209 08:46:19.953753 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-kzf64" podStartSLOduration=122.953727696 podStartE2EDuration="2m2.953727696s" podCreationTimestamp="2025-12-09 08:44:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:46:19.946239735 +0000 UTC m=+145.829860951" watchObservedRunningTime="2025-12-09 08:46:19.953727696 +0000 UTC m=+145.837348922" Dec 09 08:46:19 crc kubenswrapper[4786]: I1209 08:46:19.980004 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-hk4xf" podStartSLOduration=122.979981007 podStartE2EDuration="2m2.979981007s" podCreationTimestamp="2025-12-09 08:44:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:46:19.977952885 +0000 UTC m=+145.861574111" watchObservedRunningTime="2025-12-09 08:46:19.979981007 +0000 UTC m=+145.863602233" Dec 09 08:46:20 crc kubenswrapper[4786]: I1209 08:46:20.023608 4786 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-mqqhj" Dec 09 08:46:20 crc kubenswrapper[4786]: I1209 08:46:20.040096 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:20 crc kubenswrapper[4786]: E1209 08:46:20.042205 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:20.542151957 +0000 UTC m=+146.425773193 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:20 crc kubenswrapper[4786]: I1209 08:46:20.141945 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:20 crc kubenswrapper[4786]: E1209 08:46:20.142350 4786 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 08:46:20.642337371 +0000 UTC m=+146.525958597 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:20 crc kubenswrapper[4786]: I1209 08:46:20.243116 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:20 crc kubenswrapper[4786]: E1209 08:46:20.243546 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:20.743530925 +0000 UTC m=+146.627152151 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:20 crc kubenswrapper[4786]: I1209 08:46:20.346457 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:20 crc kubenswrapper[4786]: E1209 08:46:20.347182 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 08:46:20.847167916 +0000 UTC m=+146.730789142 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:20 crc kubenswrapper[4786]: I1209 08:46:20.477390 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:20 crc kubenswrapper[4786]: E1209 08:46:20.477833 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:20.977817411 +0000 UTC m=+146.861438637 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:20 crc kubenswrapper[4786]: E1209 08:46:20.580675 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-09 08:46:21.080659517 +0000 UTC m=+146.964280733 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:20 crc kubenswrapper[4786]: I1209 08:46:20.580246 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:20 crc kubenswrapper[4786]: I1209 08:46:20.647390 4786 patch_prober.go:28] interesting pod/router-default-5444994796-wp4v8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 08:46:20 crc kubenswrapper[4786]: [-]has-synced failed: reason withheld Dec 09 08:46:20 crc kubenswrapper[4786]: [+]process-running ok Dec 09 08:46:20 crc kubenswrapper[4786]: healthz check failed Dec 09 08:46:20 crc kubenswrapper[4786]: I1209 08:46:20.647521 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wp4v8" podUID="ee356cb2-0c94-4402-be8c-e6895f39de08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 08:46:20 crc kubenswrapper[4786]: I1209 08:46:20.686877 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:20 crc kubenswrapper[4786]: E1209 08:46:20.687399 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:21.187378452 +0000 UTC m=+147.070999688 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:20 crc kubenswrapper[4786]: I1209 08:46:20.789101 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:20 crc kubenswrapper[4786]: E1209 08:46:20.789716 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 08:46:21.289698753 +0000 UTC m=+147.173319979 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:20 crc kubenswrapper[4786]: I1209 08:46:20.890377 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:20 crc kubenswrapper[4786]: E1209 08:46:20.890997 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:21.39097353 +0000 UTC m=+147.274594746 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:20 crc kubenswrapper[4786]: I1209 08:46:20.993316 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:20 crc kubenswrapper[4786]: E1209 08:46:20.995522 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 08:46:21.495500018 +0000 UTC m=+147.379121244 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.095261 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-6xmz9" event={"ID":"ce7b8eb5-3cc7-4de8-921e-5249b393ec93","Type":"ContainerStarted","Data":"104b7bb5fb101a0c5842befd86770f5285db205806ea64fb83b1c75e254aa3fe"} Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.101065 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:21 crc kubenswrapper[4786]: E1209 08:46:21.101467 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:21.60144772 +0000 UTC m=+147.485068946 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.127319 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-h4jkc" event={"ID":"8da3ea4c-9921-4dc1-b63f-474753db5eb0","Type":"ContainerStarted","Data":"040bf140d10f5503f0fcf2e488888e39475e89fd6c858c29a25d0d3c383169b5"} Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.160974 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lspbg" event={"ID":"904f5e29-a237-4ddc-b97a-ceb88f179f6b","Type":"ContainerStarted","Data":"88f38c7b71c30a6f7509e5e5ec54378eb2915e99d6f8a689de530e9add13aeca"} Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.202623 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:21 crc kubenswrapper[4786]: E1209 08:46:21.204574 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 08:46:21.704561114 +0000 UTC m=+147.588182340 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.216113 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2mdnq" event={"ID":"23576d5f-707f-48a1-8db4-cddfd1c0e754","Type":"ContainerStarted","Data":"8d291c2ef8c2b8a49544ca126ad58e7eca6b99745f08f51a4d4a3f147c2d63d0"} Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.216161 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b678t" event={"ID":"90e1876d-da75-48e1-b63a-a084de277f84","Type":"ContainerStarted","Data":"be6b8d85ea4c1462ff5873c529bb4db51e2a637b0a98bcaa968ce972f64aa3e9"} Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.264045 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-b678t" podStartSLOduration=124.26401627 podStartE2EDuration="2m4.26401627s" podCreationTimestamp="2025-12-09 08:44:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:46:21.255293761 +0000 UTC m=+147.138914987" watchObservedRunningTime="2025-12-09 08:46:21.26401627 +0000 UTC m=+147.147637496" Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.264289 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-h4jkc" 
podStartSLOduration=123.264284419 podStartE2EDuration="2m3.264284419s" podCreationTimestamp="2025-12-09 08:44:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:46:21.155261832 +0000 UTC m=+147.038883088" watchObservedRunningTime="2025-12-09 08:46:21.264284419 +0000 UTC m=+147.147905645" Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.267770 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r5zvb" event={"ID":"6e47b7b3-37f6-4a49-8080-27304934e01d","Type":"ContainerStarted","Data":"837fd8a20ace3910e3c52680c3bca327d1dc2788e793d1e8594bfe707236e9d1"} Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.267832 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r5zvb" event={"ID":"6e47b7b3-37f6-4a49-8080-27304934e01d","Type":"ContainerStarted","Data":"24f6a851dced863ef6e23440461d6ac1cc644070f664745febf0c81d1f937c76"} Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.305202 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:21 crc kubenswrapper[4786]: E1209 08:46:21.305374 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:21.805342697 +0000 UTC m=+147.688963933 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.305559 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:21 crc kubenswrapper[4786]: E1209 08:46:21.306968 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 08:46:21.806960506 +0000 UTC m=+147.690581732 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.311016 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-z2gs4" event={"ID":"a602c774-b5c1-4a4e-9aa8-2952c932346e","Type":"ContainerStarted","Data":"e8671005c1dc97a80898de401228e74acaf46ea4cfdbad10d4401b3d87dad397"} Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.314832 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r9n44" event={"ID":"23055283-72c9-4aa6-8336-a77de105a7f6","Type":"ContainerStarted","Data":"8cc87ea09c36cac3310a74240c87824afba0c7df86881a62a408f3090437064b"} Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.321671 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-r5zvb" podStartSLOduration=123.321649039 podStartE2EDuration="2m3.321649039s" podCreationTimestamp="2025-12-09 08:44:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:46:21.319415771 +0000 UTC m=+147.203037027" watchObservedRunningTime="2025-12-09 08:46:21.321649039 +0000 UTC m=+147.205270265" Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.341518 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-thfng" 
event={"ID":"b128968a-caa6-46be-be15-79971a310e5c","Type":"ContainerStarted","Data":"8b284375f145834322161af06139a89e6edbd55a682391331b64b5ffd97b6e9a"} Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.344217 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-thfng" Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.344285 4786 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-thfng container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.344319 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-thfng" podUID="b128968a-caa6-46be-be15-79971a310e5c" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.360564 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-z2gs4" podStartSLOduration=123.360547061 podStartE2EDuration="2m3.360547061s" podCreationTimestamp="2025-12-09 08:44:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:46:21.360029525 +0000 UTC m=+147.243650751" watchObservedRunningTime="2025-12-09 08:46:21.360547061 +0000 UTC m=+147.244168287" Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.386880 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d4nsz" 
event={"ID":"073a1588-07ff-438c-9f39-468ee8606c52","Type":"ContainerStarted","Data":"a4edf6648d4b7e564cca8aec74e490122d246d0d46fc5905923ebcb8940b357a"} Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.406822 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:21 crc kubenswrapper[4786]: E1209 08:46:21.409128 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:21.909105551 +0000 UTC m=+147.792726777 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.413745 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hp2jj" event={"ID":"8101c95a-1629-4b71-b12e-0fa374c9b09a","Type":"ContainerStarted","Data":"db8bb0e959b7fadbbb654216995e955524c62260001f48d27e5e4e09c2c0c754"} Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.413830 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hp2jj" 
event={"ID":"8101c95a-1629-4b71-b12e-0fa374c9b09a","Type":"ContainerStarted","Data":"cdbf751ed5a5290b0d9c17254fa683f4252442fb157560670563308e5c46488c"} Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.437317 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6j88p" event={"ID":"4f053962-44b1-4d42-9452-aa5a80eb1e18","Type":"ContainerStarted","Data":"13862a7aa9a41b06207c6fbffe69714057c0055bcbcc984456bb30ecaf0cccf2"} Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.438199 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6j88p" Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.454563 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjqvk" event={"ID":"42fb68ba-836b-45d9-b680-15b7757c08ec","Type":"ContainerStarted","Data":"eb574d579e060b45fd04a0d0843e036581804bb1545ba41df52268858b158160"} Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.456269 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjqvk" Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.460448 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wnjtx" event={"ID":"667ac238-96a3-4f57-b308-d4d5693d40f2","Type":"ContainerStarted","Data":"adc3a5bdb973343560464f739776355ef825a388841f8267e82af73d22f827f4"} Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.462993 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-r9n44" podStartSLOduration=123.462977864 podStartE2EDuration="2m3.462977864s" podCreationTimestamp="2025-12-09 08:44:18 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:46:21.400546767 +0000 UTC m=+147.284167993" watchObservedRunningTime="2025-12-09 08:46:21.462977864 +0000 UTC m=+147.346599090" Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.463169 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-d4nsz" podStartSLOduration=123.463165061 podStartE2EDuration="2m3.463165061s" podCreationTimestamp="2025-12-09 08:44:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:46:21.46121426 +0000 UTC m=+147.344835486" watchObservedRunningTime="2025-12-09 08:46:21.463165061 +0000 UTC m=+147.346786287" Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.477293 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rkhzg" event={"ID":"3548b663-fb10-488a-bb92-02388996febd","Type":"ContainerStarted","Data":"35043d25f5beb2bbbe2e46c5b9771ea965ea0a40b4c9396886419bc51884a272"} Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.479628 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hprgt" event={"ID":"d83fbdd0-b9fa-4a1f-ae43-929109b5cb30","Type":"ContainerStarted","Data":"0aa8af61a7b7858529c493358dd64e5c9b480e62b9171005f101959d6ff0d41d"} Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.480928 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hprgt" Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.482440 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8zhmp" event={"ID":"bf80da97-b492-4457-a51a-3b4474436625","Type":"ContainerStarted","Data":"5c671b6807349c0a67f8958cfa47283e149fe6ac2a2bd6fcd0fa66194a14c939"} Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.489882 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-thfng" podStartSLOduration=123.489852105 podStartE2EDuration="2m3.489852105s" podCreationTimestamp="2025-12-09 08:44:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:46:21.489211834 +0000 UTC m=+147.372833060" watchObservedRunningTime="2025-12-09 08:46:21.489852105 +0000 UTC m=+147.373473331" Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.491527 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-976g7" event={"ID":"d4d6a434-97e8-469a-9e85-9bbc494e8918","Type":"ContainerStarted","Data":"d13c59185b8c3f1dbb0b6f7867751090059359d2621607737e4281862f660950"} Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.494506 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2zrxb" event={"ID":"89dc04b8-a46f-4df1-afde-f3d4d4b169ef","Type":"ContainerStarted","Data":"d782367dbb1108e7756c45d10a32ce444cb01840d5e138606521890232a93fd3"} Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.494697 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2zrxb" Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.496216 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mg4hh" 
event={"ID":"f454d7d8-776e-4070-8f46-4d9b954fc5c1","Type":"ContainerStarted","Data":"d6caccefecc0c442abc80a48cffbaef72cac67d46323f8ebe03be6ecc074c5d0"} Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.500044 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-ml4fq" event={"ID":"766e2ebd-e852-4eab-bbdc-3cfe8a5ceb88","Type":"ContainerStarted","Data":"5f0b143e5cb1be84ea08089e4b8a07569057c683ded1f9f2d46ed84c0fdef96b"} Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.502187 4786 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-hjqvk container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" start-of-body= Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.502365 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjqvk" podUID="42fb68ba-836b-45d9-b680-15b7757c08ec" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.39:8443/healthz\": dial tcp 10.217.0.39:8443: connect: connection refused" Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.502987 4786 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-hprgt container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body= Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.503098 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hprgt" podUID="d83fbdd0-b9fa-4a1f-ae43-929109b5cb30" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" Dec 
09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.503650 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421165-4fkk9" event={"ID":"924456b2-4aa3-4e7c-8d80-667783b96551","Type":"ContainerStarted","Data":"5fb10c980b3f78c7c9d6d2b633d6b3796cc52a3943c5b0fd00aed9b7b4c0cec3"} Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.509564 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.511078 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wnjtx" podStartSLOduration=123.511030959 podStartE2EDuration="2m3.511030959s" podCreationTimestamp="2025-12-09 08:44:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:46:21.503915629 +0000 UTC m=+147.387536855" watchObservedRunningTime="2025-12-09 08:46:21.511030959 +0000 UTC m=+147.394652185" Dec 09 08:46:21 crc kubenswrapper[4786]: E1209 08:46:21.514644 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 08:46:22.014618729 +0000 UTC m=+147.898240025 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.536996 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjqvk" podStartSLOduration=123.536966669 podStartE2EDuration="2m3.536966669s" podCreationTimestamp="2025-12-09 08:44:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:46:21.533871294 +0000 UTC m=+147.417492520" watchObservedRunningTime="2025-12-09 08:46:21.536966669 +0000 UTC m=+147.420587905" Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.564639 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-8zhmp" podStartSLOduration=123.564619233 podStartE2EDuration="2m3.564619233s" podCreationTimestamp="2025-12-09 08:44:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:46:21.562925661 +0000 UTC m=+147.446546877" watchObservedRunningTime="2025-12-09 08:46:21.564619233 +0000 UTC m=+147.448240459" Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.588597 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6j88p" podStartSLOduration=123.588563233 podStartE2EDuration="2m3.588563233s" podCreationTimestamp="2025-12-09 
08:44:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:46:21.586296372 +0000 UTC m=+147.469917598" watchObservedRunningTime="2025-12-09 08:46:21.588563233 +0000 UTC m=+147.472184459" Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.599655 4786 patch_prober.go:28] interesting pod/router-default-5444994796-wp4v8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 08:46:21 crc kubenswrapper[4786]: [-]has-synced failed: reason withheld Dec 09 08:46:21 crc kubenswrapper[4786]: [+]process-running ok Dec 09 08:46:21 crc kubenswrapper[4786]: healthz check failed Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.599761 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wp4v8" podUID="ee356cb2-0c94-4402-be8c-e6895f39de08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.610822 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-hp2jj" podStartSLOduration=123.610792619 podStartE2EDuration="2m3.610792619s" podCreationTimestamp="2025-12-09 08:44:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:46:21.608045524 +0000 UTC m=+147.491666760" watchObservedRunningTime="2025-12-09 08:46:21.610792619 +0000 UTC m=+147.494413845" Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.620924 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:21 crc kubenswrapper[4786]: E1209 08:46:21.623783 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:22.123747059 +0000 UTC m=+148.007368285 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.624921 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:21 crc kubenswrapper[4786]: E1209 08:46:21.626208 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 08:46:22.126185645 +0000 UTC m=+148.009806871 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.641650 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hprgt" podStartSLOduration=123.641618341 podStartE2EDuration="2m3.641618341s" podCreationTimestamp="2025-12-09 08:44:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:46:21.639243908 +0000 UTC m=+147.522865174" watchObservedRunningTime="2025-12-09 08:46:21.641618341 +0000 UTC m=+147.525239597" Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.669653 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f277g" podStartSLOduration=123.669629796 podStartE2EDuration="2m3.669629796s" podCreationTimestamp="2025-12-09 08:44:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:46:21.66717585 +0000 UTC m=+147.550797106" watchObservedRunningTime="2025-12-09 08:46:21.669629796 +0000 UTC m=+147.553251022" Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.748407 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:21 crc kubenswrapper[4786]: E1209 08:46:21.751609 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:22.249882824 +0000 UTC m=+148.133504060 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.753263 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-976g7" podStartSLOduration=123.753236978 podStartE2EDuration="2m3.753236978s" podCreationTimestamp="2025-12-09 08:44:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:46:21.712245602 +0000 UTC m=+147.595866858" watchObservedRunningTime="2025-12-09 08:46:21.753236978 +0000 UTC m=+147.636858214" Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.754308 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2zrxb" podStartSLOduration=124.75429989 podStartE2EDuration="2m4.75429989s" podCreationTimestamp="2025-12-09 08:44:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-09 08:46:21.747603764 +0000 UTC m=+147.631224990" watchObservedRunningTime="2025-12-09 08:46:21.75429989 +0000 UTC m=+147.637921116" Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.784896 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-ml4fq" podStartSLOduration=10.784869185 podStartE2EDuration="10.784869185s" podCreationTimestamp="2025-12-09 08:46:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:46:21.78084052 +0000 UTC m=+147.664461756" watchObservedRunningTime="2025-12-09 08:46:21.784869185 +0000 UTC m=+147.668490411" Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.861742 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:21 crc kubenswrapper[4786]: E1209 08:46:21.862409 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 08:46:22.362380649 +0000 UTC m=+148.246001875 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.863802 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29421165-4fkk9" podStartSLOduration=81.863781782 podStartE2EDuration="1m21.863781782s" podCreationTimestamp="2025-12-09 08:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:46:21.862522043 +0000 UTC m=+147.746143269" watchObservedRunningTime="2025-12-09 08:46:21.863781782 +0000 UTC m=+147.747403008" Dec 09 08:46:21 crc kubenswrapper[4786]: I1209 08:46:21.962875 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:21 crc kubenswrapper[4786]: E1209 08:46:21.963467 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:22.463389688 +0000 UTC m=+148.347010914 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:22 crc kubenswrapper[4786]: I1209 08:46:22.065674 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:22 crc kubenswrapper[4786]: E1209 08:46:22.066157 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 08:46:22.566142591 +0000 UTC m=+148.449763817 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:22 crc kubenswrapper[4786]: I1209 08:46:22.253138 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:22 crc kubenswrapper[4786]: E1209 08:46:22.256918 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:22.756898621 +0000 UTC m=+148.640519847 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:22 crc kubenswrapper[4786]: I1209 08:46:22.264403 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-dklgh" Dec 09 08:46:22 crc kubenswrapper[4786]: I1209 08:46:22.273396 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-crld4" Dec 09 08:46:22 crc kubenswrapper[4786]: I1209 08:46:22.274492 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-bbqcc" Dec 09 08:46:22 crc kubenswrapper[4786]: I1209 08:46:22.315569 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-hk4xf" Dec 09 08:46:22 crc kubenswrapper[4786]: I1209 08:46:22.385103 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:22 crc kubenswrapper[4786]: E1209 08:46:22.386877 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-09 08:46:22.886859075 +0000 UTC m=+148.770480301 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:22 crc kubenswrapper[4786]: I1209 08:46:22.486085 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:22 crc kubenswrapper[4786]: E1209 08:46:22.486656 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:22.986638187 +0000 UTC m=+148.870259413 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:22 crc kubenswrapper[4786]: I1209 08:46:22.526544 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-6xmz9" event={"ID":"ce7b8eb5-3cc7-4de8-921e-5249b393ec93","Type":"ContainerStarted","Data":"9713213f880e9541db64fde7df25ecf39903aea537ffb8321b654d45b4c0cef1"} Dec 09 08:46:22 crc kubenswrapper[4786]: I1209 08:46:22.527652 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-6xmz9" Dec 09 08:46:22 crc kubenswrapper[4786]: I1209 08:46:22.530103 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rkhzg" event={"ID":"3548b663-fb10-488a-bb92-02388996febd","Type":"ContainerStarted","Data":"b9ba8261d5a3a2bb8804fd0407d0430698a0afb5eb97b9f34eb3474847286815"} Dec 09 08:46:22 crc kubenswrapper[4786]: I1209 08:46:22.532872 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2mdnq" event={"ID":"23576d5f-707f-48a1-8db4-cddfd1c0e754","Type":"ContainerStarted","Data":"ddeafb8ab3242ef7a01279b052268297f058fbda2691e2c01e73a9ac08e72cb7"} Dec 09 08:46:22 crc kubenswrapper[4786]: I1209 08:46:22.534938 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mg4hh" event={"ID":"f454d7d8-776e-4070-8f46-4d9b954fc5c1","Type":"ContainerStarted","Data":"686ebedd9d17c4bb815e918e6f777ccf4fe224df0cc2c2d3cd3472de1659f09c"} Dec 09 08:46:22 crc 
kubenswrapper[4786]: I1209 08:46:22.541712 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-f277g" event={"ID":"ff911481-36aa-4c42-9139-8fdb2e1e255f","Type":"ContainerStarted","Data":"3504bcb50d442e86704d5b19b40e688fed3a0f37f307469f4c53e9be2e1a2a04"} Dec 09 08:46:22 crc kubenswrapper[4786]: I1209 08:46:22.555034 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-h4jkc" event={"ID":"8da3ea4c-9921-4dc1-b63f-474753db5eb0","Type":"ContainerStarted","Data":"394f33074d489a5fc7db9df9b4b3ac1f433fa94a16e600ad55c671644863c6af"} Dec 09 08:46:22 crc kubenswrapper[4786]: I1209 08:46:22.579054 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-qt47g" event={"ID":"f011e81c-b463-4190-9ae5-73703f390ae8","Type":"ContainerStarted","Data":"29c571e5b892be55f798fcb8f5da0ce14c56e922119478cdd897c70bab5e5d69"} Dec 09 08:46:22 crc kubenswrapper[4786]: I1209 08:46:22.589462 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:22 crc kubenswrapper[4786]: E1209 08:46:22.591161 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 08:46:23.091146524 +0000 UTC m=+148.974767750 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:22 crc kubenswrapper[4786]: I1209 08:46:22.596242 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-z2gs4" event={"ID":"a602c774-b5c1-4a4e-9aa8-2952c932346e","Type":"ContainerStarted","Data":"bef804b15821dd18e0ff291aa78cf2157c7429afdca6fa6089eb5c034e3c2925"} Dec 09 08:46:22 crc kubenswrapper[4786]: I1209 08:46:22.596892 4786 patch_prober.go:28] interesting pod/router-default-5444994796-wp4v8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 08:46:22 crc kubenswrapper[4786]: [-]has-synced failed: reason withheld Dec 09 08:46:22 crc kubenswrapper[4786]: [+]process-running ok Dec 09 08:46:22 crc kubenswrapper[4786]: healthz check failed Dec 09 08:46:22 crc kubenswrapper[4786]: I1209 08:46:22.603156 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wp4v8" podUID="ee356cb2-0c94-4402-be8c-e6895f39de08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 08:46:22 crc kubenswrapper[4786]: I1209 08:46:22.636219 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lspbg" event={"ID":"904f5e29-a237-4ddc-b97a-ceb88f179f6b","Type":"ContainerStarted","Data":"9c9eee495463859e3d744ec99b8156b68a891f254fd949929afed485e6ecb4dc"} Dec 09 08:46:22 crc 
kubenswrapper[4786]: I1209 08:46:22.655481 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hgb5b" event={"ID":"891665dd-9904-4246-86c8-fabead4c8606","Type":"ContainerStarted","Data":"dfa13a254ae890950132a5897d7b16f789c053a7260aedc7a2441665da61c215"} Dec 09 08:46:22 crc kubenswrapper[4786]: I1209 08:46:22.663607 4786 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-thfng container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Dec 09 08:46:22 crc kubenswrapper[4786]: I1209 08:46:22.663684 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-thfng" podUID="b128968a-caa6-46be-be15-79971a310e5c" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" Dec 09 08:46:22 crc kubenswrapper[4786]: I1209 08:46:22.685906 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-2mdnq" podStartSLOduration=124.68587455 podStartE2EDuration="2m4.68587455s" podCreationTimestamp="2025-12-09 08:44:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:46:22.6016856 +0000 UTC m=+148.485306816" watchObservedRunningTime="2025-12-09 08:46:22.68587455 +0000 UTC m=+148.569495776" Dec 09 08:46:22 crc kubenswrapper[4786]: I1209 08:46:22.686583 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-6xmz9" podStartSLOduration=11.686577411 podStartE2EDuration="11.686577411s" podCreationTimestamp="2025-12-09 08:46:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:46:22.674149737 +0000 UTC m=+148.557770963" watchObservedRunningTime="2025-12-09 08:46:22.686577411 +0000 UTC m=+148.570198637" Dec 09 08:46:22 crc kubenswrapper[4786]: I1209 08:46:22.690710 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:22 crc kubenswrapper[4786]: E1209 08:46:22.692838 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:23.192820564 +0000 UTC m=+149.076441790 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:22 crc kubenswrapper[4786]: I1209 08:46:22.694675 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hjqvk" Dec 09 08:46:22 crc kubenswrapper[4786]: I1209 08:46:22.744993 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-mg4hh" podStartSLOduration=125.744952444 podStartE2EDuration="2m5.744952444s" podCreationTimestamp="2025-12-09 08:44:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:46:22.732812409 +0000 UTC m=+148.616433655" watchObservedRunningTime="2025-12-09 08:46:22.744952444 +0000 UTC m=+148.628573670" Dec 09 08:46:22 crc kubenswrapper[4786]: I1209 08:46:22.778803 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-rkhzg" podStartSLOduration=125.778783848 podStartE2EDuration="2m5.778783848s" podCreationTimestamp="2025-12-09 08:44:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:46:22.777075906 +0000 UTC m=+148.660697132" watchObservedRunningTime="2025-12-09 08:46:22.778783848 +0000 UTC m=+148.662405074" Dec 09 08:46:22 crc kubenswrapper[4786]: I1209 08:46:22.795547 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:22 crc kubenswrapper[4786]: E1209 08:46:22.796002 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 08:46:23.295985019 +0000 UTC m=+149.179606245 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:22 crc kubenswrapper[4786]: I1209 08:46:22.809789 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-lspbg" podStartSLOduration=125.809760295 podStartE2EDuration="2m5.809760295s" podCreationTimestamp="2025-12-09 08:44:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:46:22.796818085 +0000 UTC m=+148.680439311" watchObservedRunningTime="2025-12-09 08:46:22.809760295 +0000 UTC m=+148.693381511" Dec 09 08:46:22 crc kubenswrapper[4786]: I1209 08:46:22.810573 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hprgt" Dec 09 08:46:22 crc kubenswrapper[4786]: I1209 08:46:22.877485 
4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-qt47g" podStartSLOduration=125.877458636 podStartE2EDuration="2m5.877458636s" podCreationTimestamp="2025-12-09 08:44:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:46:22.872928866 +0000 UTC m=+148.756550102" watchObservedRunningTime="2025-12-09 08:46:22.877458636 +0000 UTC m=+148.761079862" Dec 09 08:46:22 crc kubenswrapper[4786]: I1209 08:46:22.896748 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:22 crc kubenswrapper[4786]: E1209 08:46:22.897237 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:23.397215426 +0000 UTC m=+149.280836652 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:22 crc kubenswrapper[4786]: I1209 08:46:22.999410 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:23 crc kubenswrapper[4786]: E1209 08:46:23.000161 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 08:46:23.500138604 +0000 UTC m=+149.383759840 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:23 crc kubenswrapper[4786]: I1209 08:46:23.102377 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:23 crc kubenswrapper[4786]: E1209 08:46:23.102988 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:23.60296491 +0000 UTC m=+149.486586136 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:23 crc kubenswrapper[4786]: I1209 08:46:23.204458 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:23 crc kubenswrapper[4786]: E1209 08:46:23.205008 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 08:46:23.704978891 +0000 UTC m=+149.588600117 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:23 crc kubenswrapper[4786]: I1209 08:46:23.306662 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:23 crc kubenswrapper[4786]: E1209 08:46:23.306910 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:23.806847566 +0000 UTC m=+149.690468802 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:23 crc kubenswrapper[4786]: I1209 08:46:23.307103 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:23 crc kubenswrapper[4786]: E1209 08:46:23.307824 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 08:46:23.807815056 +0000 UTC m=+149.691436282 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:23 crc kubenswrapper[4786]: I1209 08:46:23.322299 4786 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-2zrxb container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Dec 09 08:46:23 crc kubenswrapper[4786]: I1209 08:46:23.322396 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2zrxb" podUID="89dc04b8-a46f-4df1-afde-f3d4d4b169ef" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Dec 09 08:46:23 crc kubenswrapper[4786]: I1209 08:46:23.322402 4786 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-2zrxb container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Dec 09 08:46:23 crc kubenswrapper[4786]: I1209 08:46:23.322547 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2zrxb" podUID="89dc04b8-a46f-4df1-afde-f3d4d4b169ef" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: 
connection refused" Dec 09 08:46:23 crc kubenswrapper[4786]: I1209 08:46:23.408055 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:23 crc kubenswrapper[4786]: E1209 08:46:23.408336 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:23.908296349 +0000 UTC m=+149.791917585 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:23 crc kubenswrapper[4786]: I1209 08:46:23.409014 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:23 crc kubenswrapper[4786]: E1209 08:46:23.409616 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-12-09 08:46:23.90960614 +0000 UTC m=+149.793227366 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:23 crc kubenswrapper[4786]: I1209 08:46:23.510045 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:23 crc kubenswrapper[4786]: E1209 08:46:23.510314 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:24.010270868 +0000 UTC m=+149.893892094 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:23 crc kubenswrapper[4786]: I1209 08:46:23.510563 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:23 crc kubenswrapper[4786]: E1209 08:46:23.510949 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 08:46:24.010933198 +0000 UTC m=+149.894554424 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:23 crc kubenswrapper[4786]: I1209 08:46:23.592501 4786 patch_prober.go:28] interesting pod/router-default-5444994796-wp4v8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 08:46:23 crc kubenswrapper[4786]: [-]has-synced failed: reason withheld Dec 09 08:46:23 crc kubenswrapper[4786]: [+]process-running ok Dec 09 08:46:23 crc kubenswrapper[4786]: healthz check failed Dec 09 08:46:23 crc kubenswrapper[4786]: I1209 08:46:23.592636 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wp4v8" podUID="ee356cb2-0c94-4402-be8c-e6895f39de08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 08:46:23 crc kubenswrapper[4786]: I1209 08:46:23.611993 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:23 crc kubenswrapper[4786]: E1209 08:46:23.612372 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-09 08:46:24.112351031 +0000 UTC m=+149.995972267 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:23 crc kubenswrapper[4786]: I1209 08:46:23.666447 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hgb5b" event={"ID":"891665dd-9904-4246-86c8-fabead4c8606","Type":"ContainerStarted","Data":"e528b62cff3a38145118e3f167cee74cf8f465dc1b36e2471e31348633ed2b14"} Dec 09 08:46:23 crc kubenswrapper[4786]: I1209 08:46:23.667513 4786 generic.go:334] "Generic (PLEG): container finished" podID="924456b2-4aa3-4e7c-8d80-667783b96551" containerID="5fb10c980b3f78c7c9d6d2b633d6b3796cc52a3943c5b0fd00aed9b7b4c0cec3" exitCode=0 Dec 09 08:46:23 crc kubenswrapper[4786]: I1209 08:46:23.667779 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421165-4fkk9" event={"ID":"924456b2-4aa3-4e7c-8d80-667783b96551","Type":"ContainerDied","Data":"5fb10c980b3f78c7c9d6d2b633d6b3796cc52a3943c5b0fd00aed9b7b4c0cec3"} Dec 09 08:46:23 crc kubenswrapper[4786]: I1209 08:46:23.713630 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:23 crc kubenswrapper[4786]: E1209 08:46:23.716718 
4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 08:46:24.216700533 +0000 UTC m=+150.100321759 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:23 crc kubenswrapper[4786]: I1209 08:46:23.760979 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-qt47g" Dec 09 08:46:23 crc kubenswrapper[4786]: I1209 08:46:23.761313 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-qt47g" Dec 09 08:46:23 crc kubenswrapper[4786]: I1209 08:46:23.763238 4786 patch_prober.go:28] interesting pod/downloads-7954f5f757-6jxs9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Dec 09 08:46:23 crc kubenswrapper[4786]: I1209 08:46:23.763294 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6jxs9" podUID="4364d199-81a4-4500-990d-9f2bcdc66186" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" Dec 09 08:46:23 crc kubenswrapper[4786]: I1209 08:46:23.763711 4786 patch_prober.go:28] interesting pod/downloads-7954f5f757-6jxs9 container/download-server 
namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Dec 09 08:46:23 crc kubenswrapper[4786]: I1209 08:46:23.763775 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-6jxs9" podUID="4364d199-81a4-4500-990d-9f2bcdc66186" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" Dec 09 08:46:23 crc kubenswrapper[4786]: I1209 08:46:23.815738 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:23 crc kubenswrapper[4786]: E1209 08:46:23.815848 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:24.315827824 +0000 UTC m=+150.199449050 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:23 crc kubenswrapper[4786]: I1209 08:46:23.816241 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:23 crc kubenswrapper[4786]: E1209 08:46:23.816850 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 08:46:24.316823875 +0000 UTC m=+150.200445101 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:23 crc kubenswrapper[4786]: I1209 08:46:23.917530 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:23 crc kubenswrapper[4786]: E1209 08:46:23.917814 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:24.417753352 +0000 UTC m=+150.301374578 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:23 crc kubenswrapper[4786]: I1209 08:46:23.918377 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:23 crc kubenswrapper[4786]: E1209 08:46:23.918956 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 08:46:24.418927968 +0000 UTC m=+150.302549194 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.019161 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:24 crc kubenswrapper[4786]: E1209 08:46:24.019520 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:24.519503304 +0000 UTC m=+150.403124530 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.120265 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:24 crc kubenswrapper[4786]: E1209 08:46:24.120946 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 08:46:24.620915746 +0000 UTC m=+150.504536972 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.221916 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:24 crc kubenswrapper[4786]: E1209 08:46:24.222216 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-09 08:46:24.722164583 +0000 UTC m=+150.605785809 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.224259 4786 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.314439 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-g4qx2"] Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.315890 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g4qx2" Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.322838 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.323806 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a4abbd7-999e-4b15-bfc1-a93939734b36-catalog-content\") pod \"certified-operators-g4qx2\" (UID: \"7a4abbd7-999e-4b15-bfc1-a93939734b36\") " pod="openshift-marketplace/certified-operators-g4qx2" Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.323890 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" 
(UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.323918 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a4abbd7-999e-4b15-bfc1-a93939734b36-utilities\") pod \"certified-operators-g4qx2\" (UID: \"7a4abbd7-999e-4b15-bfc1-a93939734b36\") " pod="openshift-marketplace/certified-operators-g4qx2" Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.323958 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxbz7\" (UniqueName: \"kubernetes.io/projected/7a4abbd7-999e-4b15-bfc1-a93939734b36-kube-api-access-rxbz7\") pod \"certified-operators-g4qx2\" (UID: \"7a4abbd7-999e-4b15-bfc1-a93939734b36\") " pod="openshift-marketplace/certified-operators-g4qx2" Dec 09 08:46:24 crc kubenswrapper[4786]: E1209 08:46:24.324249 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-09 08:46:24.824230264 +0000 UTC m=+150.707851700 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-q5r5p" (UID: "742a103a-06e2-4d52-8c04-54681052838d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.329860 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g4qx2"] Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.345791 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-sgqjs" Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.345899 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-sgqjs" Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.347960 4786 patch_prober.go:28] interesting pod/console-f9d7485db-sgqjs container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.348048 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-sgqjs" podUID="bbd9538f-43ff-4c20-80ab-dcf783b7a558" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.413832 4786 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-09T08:46:24.224299329Z","Handler":null,"Name":""} Dec 09 08:46:24 
crc kubenswrapper[4786]: I1209 08:46:24.424927 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.425036 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a4abbd7-999e-4b15-bfc1-a93939734b36-catalog-content\") pod \"certified-operators-g4qx2\" (UID: \"7a4abbd7-999e-4b15-bfc1-a93939734b36\") " pod="openshift-marketplace/certified-operators-g4qx2" Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.425123 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a4abbd7-999e-4b15-bfc1-a93939734b36-utilities\") pod \"certified-operators-g4qx2\" (UID: \"7a4abbd7-999e-4b15-bfc1-a93939734b36\") " pod="openshift-marketplace/certified-operators-g4qx2" Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.425170 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxbz7\" (UniqueName: \"kubernetes.io/projected/7a4abbd7-999e-4b15-bfc1-a93939734b36-kube-api-access-rxbz7\") pod \"certified-operators-g4qx2\" (UID: \"7a4abbd7-999e-4b15-bfc1-a93939734b36\") " pod="openshift-marketplace/certified-operators-g4qx2" Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.426250 4786 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.426302 4786 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner 
at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.427213 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a4abbd7-999e-4b15-bfc1-a93939734b36-catalog-content\") pod \"certified-operators-g4qx2\" (UID: \"7a4abbd7-999e-4b15-bfc1-a93939734b36\") " pod="openshift-marketplace/certified-operators-g4qx2" Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.428209 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a4abbd7-999e-4b15-bfc1-a93939734b36-utilities\") pod \"certified-operators-g4qx2\" (UID: \"7a4abbd7-999e-4b15-bfc1-a93939734b36\") " pod="openshift-marketplace/certified-operators-g4qx2" Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.451057 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxbz7\" (UniqueName: \"kubernetes.io/projected/7a4abbd7-999e-4b15-bfc1-a93939734b36-kube-api-access-rxbz7\") pod \"certified-operators-g4qx2\" (UID: \"7a4abbd7-999e-4b15-bfc1-a93939734b36\") " pod="openshift-marketplace/certified-operators-g4qx2" Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.498756 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4rm92"] Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.500025 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4rm92" Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.504883 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.511168 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4rm92"] Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.524816 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.527028 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.585996 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-wp4v8" Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.593193 4786 patch_prober.go:28] interesting pod/router-default-5444994796-wp4v8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 08:46:24 crc kubenswrapper[4786]: [-]has-synced failed: reason withheld Dec 09 08:46:24 crc 
kubenswrapper[4786]: [+]process-running ok Dec 09 08:46:24 crc kubenswrapper[4786]: healthz check failed Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.593305 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wp4v8" podUID="ee356cb2-0c94-4402-be8c-e6895f39de08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.627961 4786 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.628008 4786 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.629573 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7388aae9-507a-42ff-84cb-9860de1f9f84-utilities\") pod \"community-operators-4rm92\" (UID: \"7388aae9-507a-42ff-84cb-9860de1f9f84\") " pod="openshift-marketplace/community-operators-4rm92" Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.629759 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7388aae9-507a-42ff-84cb-9860de1f9f84-catalog-content\") pod \"community-operators-4rm92\" (UID: \"7388aae9-507a-42ff-84cb-9860de1f9f84\") 
" pod="openshift-marketplace/community-operators-4rm92" Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.629942 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rdd8\" (UniqueName: \"kubernetes.io/projected/7388aae9-507a-42ff-84cb-9860de1f9f84-kube-api-access-8rdd8\") pod \"community-operators-4rm92\" (UID: \"7388aae9-507a-42ff-84cb-9860de1f9f84\") " pod="openshift-marketplace/community-operators-4rm92" Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.631741 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g4qx2" Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.759126 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7388aae9-507a-42ff-84cb-9860de1f9f84-catalog-content\") pod \"community-operators-4rm92\" (UID: \"7388aae9-507a-42ff-84cb-9860de1f9f84\") " pod="openshift-marketplace/community-operators-4rm92" Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.759304 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rdd8\" (UniqueName: \"kubernetes.io/projected/7388aae9-507a-42ff-84cb-9860de1f9f84-kube-api-access-8rdd8\") pod \"community-operators-4rm92\" (UID: \"7388aae9-507a-42ff-84cb-9860de1f9f84\") " pod="openshift-marketplace/community-operators-4rm92" Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.759416 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7388aae9-507a-42ff-84cb-9860de1f9f84-utilities\") pod \"community-operators-4rm92\" (UID: \"7388aae9-507a-42ff-84cb-9860de1f9f84\") " pod="openshift-marketplace/community-operators-4rm92" Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.760208 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7388aae9-507a-42ff-84cb-9860de1f9f84-utilities\") pod \"community-operators-4rm92\" (UID: \"7388aae9-507a-42ff-84cb-9860de1f9f84\") " pod="openshift-marketplace/community-operators-4rm92" Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.760588 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7388aae9-507a-42ff-84cb-9860de1f9f84-catalog-content\") pod \"community-operators-4rm92\" (UID: \"7388aae9-507a-42ff-84cb-9860de1f9f84\") " pod="openshift-marketplace/community-operators-4rm92" Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.837647 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mtcjp"] Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.841092 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mtcjp" Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.842170 4786 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-thfng container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.842249 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-thfng" podUID="b128968a-caa6-46be-be15-79971a310e5c" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.842758 4786 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-thfng container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get 
\"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.842784 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-thfng" podUID="b128968a-caa6-46be-be15-79971a310e5c" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.853090 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mtcjp"] Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.878067 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hgb5b" event={"ID":"891665dd-9904-4246-86c8-fabead4c8606","Type":"ContainerStarted","Data":"5335dfbd338c0b85c614ac8a0aa015aff183bb7187fbed952768408a1167f2f6"} Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.878188 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-hgb5b" event={"ID":"891665dd-9904-4246-86c8-fabead4c8606","Type":"ContainerStarted","Data":"e49c748f67ee29d641266b2d3b601b8dc7f77a1505b163420c801aa6b9712ef4"} Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.922674 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvtx8\" (UniqueName: \"kubernetes.io/projected/2dbf4a5b-0892-4445-b9d0-972b3187c8ee-kube-api-access-jvtx8\") pod \"certified-operators-mtcjp\" (UID: \"2dbf4a5b-0892-4445-b9d0-972b3187c8ee\") " pod="openshift-marketplace/certified-operators-mtcjp" Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.922727 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/2dbf4a5b-0892-4445-b9d0-972b3187c8ee-utilities\") pod \"certified-operators-mtcjp\" (UID: \"2dbf4a5b-0892-4445-b9d0-972b3187c8ee\") " pod="openshift-marketplace/certified-operators-mtcjp" Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.922875 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dbf4a5b-0892-4445-b9d0-972b3187c8ee-catalog-content\") pod \"certified-operators-mtcjp\" (UID: \"2dbf4a5b-0892-4445-b9d0-972b3187c8ee\") " pod="openshift-marketplace/certified-operators-mtcjp" Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.927100 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-c2h4k"] Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.927442 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rdd8\" (UniqueName: \"kubernetes.io/projected/7388aae9-507a-42ff-84cb-9860de1f9f84-kube-api-access-8rdd8\") pod \"community-operators-4rm92\" (UID: \"7388aae9-507a-42ff-84cb-9860de1f9f84\") " pod="openshift-marketplace/community-operators-4rm92" Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.929072 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c2h4k" Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.975865 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c2h4k"] Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.980834 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-q5r5p\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:24 crc kubenswrapper[4786]: I1209 08:46:24.990027 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-hgb5b" podStartSLOduration=13.989995074 podStartE2EDuration="13.989995074s" podCreationTimestamp="2025-12-09 08:46:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:46:24.949974138 +0000 UTC m=+150.833595364" watchObservedRunningTime="2025-12-09 08:46:24.989995074 +0000 UTC m=+150.873616300" Dec 09 08:46:25 crc kubenswrapper[4786]: I1209 08:46:25.015791 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 08:46:25 crc kubenswrapper[4786]: I1209 08:46:25.015890 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Dec 09 08:46:25 crc kubenswrapper[4786]: I1209 08:46:25.025603 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvtx8\" (UniqueName: \"kubernetes.io/projected/2dbf4a5b-0892-4445-b9d0-972b3187c8ee-kube-api-access-jvtx8\") pod \"certified-operators-mtcjp\" (UID: \"2dbf4a5b-0892-4445-b9d0-972b3187c8ee\") " pod="openshift-marketplace/certified-operators-mtcjp" Dec 09 08:46:25 crc kubenswrapper[4786]: I1209 08:46:25.025738 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dbf4a5b-0892-4445-b9d0-972b3187c8ee-utilities\") pod \"certified-operators-mtcjp\" (UID: \"2dbf4a5b-0892-4445-b9d0-972b3187c8ee\") " pod="openshift-marketplace/certified-operators-mtcjp" Dec 09 08:46:25 crc kubenswrapper[4786]: I1209 08:46:25.025780 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/427c77e2-5f54-46b4-b2e5-ed6034c20d49-utilities\") pod \"community-operators-c2h4k\" (UID: \"427c77e2-5f54-46b4-b2e5-ed6034c20d49\") " pod="openshift-marketplace/community-operators-c2h4k" Dec 09 08:46:25 crc kubenswrapper[4786]: I1209 08:46:25.025827 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf9kc\" (UniqueName: \"kubernetes.io/projected/427c77e2-5f54-46b4-b2e5-ed6034c20d49-kube-api-access-tf9kc\") pod \"community-operators-c2h4k\" (UID: \"427c77e2-5f54-46b4-b2e5-ed6034c20d49\") " pod="openshift-marketplace/community-operators-c2h4k" Dec 09 08:46:25 crc kubenswrapper[4786]: I1209 08:46:25.025921 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/427c77e2-5f54-46b4-b2e5-ed6034c20d49-catalog-content\") pod \"community-operators-c2h4k\" (UID: 
\"427c77e2-5f54-46b4-b2e5-ed6034c20d49\") " pod="openshift-marketplace/community-operators-c2h4k" Dec 09 08:46:25 crc kubenswrapper[4786]: I1209 08:46:25.025959 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dbf4a5b-0892-4445-b9d0-972b3187c8ee-catalog-content\") pod \"certified-operators-mtcjp\" (UID: \"2dbf4a5b-0892-4445-b9d0-972b3187c8ee\") " pod="openshift-marketplace/certified-operators-mtcjp" Dec 09 08:46:25 crc kubenswrapper[4786]: I1209 08:46:25.032913 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dbf4a5b-0892-4445-b9d0-972b3187c8ee-utilities\") pod \"certified-operators-mtcjp\" (UID: \"2dbf4a5b-0892-4445-b9d0-972b3187c8ee\") " pod="openshift-marketplace/certified-operators-mtcjp" Dec 09 08:46:25 crc kubenswrapper[4786]: I1209 08:46:25.044614 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dbf4a5b-0892-4445-b9d0-972b3187c8ee-catalog-content\") pod \"certified-operators-mtcjp\" (UID: \"2dbf4a5b-0892-4445-b9d0-972b3187c8ee\") " pod="openshift-marketplace/certified-operators-mtcjp" Dec 09 08:46:25 crc kubenswrapper[4786]: I1209 08:46:25.135891 4786 patch_prober.go:28] interesting pod/apiserver-76f77b778f-qt47g container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 09 08:46:25 crc kubenswrapper[4786]: [+]log ok Dec 09 08:46:25 crc kubenswrapper[4786]: [+]etcd ok Dec 09 08:46:25 crc kubenswrapper[4786]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 09 08:46:25 crc kubenswrapper[4786]: [+]poststarthook/generic-apiserver-start-informers ok Dec 09 08:46:25 crc kubenswrapper[4786]: [+]poststarthook/max-in-flight-filter ok Dec 09 08:46:25 crc kubenswrapper[4786]: 
[+]poststarthook/storage-object-count-tracker-hook ok Dec 09 08:46:25 crc kubenswrapper[4786]: [+]poststarthook/image.openshift.io-apiserver-caches ok Dec 09 08:46:25 crc kubenswrapper[4786]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Dec 09 08:46:25 crc kubenswrapper[4786]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Dec 09 08:46:25 crc kubenswrapper[4786]: [+]poststarthook/project.openshift.io-projectcache ok Dec 09 08:46:25 crc kubenswrapper[4786]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Dec 09 08:46:25 crc kubenswrapper[4786]: [+]poststarthook/openshift.io-startinformers ok Dec 09 08:46:25 crc kubenswrapper[4786]: [+]poststarthook/openshift.io-restmapperupdater ok Dec 09 08:46:25 crc kubenswrapper[4786]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 09 08:46:25 crc kubenswrapper[4786]: livez check failed Dec 09 08:46:25 crc kubenswrapper[4786]: I1209 08:46:25.135978 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-qt47g" podUID="f011e81c-b463-4190-9ae5-73703f390ae8" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 08:46:25 crc kubenswrapper[4786]: I1209 08:46:25.137101 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:46:25 crc kubenswrapper[4786]: I1209 08:46:25.137129 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/427c77e2-5f54-46b4-b2e5-ed6034c20d49-utilities\") pod 
\"community-operators-c2h4k\" (UID: \"427c77e2-5f54-46b4-b2e5-ed6034c20d49\") " pod="openshift-marketplace/community-operators-c2h4k" Dec 09 08:46:25 crc kubenswrapper[4786]: I1209 08:46:25.137163 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tf9kc\" (UniqueName: \"kubernetes.io/projected/427c77e2-5f54-46b4-b2e5-ed6034c20d49-kube-api-access-tf9kc\") pod \"community-operators-c2h4k\" (UID: \"427c77e2-5f54-46b4-b2e5-ed6034c20d49\") " pod="openshift-marketplace/community-operators-c2h4k" Dec 09 08:46:25 crc kubenswrapper[4786]: I1209 08:46:25.137189 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:46:25 crc kubenswrapper[4786]: I1209 08:46:25.137227 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/427c77e2-5f54-46b4-b2e5-ed6034c20d49-catalog-content\") pod \"community-operators-c2h4k\" (UID: \"427c77e2-5f54-46b4-b2e5-ed6034c20d49\") " pod="openshift-marketplace/community-operators-c2h4k" Dec 09 08:46:25 crc kubenswrapper[4786]: I1209 08:46:25.139134 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/427c77e2-5f54-46b4-b2e5-ed6034c20d49-catalog-content\") pod \"community-operators-c2h4k\" (UID: \"427c77e2-5f54-46b4-b2e5-ed6034c20d49\") " pod="openshift-marketplace/community-operators-c2h4k" Dec 09 08:46:25 crc kubenswrapper[4786]: I1209 08:46:25.140231 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/427c77e2-5f54-46b4-b2e5-ed6034c20d49-utilities\") pod 
\"community-operators-c2h4k\" (UID: \"427c77e2-5f54-46b4-b2e5-ed6034c20d49\") " pod="openshift-marketplace/community-operators-c2h4k" Dec 09 08:46:25 crc kubenswrapper[4786]: I1209 08:46:25.145511 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:46:25 crc kubenswrapper[4786]: I1209 08:46:25.153245 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:46:25 crc kubenswrapper[4786]: I1209 08:46:25.171898 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4rm92" Dec 09 08:46:25 crc kubenswrapper[4786]: I1209 08:46:25.172700 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvtx8\" (UniqueName: \"kubernetes.io/projected/2dbf4a5b-0892-4445-b9d0-972b3187c8ee-kube-api-access-jvtx8\") pod \"certified-operators-mtcjp\" (UID: \"2dbf4a5b-0892-4445-b9d0-972b3187c8ee\") " pod="openshift-marketplace/certified-operators-mtcjp" Dec 09 08:46:25 crc kubenswrapper[4786]: I1209 08:46:25.183161 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf9kc\" (UniqueName: \"kubernetes.io/projected/427c77e2-5f54-46b4-b2e5-ed6034c20d49-kube-api-access-tf9kc\") pod \"community-operators-c2h4k\" (UID: \"427c77e2-5f54-46b4-b2e5-ed6034c20d49\") " pod="openshift-marketplace/community-operators-c2h4k" Dec 09 08:46:25 crc kubenswrapper[4786]: I1209 08:46:25.239532 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Dec 09 08:46:25 crc kubenswrapper[4786]: I1209 08:46:25.240123 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 08:46:25 crc kubenswrapper[4786]: I1209 08:46:25.240219 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 
08:46:25 crc kubenswrapper[4786]: I1209 08:46:25.264132 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 08:46:25 crc kubenswrapper[4786]: I1209 08:46:25.270858 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mtcjp" Dec 09 08:46:25 crc kubenswrapper[4786]: I1209 08:46:25.287341 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:25 crc kubenswrapper[4786]: I1209 08:46:25.294416 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 08:46:25 crc kubenswrapper[4786]: I1209 08:46:25.305976 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 09 08:46:25 crc kubenswrapper[4786]: I1209 08:46:25.326031 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c2h4k" Dec 09 08:46:25 crc kubenswrapper[4786]: I1209 08:46:25.328232 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 08:46:25 crc kubenswrapper[4786]: I1209 08:46:25.403130 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 09 08:46:25 crc kubenswrapper[4786]: I1209 08:46:25.472448 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 09 08:46:25 crc kubenswrapper[4786]: I1209 08:46:25.473418 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 09 08:46:25 crc kubenswrapper[4786]: I1209 08:46:25.502833 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Dec 09 08:46:25 crc kubenswrapper[4786]: I1209 08:46:25.503305 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Dec 09 08:46:25 crc kubenswrapper[4786]: I1209 08:46:25.523953 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 09 08:46:25 crc kubenswrapper[4786]: I1209 08:46:25.538231 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g4qx2"] Dec 09 08:46:25 crc kubenswrapper[4786]: I1209 08:46:25.590637 4786 patch_prober.go:28] interesting pod/router-default-5444994796-wp4v8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 08:46:25 crc kubenswrapper[4786]: [-]has-synced failed: reason withheld Dec 09 08:46:25 crc kubenswrapper[4786]: [+]process-running ok Dec 09 08:46:25 crc kubenswrapper[4786]: healthz check failed Dec 09 08:46:25 crc kubenswrapper[4786]: I1209 08:46:25.590709 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wp4v8" podUID="ee356cb2-0c94-4402-be8c-e6895f39de08" containerName="router" probeResult="failure" 
output="HTTP probe failed with statuscode: 500"
Dec 09 08:46:25 crc kubenswrapper[4786]: I1209 08:46:25.656418 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/847d588a-e346-406f-b901-c43cbe2bbf2d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"847d588a-e346-406f-b901-c43cbe2bbf2d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 09 08:46:25 crc kubenswrapper[4786]: I1209 08:46:25.656965 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/847d588a-e346-406f-b901-c43cbe2bbf2d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"847d588a-e346-406f-b901-c43cbe2bbf2d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 09 08:46:25 crc kubenswrapper[4786]: I1209 08:46:25.815995 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/847d588a-e346-406f-b901-c43cbe2bbf2d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"847d588a-e346-406f-b901-c43cbe2bbf2d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 09 08:46:25 crc kubenswrapper[4786]: I1209 08:46:25.816064 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/847d588a-e346-406f-b901-c43cbe2bbf2d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"847d588a-e346-406f-b901-c43cbe2bbf2d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 09 08:46:25 crc kubenswrapper[4786]: I1209 08:46:25.816187 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/847d588a-e346-406f-b901-c43cbe2bbf2d-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"847d588a-e346-406f-b901-c43cbe2bbf2d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 09 08:46:25 crc kubenswrapper[4786]: I1209 08:46:25.890435 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/847d588a-e346-406f-b901-c43cbe2bbf2d-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"847d588a-e346-406f-b901-c43cbe2bbf2d\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 09 08:46:25 crc kubenswrapper[4786]: I1209 08:46:25.924316 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g4qx2" event={"ID":"7a4abbd7-999e-4b15-bfc1-a93939734b36","Type":"ContainerStarted","Data":"b98f0118bdc4f114e2c957c0b38c587c154ec50d92539e1303874da6d43c24a3"}
Dec 09 08:46:25 crc kubenswrapper[4786]: I1209 08:46:25.959402 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421165-4fkk9"
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.023606 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8g4rk\" (UniqueName: \"kubernetes.io/projected/924456b2-4aa3-4e7c-8d80-667783b96551-kube-api-access-8g4rk\") pod \"924456b2-4aa3-4e7c-8d80-667783b96551\" (UID: \"924456b2-4aa3-4e7c-8d80-667783b96551\") "
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.023768 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/924456b2-4aa3-4e7c-8d80-667783b96551-config-volume\") pod \"924456b2-4aa3-4e7c-8d80-667783b96551\" (UID: \"924456b2-4aa3-4e7c-8d80-667783b96551\") "
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.023843 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/924456b2-4aa3-4e7c-8d80-667783b96551-secret-volume\") pod \"924456b2-4aa3-4e7c-8d80-667783b96551\" (UID: \"924456b2-4aa3-4e7c-8d80-667783b96551\") "
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.028160 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/924456b2-4aa3-4e7c-8d80-667783b96551-config-volume" (OuterVolumeSpecName: "config-volume") pod "924456b2-4aa3-4e7c-8d80-667783b96551" (UID: "924456b2-4aa3-4e7c-8d80-667783b96551"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.049627 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4rm92"]
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.053326 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/924456b2-4aa3-4e7c-8d80-667783b96551-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "924456b2-4aa3-4e7c-8d80-667783b96551" (UID: "924456b2-4aa3-4e7c-8d80-667783b96551"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.059646 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/924456b2-4aa3-4e7c-8d80-667783b96551-kube-api-access-8g4rk" (OuterVolumeSpecName: "kube-api-access-8g4rk") pod "924456b2-4aa3-4e7c-8d80-667783b96551" (UID: "924456b2-4aa3-4e7c-8d80-667783b96551"). InnerVolumeSpecName "kube-api-access-8g4rk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.137810 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8g4rk\" (UniqueName: \"kubernetes.io/projected/924456b2-4aa3-4e7c-8d80-667783b96551-kube-api-access-8g4rk\") on node \"crc\" DevicePath \"\""
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.138326 4786 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/924456b2-4aa3-4e7c-8d80-667783b96551-config-volume\") on node \"crc\" DevicePath \"\""
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.138338 4786 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/924456b2-4aa3-4e7c-8d80-667783b96551-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.171868 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.299691 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9274k"]
Dec 09 08:46:26 crc kubenswrapper[4786]: E1209 08:46:26.300137 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="924456b2-4aa3-4e7c-8d80-667783b96551" containerName="collect-profiles"
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.300239 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="924456b2-4aa3-4e7c-8d80-667783b96551" containerName="collect-profiles"
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.300412 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="924456b2-4aa3-4e7c-8d80-667783b96551" containerName="collect-profiles"
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.301206 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9274k"
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.314614 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.335849 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-2zrxb"
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.363856 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/621ad974-644c-45fc-a6ba-045ca1f9e033-utilities\") pod \"redhat-marketplace-9274k\" (UID: \"621ad974-644c-45fc-a6ba-045ca1f9e033\") " pod="openshift-marketplace/redhat-marketplace-9274k"
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.363959 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/621ad974-644c-45fc-a6ba-045ca1f9e033-catalog-content\") pod \"redhat-marketplace-9274k\" (UID: \"621ad974-644c-45fc-a6ba-045ca1f9e033\") " pod="openshift-marketplace/redhat-marketplace-9274k"
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.363984 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4jdq\" (UniqueName: \"kubernetes.io/projected/621ad974-644c-45fc-a6ba-045ca1f9e033-kube-api-access-l4jdq\") pod \"redhat-marketplace-9274k\" (UID: \"621ad974-644c-45fc-a6ba-045ca1f9e033\") " pod="openshift-marketplace/redhat-marketplace-9274k"
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.398758 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-c2h4k"]
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.416623 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9274k"]
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.464820 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4jdq\" (UniqueName: \"kubernetes.io/projected/621ad974-644c-45fc-a6ba-045ca1f9e033-kube-api-access-l4jdq\") pod \"redhat-marketplace-9274k\" (UID: \"621ad974-644c-45fc-a6ba-045ca1f9e033\") " pod="openshift-marketplace/redhat-marketplace-9274k"
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.465080 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/621ad974-644c-45fc-a6ba-045ca1f9e033-utilities\") pod \"redhat-marketplace-9274k\" (UID: \"621ad974-644c-45fc-a6ba-045ca1f9e033\") " pod="openshift-marketplace/redhat-marketplace-9274k"
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.465271 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/621ad974-644c-45fc-a6ba-045ca1f9e033-catalog-content\") pod \"redhat-marketplace-9274k\" (UID: \"621ad974-644c-45fc-a6ba-045ca1f9e033\") " pod="openshift-marketplace/redhat-marketplace-9274k"
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.465921 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/621ad974-644c-45fc-a6ba-045ca1f9e033-catalog-content\") pod \"redhat-marketplace-9274k\" (UID: \"621ad974-644c-45fc-a6ba-045ca1f9e033\") " pod="openshift-marketplace/redhat-marketplace-9274k"
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.466702 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/621ad974-644c-45fc-a6ba-045ca1f9e033-utilities\") pod \"redhat-marketplace-9274k\" (UID: \"621ad974-644c-45fc-a6ba-045ca1f9e033\") " pod="openshift-marketplace/redhat-marketplace-9274k"
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.487537 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4jdq\" (UniqueName: \"kubernetes.io/projected/621ad974-644c-45fc-a6ba-045ca1f9e033-kube-api-access-l4jdq\") pod \"redhat-marketplace-9274k\" (UID: \"621ad974-644c-45fc-a6ba-045ca1f9e033\") " pod="openshift-marketplace/redhat-marketplace-9274k"
Dec 09 08:46:26 crc kubenswrapper[4786]: E1209 08:46:26.497806 4786 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7388aae9_507a_42ff_84cb_9860de1f9f84.slice/crio-conmon-506843de98d41e3592dacc9fb8150f9bdb939764e2fafb61aa299292cbe51c15.scope\": RecentStats: unable to find data in memory cache]"
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.513889 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mtcjp"]
Dec 09 08:46:26 crc kubenswrapper[4786]: W1209 08:46:26.535494 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2dbf4a5b_0892_4445_b9d0_972b3187c8ee.slice/crio-e6a75af7a5a1dd0bcb3c9bb6ed402b1b2665ffda5862db6425665c5774959092 WatchSource:0}: Error finding container e6a75af7a5a1dd0bcb3c9bb6ed402b1b2665ffda5862db6425665c5774959092: Status 404 returned error can't find the container with id e6a75af7a5a1dd0bcb3c9bb6ed402b1b2665ffda5862db6425665c5774959092
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.573948 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-q5r5p"]
Dec 09 08:46:26 crc kubenswrapper[4786]: W1209 08:46:26.584037 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-42065c62e0e2b35a03153a16bad0d9c1dd0f1971b1f1a8368f69290b42112a32 WatchSource:0}: Error finding container 42065c62e0e2b35a03153a16bad0d9c1dd0f1971b1f1a8368f69290b42112a32: Status 404 returned error can't find the container with id 42065c62e0e2b35a03153a16bad0d9c1dd0f1971b1f1a8368f69290b42112a32
Dec 09 08:46:26 crc kubenswrapper[4786]: W1209 08:46:26.587102 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod742a103a_06e2_4d52_8c04_54681052838d.slice/crio-768ed804e8f5ea0d7af8a7baad140ef9eb217bc69c2a5d8535c981e76a99ba13 WatchSource:0}: Error finding container 768ed804e8f5ea0d7af8a7baad140ef9eb217bc69c2a5d8535c981e76a99ba13: Status 404 returned error can't find the container with id 768ed804e8f5ea0d7af8a7baad140ef9eb217bc69c2a5d8535c981e76a99ba13
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.590907 4786 patch_prober.go:28] interesting pod/router-default-5444994796-wp4v8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 09 08:46:26 crc kubenswrapper[4786]: [-]has-synced failed: reason withheld
Dec 09 08:46:26 crc kubenswrapper[4786]: [+]process-running ok
Dec 09 08:46:26 crc kubenswrapper[4786]: healthz check failed
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.590958 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wp4v8" podUID="ee356cb2-0c94-4402-be8c-e6895f39de08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.685696 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pfbdd"]
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.688999 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pfbdd"
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.694242 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pfbdd"]
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.701126 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9274k"
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.770333 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qfr8\" (UniqueName: \"kubernetes.io/projected/37a8cb2b-b5e5-408c-9bb9-a26f96805bf3-kube-api-access-7qfr8\") pod \"redhat-marketplace-pfbdd\" (UID: \"37a8cb2b-b5e5-408c-9bb9-a26f96805bf3\") " pod="openshift-marketplace/redhat-marketplace-pfbdd"
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.770460 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37a8cb2b-b5e5-408c-9bb9-a26f96805bf3-catalog-content\") pod \"redhat-marketplace-pfbdd\" (UID: \"37a8cb2b-b5e5-408c-9bb9-a26f96805bf3\") " pod="openshift-marketplace/redhat-marketplace-pfbdd"
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.771285 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37a8cb2b-b5e5-408c-9bb9-a26f96805bf3-utilities\") pod \"redhat-marketplace-pfbdd\" (UID: \"37a8cb2b-b5e5-408c-9bb9-a26f96805bf3\") " pod="openshift-marketplace/redhat-marketplace-pfbdd"
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.872636 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37a8cb2b-b5e5-408c-9bb9-a26f96805bf3-utilities\") pod \"redhat-marketplace-pfbdd\" (UID: \"37a8cb2b-b5e5-408c-9bb9-a26f96805bf3\") " pod="openshift-marketplace/redhat-marketplace-pfbdd"
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.872729 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qfr8\" (UniqueName: \"kubernetes.io/projected/37a8cb2b-b5e5-408c-9bb9-a26f96805bf3-kube-api-access-7qfr8\") pod \"redhat-marketplace-pfbdd\" (UID: \"37a8cb2b-b5e5-408c-9bb9-a26f96805bf3\") " pod="openshift-marketplace/redhat-marketplace-pfbdd"
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.872777 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37a8cb2b-b5e5-408c-9bb9-a26f96805bf3-catalog-content\") pod \"redhat-marketplace-pfbdd\" (UID: \"37a8cb2b-b5e5-408c-9bb9-a26f96805bf3\") " pod="openshift-marketplace/redhat-marketplace-pfbdd"
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.873452 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37a8cb2b-b5e5-408c-9bb9-a26f96805bf3-catalog-content\") pod \"redhat-marketplace-pfbdd\" (UID: \"37a8cb2b-b5e5-408c-9bb9-a26f96805bf3\") " pod="openshift-marketplace/redhat-marketplace-pfbdd"
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.873787 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37a8cb2b-b5e5-408c-9bb9-a26f96805bf3-utilities\") pod \"redhat-marketplace-pfbdd\" (UID: \"37a8cb2b-b5e5-408c-9bb9-a26f96805bf3\") " pod="openshift-marketplace/redhat-marketplace-pfbdd"
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.911001 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qfr8\" (UniqueName: \"kubernetes.io/projected/37a8cb2b-b5e5-408c-9bb9-a26f96805bf3-kube-api-access-7qfr8\") pod \"redhat-marketplace-pfbdd\" (UID: \"37a8cb2b-b5e5-408c-9bb9-a26f96805bf3\") " pod="openshift-marketplace/redhat-marketplace-pfbdd"
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.928568 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"42065c62e0e2b35a03153a16bad0d9c1dd0f1971b1f1a8368f69290b42112a32"}
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.929914 4786 generic.go:334] "Generic (PLEG): container finished" podID="427c77e2-5f54-46b4-b2e5-ed6034c20d49" containerID="20affb1f3cefdc9738cd7182997aa73ed3637b2f841cd4cab3b1faca85e9cb6f" exitCode=0
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.929960 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c2h4k" event={"ID":"427c77e2-5f54-46b4-b2e5-ed6034c20d49","Type":"ContainerDied","Data":"20affb1f3cefdc9738cd7182997aa73ed3637b2f841cd4cab3b1faca85e9cb6f"}
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.929986 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c2h4k" event={"ID":"427c77e2-5f54-46b4-b2e5-ed6034c20d49","Type":"ContainerStarted","Data":"9cfe51389b7ea2aac50adbfc92ee9c81fa769ea9a780056208ab3d1376b6c404"}
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.932542 4786 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.935563 4786 generic.go:334] "Generic (PLEG): container finished" podID="7a4abbd7-999e-4b15-bfc1-a93939734b36" containerID="21ba71531a7e71faa901b89ec16c8c786b28b3d82b013ba7911b713bd96f1dea" exitCode=0
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.935684 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g4qx2" event={"ID":"7a4abbd7-999e-4b15-bfc1-a93939734b36","Type":"ContainerDied","Data":"21ba71531a7e71faa901b89ec16c8c786b28b3d82b013ba7911b713bd96f1dea"}
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.950122 4786 generic.go:334] "Generic (PLEG): container finished" podID="7388aae9-507a-42ff-84cb-9860de1f9f84" containerID="506843de98d41e3592dacc9fb8150f9bdb939764e2fafb61aa299292cbe51c15" exitCode=0
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.951060 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4rm92" event={"ID":"7388aae9-507a-42ff-84cb-9860de1f9f84","Type":"ContainerDied","Data":"506843de98d41e3592dacc9fb8150f9bdb939764e2fafb61aa299292cbe51c15"}
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.951095 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4rm92" event={"ID":"7388aae9-507a-42ff-84cb-9860de1f9f84","Type":"ContainerStarted","Data":"fa6bf785304ca48ef9506a9500062db48982085efe2776e5713787e41d596d65"}
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.953974 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"]
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.982376 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"2042ff5672e292da8804de8462b53c734db13d9f7c50e6ca6aca85e4fe0a5c94"}
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.982466 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"45dfcf9c06db14b9ec973ae143f3686a7c2d33094e3d2b3fb4f4730b7cc34825"}
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.984808 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"11f6c1e5a1b0e7dbb6827046016adbf72867da11bbc1bedb1780d821c178a515"}
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.985767 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mtcjp" event={"ID":"2dbf4a5b-0892-4445-b9d0-972b3187c8ee","Type":"ContainerStarted","Data":"e6a75af7a5a1dd0bcb3c9bb6ed402b1b2665ffda5862db6425665c5774959092"}
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.986916 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" event={"ID":"742a103a-06e2-4d52-8c04-54681052838d","Type":"ContainerStarted","Data":"768ed804e8f5ea0d7af8a7baad140ef9eb217bc69c2a5d8535c981e76a99ba13"}
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.988403 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421165-4fkk9" event={"ID":"924456b2-4aa3-4e7c-8d80-667783b96551","Type":"ContainerDied","Data":"6f526d42a498c46fb8a651ad406832cb93d134fccc4e2b5801e68a9435838842"}
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.988489 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f526d42a498c46fb8a651ad406832cb93d134fccc4e2b5801e68a9435838842"
Dec 09 08:46:26 crc kubenswrapper[4786]: I1209 08:46:26.988600 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421165-4fkk9"
Dec 09 08:46:27 crc kubenswrapper[4786]: W1209 08:46:27.015039 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod847d588a_e346_406f_b901_c43cbe2bbf2d.slice/crio-40bb23accb03e2601bde512eb65e86de60ede02582247ca5b68716317511c972 WatchSource:0}: Error finding container 40bb23accb03e2601bde512eb65e86de60ede02582247ca5b68716317511c972: Status 404 returned error can't find the container with id 40bb23accb03e2601bde512eb65e86de60ede02582247ca5b68716317511c972
Dec 09 08:46:27 crc kubenswrapper[4786]: I1209 08:46:27.396601 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9274k"]
Dec 09 08:46:27 crc kubenswrapper[4786]: I1209 08:46:27.475804 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pfbdd"
Dec 09 08:46:27 crc kubenswrapper[4786]: I1209 08:46:27.491341 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-d9b7s"]
Dec 09 08:46:27 crc kubenswrapper[4786]: I1209 08:46:27.492852 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d9b7s"
Dec 09 08:46:27 crc kubenswrapper[4786]: I1209 08:46:27.497297 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Dec 09 08:46:27 crc kubenswrapper[4786]: I1209 08:46:27.513962 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d9b7s"]
Dec 09 08:46:27 crc kubenswrapper[4786]: I1209 08:46:27.590054 4786 patch_prober.go:28] interesting pod/router-default-5444994796-wp4v8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 09 08:46:27 crc kubenswrapper[4786]: [-]has-synced failed: reason withheld
Dec 09 08:46:27 crc kubenswrapper[4786]: [+]process-running ok
Dec 09 08:46:27 crc kubenswrapper[4786]: healthz check failed
Dec 09 08:46:27 crc kubenswrapper[4786]: I1209 08:46:27.590135 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wp4v8" podUID="ee356cb2-0c94-4402-be8c-e6895f39de08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 09 08:46:27 crc kubenswrapper[4786]: I1209 08:46:27.660937 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d9968ce-71c9-4b6d-912f-5f03be10945d-utilities\") pod \"redhat-operators-d9b7s\" (UID: \"7d9968ce-71c9-4b6d-912f-5f03be10945d\") " pod="openshift-marketplace/redhat-operators-d9b7s"
Dec 09 08:46:27 crc kubenswrapper[4786]: I1209 08:46:27.660992 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2vhs\" (UniqueName: \"kubernetes.io/projected/7d9968ce-71c9-4b6d-912f-5f03be10945d-kube-api-access-n2vhs\") pod \"redhat-operators-d9b7s\" (UID: \"7d9968ce-71c9-4b6d-912f-5f03be10945d\") " pod="openshift-marketplace/redhat-operators-d9b7s"
Dec 09 08:46:27 crc kubenswrapper[4786]: I1209 08:46:27.661053 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d9968ce-71c9-4b6d-912f-5f03be10945d-catalog-content\") pod \"redhat-operators-d9b7s\" (UID: \"7d9968ce-71c9-4b6d-912f-5f03be10945d\") " pod="openshift-marketplace/redhat-operators-d9b7s"
Dec 09 08:46:27 crc kubenswrapper[4786]: I1209 08:46:27.762248 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d9968ce-71c9-4b6d-912f-5f03be10945d-catalog-content\") pod \"redhat-operators-d9b7s\" (UID: \"7d9968ce-71c9-4b6d-912f-5f03be10945d\") " pod="openshift-marketplace/redhat-operators-d9b7s"
Dec 09 08:46:27 crc kubenswrapper[4786]: I1209 08:46:27.762780 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d9968ce-71c9-4b6d-912f-5f03be10945d-utilities\") pod \"redhat-operators-d9b7s\" (UID: \"7d9968ce-71c9-4b6d-912f-5f03be10945d\") " pod="openshift-marketplace/redhat-operators-d9b7s"
Dec 09 08:46:27 crc kubenswrapper[4786]: I1209 08:46:27.762804 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2vhs\" (UniqueName: \"kubernetes.io/projected/7d9968ce-71c9-4b6d-912f-5f03be10945d-kube-api-access-n2vhs\") pod \"redhat-operators-d9b7s\" (UID: \"7d9968ce-71c9-4b6d-912f-5f03be10945d\") " pod="openshift-marketplace/redhat-operators-d9b7s"
Dec 09 08:46:27 crc kubenswrapper[4786]: I1209 08:46:27.762978 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d9968ce-71c9-4b6d-912f-5f03be10945d-catalog-content\") pod \"redhat-operators-d9b7s\" (UID: \"7d9968ce-71c9-4b6d-912f-5f03be10945d\") " pod="openshift-marketplace/redhat-operators-d9b7s"
Dec 09 08:46:27 crc kubenswrapper[4786]: I1209 08:46:27.763201 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d9968ce-71c9-4b6d-912f-5f03be10945d-utilities\") pod \"redhat-operators-d9b7s\" (UID: \"7d9968ce-71c9-4b6d-912f-5f03be10945d\") " pod="openshift-marketplace/redhat-operators-d9b7s"
Dec 09 08:46:27 crc kubenswrapper[4786]: I1209 08:46:27.786905 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2vhs\" (UniqueName: \"kubernetes.io/projected/7d9968ce-71c9-4b6d-912f-5f03be10945d-kube-api-access-n2vhs\") pod \"redhat-operators-d9b7s\" (UID: \"7d9968ce-71c9-4b6d-912f-5f03be10945d\") " pod="openshift-marketplace/redhat-operators-d9b7s"
Dec 09 08:46:27 crc kubenswrapper[4786]: I1209 08:46:27.817993 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d9b7s"
Dec 09 08:46:27 crc kubenswrapper[4786]: I1209 08:46:27.870943 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pfbdd"]
Dec 09 08:46:27 crc kubenswrapper[4786]: I1209 08:46:27.883848 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4pq6p"]
Dec 09 08:46:27 crc kubenswrapper[4786]: I1209 08:46:27.885004 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4pq6p"
Dec 09 08:46:27 crc kubenswrapper[4786]: I1209 08:46:27.918487 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4pq6p"]
Dec 09 08:46:27 crc kubenswrapper[4786]: I1209 08:46:27.965098 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fa34078-7115-46e8-9c2f-c82730cf3b41-utilities\") pod \"redhat-operators-4pq6p\" (UID: \"8fa34078-7115-46e8-9c2f-c82730cf3b41\") " pod="openshift-marketplace/redhat-operators-4pq6p"
Dec 09 08:46:27 crc kubenswrapper[4786]: I1209 08:46:27.965277 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fa34078-7115-46e8-9c2f-c82730cf3b41-catalog-content\") pod \"redhat-operators-4pq6p\" (UID: \"8fa34078-7115-46e8-9c2f-c82730cf3b41\") " pod="openshift-marketplace/redhat-operators-4pq6p"
Dec 09 08:46:27 crc kubenswrapper[4786]: I1209 08:46:27.965371 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zqst\" (UniqueName: \"kubernetes.io/projected/8fa34078-7115-46e8-9c2f-c82730cf3b41-kube-api-access-7zqst\") pod \"redhat-operators-4pq6p\" (UID: \"8fa34078-7115-46e8-9c2f-c82730cf3b41\") " pod="openshift-marketplace/redhat-operators-4pq6p"
Dec 09 08:46:28 crc kubenswrapper[4786]: I1209 08:46:28.066746 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fa34078-7115-46e8-9c2f-c82730cf3b41-catalog-content\") pod \"redhat-operators-4pq6p\" (UID: \"8fa34078-7115-46e8-9c2f-c82730cf3b41\") " pod="openshift-marketplace/redhat-operators-4pq6p"
Dec 09 08:46:28 crc kubenswrapper[4786]: I1209 08:46:28.066843 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zqst\" (UniqueName: \"kubernetes.io/projected/8fa34078-7115-46e8-9c2f-c82730cf3b41-kube-api-access-7zqst\") pod \"redhat-operators-4pq6p\" (UID: \"8fa34078-7115-46e8-9c2f-c82730cf3b41\") " pod="openshift-marketplace/redhat-operators-4pq6p"
Dec 09 08:46:28 crc kubenswrapper[4786]: I1209 08:46:28.066901 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fa34078-7115-46e8-9c2f-c82730cf3b41-utilities\") pod \"redhat-operators-4pq6p\" (UID: \"8fa34078-7115-46e8-9c2f-c82730cf3b41\") " pod="openshift-marketplace/redhat-operators-4pq6p"
Dec 09 08:46:28 crc kubenswrapper[4786]: I1209 08:46:28.067417 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fa34078-7115-46e8-9c2f-c82730cf3b41-utilities\") pod \"redhat-operators-4pq6p\" (UID: \"8fa34078-7115-46e8-9c2f-c82730cf3b41\") " pod="openshift-marketplace/redhat-operators-4pq6p"
Dec 09 08:46:28 crc kubenswrapper[4786]: I1209 08:46:28.067753 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fa34078-7115-46e8-9c2f-c82730cf3b41-catalog-content\") pod \"redhat-operators-4pq6p\" (UID: \"8fa34078-7115-46e8-9c2f-c82730cf3b41\") " pod="openshift-marketplace/redhat-operators-4pq6p"
Dec 09 08:46:28 crc kubenswrapper[4786]: I1209 08:46:28.118577 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"e6b2c58fb1a6bfcadc9f57d9b16b92ab98a4997a9d064b28111f222bae1a8d88"}
Dec 09 08:46:28 crc kubenswrapper[4786]: I1209 08:46:28.124111 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9274k" event={"ID":"621ad974-644c-45fc-a6ba-045ca1f9e033","Type":"ContainerStarted","Data":"6336b3ac6d131d761fc3859924c758c2d28b88bdce59c4bab062c0f94766796d"}
Dec 09 08:46:28 crc kubenswrapper[4786]: I1209 08:46:28.128683 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zqst\" (UniqueName: \"kubernetes.io/projected/8fa34078-7115-46e8-9c2f-c82730cf3b41-kube-api-access-7zqst\") pod \"redhat-operators-4pq6p\" (UID: \"8fa34078-7115-46e8-9c2f-c82730cf3b41\") " pod="openshift-marketplace/redhat-operators-4pq6p"
Dec 09 08:46:28 crc kubenswrapper[4786]: I1209 08:46:28.133309 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"847d588a-e346-406f-b901-c43cbe2bbf2d","Type":"ContainerStarted","Data":"40bb23accb03e2601bde512eb65e86de60ede02582247ca5b68716317511c972"}
Dec 09 08:46:28 crc kubenswrapper[4786]: I1209 08:46:28.216222 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4pq6p"
Dec 09 08:46:28 crc kubenswrapper[4786]: I1209 08:46:28.262339 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d9b7s"]
Dec 09 08:46:28 crc kubenswrapper[4786]: I1209 08:46:28.593089 4786 patch_prober.go:28] interesting pod/router-default-5444994796-wp4v8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 09 08:46:28 crc kubenswrapper[4786]: [-]has-synced failed: reason withheld
Dec 09 08:46:28 crc kubenswrapper[4786]: [+]process-running ok
Dec 09 08:46:28 crc kubenswrapper[4786]: healthz check failed
Dec 09 08:46:28 crc kubenswrapper[4786]: I1209 08:46:28.593626 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wp4v8" podUID="ee356cb2-0c94-4402-be8c-e6895f39de08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 09 08:46:28 crc kubenswrapper[4786]: I1209 08:46:28.685311 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4pq6p"]
Dec 09 08:46:28 crc kubenswrapper[4786]: I1209 08:46:28.762612 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-qt47g"
Dec 09 08:46:28 crc kubenswrapper[4786]: I1209 08:46:28.767937 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-qt47g"
Dec 09 08:46:29 crc kubenswrapper[4786]: I1209 08:46:29.296703 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d9b7s" event={"ID":"7d9968ce-71c9-4b6d-912f-5f03be10945d","Type":"ContainerStarted","Data":"6e372757b8040c7b27b5608dcf5541380cb7584737b2b30e72cc61e1f417117c"}
Dec 09 08:46:29 crc kubenswrapper[4786]: I1209 08:46:29.301804 4786 generic.go:334] "Generic (PLEG): container finished" podID="621ad974-644c-45fc-a6ba-045ca1f9e033" containerID="10034f72990723562893bae3d391eb9d5ac3678241d593e5ef0f392cea129b7b" exitCode=0
Dec 09 08:46:29 crc kubenswrapper[4786]: I1209 08:46:29.301848 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9274k" event={"ID":"621ad974-644c-45fc-a6ba-045ca1f9e033","Type":"ContainerDied","Data":"10034f72990723562893bae3d391eb9d5ac3678241d593e5ef0f392cea129b7b"}
Dec 09 08:46:29 crc kubenswrapper[4786]: I1209 08:46:29.304710 4786 generic.go:334] "Generic (PLEG): container finished" podID="2dbf4a5b-0892-4445-b9d0-972b3187c8ee" containerID="f2a55ed45c78cf4d25672cb30292603b138dc2b9a57095a832edb4484689f814" exitCode=0
Dec 09 08:46:29 crc kubenswrapper[4786]: I1209 08:46:29.304790 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mtcjp" event={"ID":"2dbf4a5b-0892-4445-b9d0-972b3187c8ee","Type":"ContainerDied","Data":"f2a55ed45c78cf4d25672cb30292603b138dc2b9a57095a832edb4484689f814"}
Dec 09 08:46:29 crc kubenswrapper[4786]: I1209 08:46:29.311066 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"847d588a-e346-406f-b901-c43cbe2bbf2d","Type":"ContainerStarted","Data":"950791a4eebaa74495b702a3baf2f01197ca0a49fd53acc44cac0e7f69c0837e"}
Dec 09 08:46:29 crc kubenswrapper[4786]: I1209 08:46:29.329583 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" event={"ID":"742a103a-06e2-4d52-8c04-54681052838d","Type":"ContainerStarted","Data":"7e38de0ddca53efda8c8dbbd23b604735d7dccbeb6f944fc836a30c3845a231a"}
Dec 09 08:46:29 crc kubenswrapper[4786]: I1209 08:46:29.329638 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p"
Dec 09 08:46:29 crc kubenswrapper[4786]: I1209 08:46:29.333508 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4pq6p" event={"ID":"8fa34078-7115-46e8-9c2f-c82730cf3b41","Type":"ContainerStarted","Data":"2714b4e6cf93848794e7aac5406a2eb961d1dee0366c8048b0cc96e102e39dff"}
Dec 09 08:46:29 crc kubenswrapper[4786]: I1209 08:46:29.335635 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"8166dec0cf0fe35e59b48637efcedb92e404cb9613c682468ec7e8a3e01c28dd"}
Dec 09 08:46:29 crc kubenswrapper[4786]: I1209 08:46:29.359037 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pfbdd" event={"ID":"37a8cb2b-b5e5-408c-9bb9-a26f96805bf3","Type":"ContainerStarted","Data":"3a6fe8e46677a5a4aeeac3cf902e91f592a08e546e1a3f2b3f9a703aafd0c8ed"}
Dec 09 08:46:29 crc kubenswrapper[4786]: I1209 08:46:29.359098 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 08:46:29 crc kubenswrapper[4786]: I1209 08:46:29.370407 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=4.370386338 podStartE2EDuration="4.370386338s" podCreationTimestamp="2025-12-09 08:46:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:46:29.368087217 +0000 UTC m=+155.251708443" watchObservedRunningTime="2025-12-09 08:46:29.370386338 +0000 UTC m=+155.254007564" Dec 09 08:46:29 crc kubenswrapper[4786]: I1209 08:46:29.390346 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" podStartSLOduration=132.390323223 podStartE2EDuration="2m12.390323223s" podCreationTimestamp="2025-12-09 08:44:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:46:29.38953652 +0000 UTC m=+155.273157746" watchObservedRunningTime="2025-12-09 08:46:29.390323223 +0000 UTC m=+155.273944449" Dec 09 08:46:29 crc kubenswrapper[4786]: I1209 08:46:29.601815 4786 patch_prober.go:28] interesting pod/router-default-5444994796-wp4v8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 08:46:29 crc kubenswrapper[4786]: [-]has-synced failed: reason withheld Dec 09 08:46:29 crc kubenswrapper[4786]: [+]process-running ok Dec 09 08:46:29 crc kubenswrapper[4786]: healthz check failed Dec 09 08:46:29 crc kubenswrapper[4786]: I1209 08:46:29.601884 4786 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-ingress/router-default-5444994796-wp4v8" podUID="ee356cb2-0c94-4402-be8c-e6895f39de08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 08:46:29 crc kubenswrapper[4786]: I1209 08:46:29.640352 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-6xmz9" Dec 09 08:46:30 crc kubenswrapper[4786]: I1209 08:46:30.371585 4786 generic.go:334] "Generic (PLEG): container finished" podID="8fa34078-7115-46e8-9c2f-c82730cf3b41" containerID="75de1ef0d20d61d54b5bdd25b63053c5d7801c7f91a57167d8738a673504584e" exitCode=0 Dec 09 08:46:30 crc kubenswrapper[4786]: I1209 08:46:30.371848 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4pq6p" event={"ID":"8fa34078-7115-46e8-9c2f-c82730cf3b41","Type":"ContainerDied","Data":"75de1ef0d20d61d54b5bdd25b63053c5d7801c7f91a57167d8738a673504584e"} Dec 09 08:46:30 crc kubenswrapper[4786]: I1209 08:46:30.378325 4786 generic.go:334] "Generic (PLEG): container finished" podID="37a8cb2b-b5e5-408c-9bb9-a26f96805bf3" containerID="1cda6151b3557121bdc05b20707a326ca2de52c99334496810270b45767bb0d2" exitCode=0 Dec 09 08:46:30 crc kubenswrapper[4786]: I1209 08:46:30.378557 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pfbdd" event={"ID":"37a8cb2b-b5e5-408c-9bb9-a26f96805bf3","Type":"ContainerDied","Data":"1cda6151b3557121bdc05b20707a326ca2de52c99334496810270b45767bb0d2"} Dec 09 08:46:30 crc kubenswrapper[4786]: I1209 08:46:30.382274 4786 generic.go:334] "Generic (PLEG): container finished" podID="7d9968ce-71c9-4b6d-912f-5f03be10945d" containerID="608377a3e6e6f54b6d8dea9ec043ba75390133bcf1ea70fa86b426a6017ff910" exitCode=0 Dec 09 08:46:30 crc kubenswrapper[4786]: I1209 08:46:30.382343 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d9b7s" 
event={"ID":"7d9968ce-71c9-4b6d-912f-5f03be10945d","Type":"ContainerDied","Data":"608377a3e6e6f54b6d8dea9ec043ba75390133bcf1ea70fa86b426a6017ff910"} Dec 09 08:46:30 crc kubenswrapper[4786]: I1209 08:46:30.390794 4786 generic.go:334] "Generic (PLEG): container finished" podID="847d588a-e346-406f-b901-c43cbe2bbf2d" containerID="950791a4eebaa74495b702a3baf2f01197ca0a49fd53acc44cac0e7f69c0837e" exitCode=0 Dec 09 08:46:30 crc kubenswrapper[4786]: I1209 08:46:30.390855 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"847d588a-e346-406f-b901-c43cbe2bbf2d","Type":"ContainerDied","Data":"950791a4eebaa74495b702a3baf2f01197ca0a49fd53acc44cac0e7f69c0837e"} Dec 09 08:46:30 crc kubenswrapper[4786]: I1209 08:46:30.428382 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 09 08:46:30 crc kubenswrapper[4786]: I1209 08:46:30.429983 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 09 08:46:30 crc kubenswrapper[4786]: I1209 08:46:30.431635 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 09 08:46:30 crc kubenswrapper[4786]: I1209 08:46:30.441895 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 09 08:46:30 crc kubenswrapper[4786]: I1209 08:46:30.454153 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 09 08:46:30 crc kubenswrapper[4786]: I1209 08:46:30.591165 4786 patch_prober.go:28] interesting pod/router-default-5444994796-wp4v8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 08:46:30 crc kubenswrapper[4786]: [-]has-synced failed: reason withheld Dec 09 08:46:30 crc kubenswrapper[4786]: [+]process-running ok Dec 09 08:46:30 crc kubenswrapper[4786]: healthz check failed Dec 09 08:46:30 crc kubenswrapper[4786]: I1209 08:46:30.591236 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wp4v8" podUID="ee356cb2-0c94-4402-be8c-e6895f39de08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 08:46:30 crc kubenswrapper[4786]: I1209 08:46:30.628133 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7561e76b-41b7-4e6a-8983-bb6d16a069d3-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"7561e76b-41b7-4e6a-8983-bb6d16a069d3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 09 08:46:30 crc kubenswrapper[4786]: I1209 08:46:30.628345 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7561e76b-41b7-4e6a-8983-bb6d16a069d3-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"7561e76b-41b7-4e6a-8983-bb6d16a069d3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 09 08:46:30 crc kubenswrapper[4786]: I1209 08:46:30.730087 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7561e76b-41b7-4e6a-8983-bb6d16a069d3-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"7561e76b-41b7-4e6a-8983-bb6d16a069d3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 09 08:46:30 crc kubenswrapper[4786]: I1209 08:46:30.730178 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7561e76b-41b7-4e6a-8983-bb6d16a069d3-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"7561e76b-41b7-4e6a-8983-bb6d16a069d3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 09 08:46:30 crc kubenswrapper[4786]: I1209 08:46:30.730291 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7561e76b-41b7-4e6a-8983-bb6d16a069d3-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"7561e76b-41b7-4e6a-8983-bb6d16a069d3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 09 08:46:30 crc kubenswrapper[4786]: I1209 08:46:30.758125 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7561e76b-41b7-4e6a-8983-bb6d16a069d3-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"7561e76b-41b7-4e6a-8983-bb6d16a069d3\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 09 08:46:30 crc kubenswrapper[4786]: I1209 08:46:30.822916 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 09 08:46:31 crc kubenswrapper[4786]: I1209 08:46:31.320065 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 09 08:46:31 crc kubenswrapper[4786]: W1209 08:46:31.393464 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod7561e76b_41b7_4e6a_8983_bb6d16a069d3.slice/crio-0652dc3232c1f8a58216a56715d25786cab34c4be0eb55299388884b37ce8971 WatchSource:0}: Error finding container 0652dc3232c1f8a58216a56715d25786cab34c4be0eb55299388884b37ce8971: Status 404 returned error can't find the container with id 0652dc3232c1f8a58216a56715d25786cab34c4be0eb55299388884b37ce8971 Dec 09 08:46:31 crc kubenswrapper[4786]: I1209 08:46:31.595558 4786 patch_prober.go:28] interesting pod/router-default-5444994796-wp4v8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 08:46:31 crc kubenswrapper[4786]: [-]has-synced failed: reason withheld Dec 09 08:46:31 crc kubenswrapper[4786]: [+]process-running ok Dec 09 08:46:31 crc kubenswrapper[4786]: healthz check failed Dec 09 08:46:31 crc kubenswrapper[4786]: I1209 08:46:31.596694 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wp4v8" podUID="ee356cb2-0c94-4402-be8c-e6895f39de08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 08:46:32 crc kubenswrapper[4786]: I1209 08:46:32.076600 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 09 08:46:32 crc kubenswrapper[4786]: I1209 08:46:32.190040 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/847d588a-e346-406f-b901-c43cbe2bbf2d-kube-api-access\") pod \"847d588a-e346-406f-b901-c43cbe2bbf2d\" (UID: \"847d588a-e346-406f-b901-c43cbe2bbf2d\") " Dec 09 08:46:32 crc kubenswrapper[4786]: I1209 08:46:32.190171 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/847d588a-e346-406f-b901-c43cbe2bbf2d-kubelet-dir\") pod \"847d588a-e346-406f-b901-c43cbe2bbf2d\" (UID: \"847d588a-e346-406f-b901-c43cbe2bbf2d\") " Dec 09 08:46:32 crc kubenswrapper[4786]: I1209 08:46:32.190828 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/847d588a-e346-406f-b901-c43cbe2bbf2d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "847d588a-e346-406f-b901-c43cbe2bbf2d" (UID: "847d588a-e346-406f-b901-c43cbe2bbf2d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 08:46:32 crc kubenswrapper[4786]: I1209 08:46:32.202805 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/847d588a-e346-406f-b901-c43cbe2bbf2d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "847d588a-e346-406f-b901-c43cbe2bbf2d" (UID: "847d588a-e346-406f-b901-c43cbe2bbf2d"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 08:46:32 crc kubenswrapper[4786]: I1209 08:46:32.291801 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/847d588a-e346-406f-b901-c43cbe2bbf2d-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 09 08:46:32 crc kubenswrapper[4786]: I1209 08:46:32.292221 4786 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/847d588a-e346-406f-b901-c43cbe2bbf2d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 09 08:46:32 crc kubenswrapper[4786]: I1209 08:46:32.468679 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"7561e76b-41b7-4e6a-8983-bb6d16a069d3","Type":"ContainerStarted","Data":"7ea65a14521374f0b4300678de4073139389f414228803535f09975d82485c1f"} Dec 09 08:46:32 crc kubenswrapper[4786]: I1209 08:46:32.468750 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"7561e76b-41b7-4e6a-8983-bb6d16a069d3","Type":"ContainerStarted","Data":"0652dc3232c1f8a58216a56715d25786cab34c4be0eb55299388884b37ce8971"} Dec 09 08:46:32 crc kubenswrapper[4786]: I1209 08:46:32.483975 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"847d588a-e346-406f-b901-c43cbe2bbf2d","Type":"ContainerDied","Data":"40bb23accb03e2601bde512eb65e86de60ede02582247ca5b68716317511c972"} Dec 09 08:46:32 crc kubenswrapper[4786]: I1209 08:46:32.484226 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40bb23accb03e2601bde512eb65e86de60ede02582247ca5b68716317511c972" Dec 09 08:46:32 crc kubenswrapper[4786]: I1209 08:46:32.484550 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 09 08:46:32 crc kubenswrapper[4786]: I1209 08:46:32.603766 4786 patch_prober.go:28] interesting pod/router-default-5444994796-wp4v8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 08:46:32 crc kubenswrapper[4786]: [-]has-synced failed: reason withheld Dec 09 08:46:32 crc kubenswrapper[4786]: [+]process-running ok Dec 09 08:46:32 crc kubenswrapper[4786]: healthz check failed Dec 09 08:46:32 crc kubenswrapper[4786]: I1209 08:46:32.603852 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wp4v8" podUID="ee356cb2-0c94-4402-be8c-e6895f39de08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 08:46:33 crc kubenswrapper[4786]: I1209 08:46:33.506184 4786 generic.go:334] "Generic (PLEG): container finished" podID="7561e76b-41b7-4e6a-8983-bb6d16a069d3" containerID="7ea65a14521374f0b4300678de4073139389f414228803535f09975d82485c1f" exitCode=0 Dec 09 08:46:33 crc kubenswrapper[4786]: I1209 08:46:33.506248 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"7561e76b-41b7-4e6a-8983-bb6d16a069d3","Type":"ContainerDied","Data":"7ea65a14521374f0b4300678de4073139389f414228803535f09975d82485c1f"} Dec 09 08:46:33 crc kubenswrapper[4786]: I1209 08:46:33.593436 4786 patch_prober.go:28] interesting pod/router-default-5444994796-wp4v8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 08:46:33 crc kubenswrapper[4786]: [-]has-synced failed: reason withheld Dec 09 08:46:33 crc kubenswrapper[4786]: [+]process-running ok Dec 09 08:46:33 crc kubenswrapper[4786]: healthz 
check failed Dec 09 08:46:33 crc kubenswrapper[4786]: I1209 08:46:33.593528 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wp4v8" podUID="ee356cb2-0c94-4402-be8c-e6895f39de08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 08:46:33 crc kubenswrapper[4786]: I1209 08:46:33.900974 4786 patch_prober.go:28] interesting pod/downloads-7954f5f757-6jxs9 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Dec 09 08:46:33 crc kubenswrapper[4786]: I1209 08:46:33.901055 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-6jxs9" podUID="4364d199-81a4-4500-990d-9f2bcdc66186" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" Dec 09 08:46:33 crc kubenswrapper[4786]: I1209 08:46:33.901279 4786 patch_prober.go:28] interesting pod/downloads-7954f5f757-6jxs9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Dec 09 08:46:33 crc kubenswrapper[4786]: I1209 08:46:33.901310 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-6jxs9" podUID="4364d199-81a4-4500-990d-9f2bcdc66186" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" Dec 09 08:46:34 crc kubenswrapper[4786]: I1209 08:46:34.346985 4786 patch_prober.go:28] interesting pod/console-f9d7485db-sgqjs container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.11:8443/health\": dial tcp 
10.217.0.11:8443: connect: connection refused" start-of-body= Dec 09 08:46:34 crc kubenswrapper[4786]: I1209 08:46:34.347077 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-sgqjs" podUID="bbd9538f-43ff-4c20-80ab-dcf783b7a558" containerName="console" probeResult="failure" output="Get \"https://10.217.0.11:8443/health\": dial tcp 10.217.0.11:8443: connect: connection refused" Dec 09 08:46:34 crc kubenswrapper[4786]: I1209 08:46:34.414360 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:46:34 crc kubenswrapper[4786]: I1209 08:46:34.592745 4786 patch_prober.go:28] interesting pod/router-default-5444994796-wp4v8 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 09 08:46:34 crc kubenswrapper[4786]: [-]has-synced failed: reason withheld Dec 09 08:46:34 crc kubenswrapper[4786]: [+]process-running ok Dec 09 08:46:34 crc kubenswrapper[4786]: healthz check failed Dec 09 08:46:34 crc kubenswrapper[4786]: I1209 08:46:34.593362 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-wp4v8" podUID="ee356cb2-0c94-4402-be8c-e6895f39de08" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 09 08:46:34 crc kubenswrapper[4786]: I1209 08:46:34.880384 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-thfng" Dec 09 08:46:35 crc kubenswrapper[4786]: I1209 08:46:35.439613 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 09 08:46:35 crc kubenswrapper[4786]: I1209 08:46:35.593559 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-wp4v8" Dec 09 08:46:35 crc kubenswrapper[4786]: I1209 08:46:35.598183 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-wp4v8" Dec 09 08:46:35 crc kubenswrapper[4786]: I1209 08:46:35.611089 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"7561e76b-41b7-4e6a-8983-bb6d16a069d3","Type":"ContainerDied","Data":"0652dc3232c1f8a58216a56715d25786cab34c4be0eb55299388884b37ce8971"} Dec 09 08:46:35 crc kubenswrapper[4786]: I1209 08:46:35.611145 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0652dc3232c1f8a58216a56715d25786cab34c4be0eb55299388884b37ce8971" Dec 09 08:46:35 crc kubenswrapper[4786]: I1209 08:46:35.611272 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 09 08:46:35 crc kubenswrapper[4786]: I1209 08:46:35.641442 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7561e76b-41b7-4e6a-8983-bb6d16a069d3-kube-api-access\") pod \"7561e76b-41b7-4e6a-8983-bb6d16a069d3\" (UID: \"7561e76b-41b7-4e6a-8983-bb6d16a069d3\") " Dec 09 08:46:35 crc kubenswrapper[4786]: I1209 08:46:35.642493 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7561e76b-41b7-4e6a-8983-bb6d16a069d3-kubelet-dir\") pod \"7561e76b-41b7-4e6a-8983-bb6d16a069d3\" (UID: \"7561e76b-41b7-4e6a-8983-bb6d16a069d3\") " Dec 09 08:46:35 crc kubenswrapper[4786]: I1209 08:46:35.643029 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7561e76b-41b7-4e6a-8983-bb6d16a069d3-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7561e76b-41b7-4e6a-8983-bb6d16a069d3" (UID: "7561e76b-41b7-4e6a-8983-bb6d16a069d3"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 08:46:35 crc kubenswrapper[4786]: I1209 08:46:35.665595 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7561e76b-41b7-4e6a-8983-bb6d16a069d3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7561e76b-41b7-4e6a-8983-bb6d16a069d3" (UID: "7561e76b-41b7-4e6a-8983-bb6d16a069d3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 08:46:35 crc kubenswrapper[4786]: I1209 08:46:35.745956 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7561e76b-41b7-4e6a-8983-bb6d16a069d3-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 09 08:46:35 crc kubenswrapper[4786]: I1209 08:46:35.745991 4786 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7561e76b-41b7-4e6a-8983-bb6d16a069d3-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 09 08:46:39 crc kubenswrapper[4786]: I1209 08:46:39.733177 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e6f68306-ac39-4d61-8c27-12d69cc49a4f-metrics-certs\") pod \"network-metrics-daemon-v58s4\" (UID: \"e6f68306-ac39-4d61-8c27-12d69cc49a4f\") " pod="openshift-multus/network-metrics-daemon-v58s4" Dec 09 08:46:39 crc kubenswrapper[4786]: I1209 08:46:39.739833 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e6f68306-ac39-4d61-8c27-12d69cc49a4f-metrics-certs\") pod \"network-metrics-daemon-v58s4\" (UID: \"e6f68306-ac39-4d61-8c27-12d69cc49a4f\") " pod="openshift-multus/network-metrics-daemon-v58s4" Dec 09 08:46:40 crc kubenswrapper[4786]: I1209 08:46:40.019975 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-v58s4" Dec 09 08:46:43 crc kubenswrapper[4786]: I1209 08:46:43.778971 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-6jxs9" Dec 09 08:46:44 crc kubenswrapper[4786]: I1209 08:46:44.350006 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-sgqjs" Dec 09 08:46:44 crc kubenswrapper[4786]: I1209 08:46:44.355059 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-sgqjs" Dec 09 08:46:45 crc kubenswrapper[4786]: I1209 08:46:45.295109 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:46:54 crc kubenswrapper[4786]: I1209 08:46:54.831236 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6j88p" Dec 09 08:46:54 crc kubenswrapper[4786]: I1209 08:46:54.988910 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 08:46:54 crc kubenswrapper[4786]: I1209 08:46:54.988998 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 08:47:02 crc kubenswrapper[4786]: I1209 08:47:02.824450 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 09 08:47:02 crc 
kubenswrapper[4786]: E1209 08:47:02.825391 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="847d588a-e346-406f-b901-c43cbe2bbf2d" containerName="pruner" Dec 09 08:47:02 crc kubenswrapper[4786]: I1209 08:47:02.825407 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="847d588a-e346-406f-b901-c43cbe2bbf2d" containerName="pruner" Dec 09 08:47:02 crc kubenswrapper[4786]: E1209 08:47:02.825441 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7561e76b-41b7-4e6a-8983-bb6d16a069d3" containerName="pruner" Dec 09 08:47:02 crc kubenswrapper[4786]: I1209 08:47:02.825450 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="7561e76b-41b7-4e6a-8983-bb6d16a069d3" containerName="pruner" Dec 09 08:47:02 crc kubenswrapper[4786]: I1209 08:47:02.825614 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="847d588a-e346-406f-b901-c43cbe2bbf2d" containerName="pruner" Dec 09 08:47:02 crc kubenswrapper[4786]: I1209 08:47:02.825630 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="7561e76b-41b7-4e6a-8983-bb6d16a069d3" containerName="pruner" Dec 09 08:47:02 crc kubenswrapper[4786]: I1209 08:47:02.826114 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 09 08:47:02 crc kubenswrapper[4786]: I1209 08:47:02.829255 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 09 08:47:02 crc kubenswrapper[4786]: I1209 08:47:02.829779 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 09 08:47:02 crc kubenswrapper[4786]: I1209 08:47:02.832561 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 09 08:47:02 crc kubenswrapper[4786]: I1209 08:47:02.896737 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5d0314ea-cf72-4896-8b8d-a97c91b119d8-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5d0314ea-cf72-4896-8b8d-a97c91b119d8\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 09 08:47:02 crc kubenswrapper[4786]: I1209 08:47:02.896806 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5d0314ea-cf72-4896-8b8d-a97c91b119d8-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5d0314ea-cf72-4896-8b8d-a97c91b119d8\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 09 08:47:02 crc kubenswrapper[4786]: I1209 08:47:02.998166 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5d0314ea-cf72-4896-8b8d-a97c91b119d8-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5d0314ea-cf72-4896-8b8d-a97c91b119d8\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 09 08:47:02 crc kubenswrapper[4786]: I1209 08:47:02.998241 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/5d0314ea-cf72-4896-8b8d-a97c91b119d8-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5d0314ea-cf72-4896-8b8d-a97c91b119d8\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 09 08:47:02 crc kubenswrapper[4786]: I1209 08:47:02.998331 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5d0314ea-cf72-4896-8b8d-a97c91b119d8-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5d0314ea-cf72-4896-8b8d-a97c91b119d8\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 09 08:47:03 crc kubenswrapper[4786]: I1209 08:47:03.031131 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5d0314ea-cf72-4896-8b8d-a97c91b119d8-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5d0314ea-cf72-4896-8b8d-a97c91b119d8\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 09 08:47:03 crc kubenswrapper[4786]: I1209 08:47:03.154768 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 09 08:47:05 crc kubenswrapper[4786]: I1209 08:47:05.341654 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 09 08:47:05 crc kubenswrapper[4786]: E1209 08:47:05.960376 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 09 08:47:05 crc kubenswrapper[4786]: E1209 08:47:05.960997 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tf9kc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:ni
l,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-c2h4k_openshift-marketplace(427c77e2-5f54-46b4-b2e5-ed6034c20d49): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 09 08:47:05 crc kubenswrapper[4786]: E1209 08:47:05.962201 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-c2h4k" podUID="427c77e2-5f54-46b4-b2e5-ed6034c20d49" Dec 09 08:47:05 crc kubenswrapper[4786]: E1209 08:47:05.979258 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 09 08:47:05 crc kubenswrapper[4786]: E1209 08:47:05.979527 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8rdd8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-4rm92_openshift-marketplace(7388aae9-507a-42ff-84cb-9860de1f9f84): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 09 08:47:05 crc kubenswrapper[4786]: E1209 08:47:05.980763 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-4rm92" podUID="7388aae9-507a-42ff-84cb-9860de1f9f84" Dec 09 08:47:07 crc 
kubenswrapper[4786]: E1209 08:47:07.057862 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-4rm92" podUID="7388aae9-507a-42ff-84cb-9860de1f9f84" Dec 09 08:47:07 crc kubenswrapper[4786]: E1209 08:47:07.058059 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-c2h4k" podUID="427c77e2-5f54-46b4-b2e5-ed6034c20d49" Dec 09 08:47:07 crc kubenswrapper[4786]: E1209 08:47:07.131968 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 09 08:47:07 crc kubenswrapper[4786]: E1209 08:47:07.132154 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l4jdq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-9274k_openshift-marketplace(621ad974-644c-45fc-a6ba-045ca1f9e033): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 09 08:47:07 crc kubenswrapper[4786]: E1209 08:47:07.133376 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-9274k" podUID="621ad974-644c-45fc-a6ba-045ca1f9e033" Dec 09 08:47:08 crc 
kubenswrapper[4786]: I1209 08:47:08.409945 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 09 08:47:08 crc kubenswrapper[4786]: I1209 08:47:08.411378 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 09 08:47:08 crc kubenswrapper[4786]: I1209 08:47:08.434384 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 09 08:47:08 crc kubenswrapper[4786]: I1209 08:47:08.484782 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/11cf143d-97ba-47c9-ab66-df928f596a42-kube-api-access\") pod \"installer-9-crc\" (UID: \"11cf143d-97ba-47c9-ab66-df928f596a42\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 09 08:47:08 crc kubenswrapper[4786]: I1209 08:47:08.484886 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/11cf143d-97ba-47c9-ab66-df928f596a42-kubelet-dir\") pod \"installer-9-crc\" (UID: \"11cf143d-97ba-47c9-ab66-df928f596a42\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 09 08:47:08 crc kubenswrapper[4786]: I1209 08:47:08.484908 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/11cf143d-97ba-47c9-ab66-df928f596a42-var-lock\") pod \"installer-9-crc\" (UID: \"11cf143d-97ba-47c9-ab66-df928f596a42\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 09 08:47:08 crc kubenswrapper[4786]: E1209 08:47:08.545743 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-marketplace-9274k" podUID="621ad974-644c-45fc-a6ba-045ca1f9e033" Dec 09 08:47:08 crc kubenswrapper[4786]: I1209 08:47:08.586239 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/11cf143d-97ba-47c9-ab66-df928f596a42-kubelet-dir\") pod \"installer-9-crc\" (UID: \"11cf143d-97ba-47c9-ab66-df928f596a42\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 09 08:47:08 crc kubenswrapper[4786]: I1209 08:47:08.586304 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/11cf143d-97ba-47c9-ab66-df928f596a42-var-lock\") pod \"installer-9-crc\" (UID: \"11cf143d-97ba-47c9-ab66-df928f596a42\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 09 08:47:08 crc kubenswrapper[4786]: I1209 08:47:08.586342 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/11cf143d-97ba-47c9-ab66-df928f596a42-kube-api-access\") pod \"installer-9-crc\" (UID: \"11cf143d-97ba-47c9-ab66-df928f596a42\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 09 08:47:08 crc kubenswrapper[4786]: I1209 08:47:08.586361 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/11cf143d-97ba-47c9-ab66-df928f596a42-kubelet-dir\") pod \"installer-9-crc\" (UID: \"11cf143d-97ba-47c9-ab66-df928f596a42\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 09 08:47:08 crc kubenswrapper[4786]: I1209 08:47:08.586410 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/11cf143d-97ba-47c9-ab66-df928f596a42-var-lock\") pod \"installer-9-crc\" (UID: \"11cf143d-97ba-47c9-ab66-df928f596a42\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 09 08:47:08 crc kubenswrapper[4786]: I1209 
08:47:08.604134 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/11cf143d-97ba-47c9-ab66-df928f596a42-kube-api-access\") pod \"installer-9-crc\" (UID: \"11cf143d-97ba-47c9-ab66-df928f596a42\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 09 08:47:08 crc kubenswrapper[4786]: E1209 08:47:08.609738 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 09 08:47:08 crc kubenswrapper[4786]: E1209 08:47:08.609916 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rxbz7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,Pro
cMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-g4qx2_openshift-marketplace(7a4abbd7-999e-4b15-bfc1-a93939734b36): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 09 08:47:08 crc kubenswrapper[4786]: E1209 08:47:08.611690 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-g4qx2" podUID="7a4abbd7-999e-4b15-bfc1-a93939734b36" Dec 09 08:47:08 crc kubenswrapper[4786]: E1209 08:47:08.613227 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 09 08:47:08 crc kubenswrapper[4786]: E1209 08:47:08.613579 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7qfr8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-pfbdd_openshift-marketplace(37a8cb2b-b5e5-408c-9bb9-a26f96805bf3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 09 08:47:08 crc kubenswrapper[4786]: E1209 08:47:08.614924 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-pfbdd" podUID="37a8cb2b-b5e5-408c-9bb9-a26f96805bf3" Dec 09 08:47:08 crc 
kubenswrapper[4786]: I1209 08:47:08.738983 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 09 08:47:11 crc kubenswrapper[4786]: E1209 08:47:11.636320 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-pfbdd" podUID="37a8cb2b-b5e5-408c-9bb9-a26f96805bf3" Dec 09 08:47:11 crc kubenswrapper[4786]: E1209 08:47:11.636381 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-g4qx2" podUID="7a4abbd7-999e-4b15-bfc1-a93939734b36" Dec 09 08:47:11 crc kubenswrapper[4786]: E1209 08:47:11.720600 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 09 08:47:11 crc kubenswrapper[4786]: E1209 08:47:11.720888 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7zqst,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-4pq6p_openshift-marketplace(8fa34078-7115-46e8-9c2f-c82730cf3b41): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 09 08:47:11 crc kubenswrapper[4786]: E1209 08:47:11.722303 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-4pq6p" podUID="8fa34078-7115-46e8-9c2f-c82730cf3b41" Dec 09 08:47:11 crc 
kubenswrapper[4786]: E1209 08:47:11.746760 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 09 08:47:11 crc kubenswrapper[4786]: E1209 08:47:11.746953 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n2vhs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-d9b7s_openshift-marketplace(7d9968ce-71c9-4b6d-912f-5f03be10945d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 09 08:47:11 crc kubenswrapper[4786]: E1209 08:47:11.748361 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-d9b7s" podUID="7d9968ce-71c9-4b6d-912f-5f03be10945d" Dec 09 08:47:11 crc kubenswrapper[4786]: E1209 08:47:11.759844 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 09 08:47:11 crc kubenswrapper[4786]: E1209 08:47:11.760051 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jvtx8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-mtcjp_openshift-marketplace(2dbf4a5b-0892-4445-b9d0-972b3187c8ee): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 09 08:47:11 crc kubenswrapper[4786]: E1209 08:47:11.761756 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-mtcjp" podUID="2dbf4a5b-0892-4445-b9d0-972b3187c8ee" Dec 09 08:47:11 crc 
kubenswrapper[4786]: E1209 08:47:11.900732 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-4pq6p" podUID="8fa34078-7115-46e8-9c2f-c82730cf3b41" Dec 09 08:47:11 crc kubenswrapper[4786]: E1209 08:47:11.902210 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-mtcjp" podUID="2dbf4a5b-0892-4445-b9d0-972b3187c8ee" Dec 09 08:47:11 crc kubenswrapper[4786]: E1209 08:47:11.902227 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-d9b7s" podUID="7d9968ce-71c9-4b6d-912f-5f03be10945d" Dec 09 08:47:11 crc kubenswrapper[4786]: I1209 08:47:11.961301 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 09 08:47:11 crc kubenswrapper[4786]: W1209 08:47:11.965281 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod11cf143d_97ba_47c9_ab66_df928f596a42.slice/crio-ffe1ea0fb1d3d3132a3e3c6f647a6b1972e7756f08d226cdec903231538fcfcd WatchSource:0}: Error finding container ffe1ea0fb1d3d3132a3e3c6f647a6b1972e7756f08d226cdec903231538fcfcd: Status 404 returned error can't find the container with id ffe1ea0fb1d3d3132a3e3c6f647a6b1972e7756f08d226cdec903231538fcfcd Dec 09 08:47:12 crc kubenswrapper[4786]: I1209 08:47:12.070519 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-v58s4"] Dec 09 
08:47:12 crc kubenswrapper[4786]: W1209 08:47:12.075522 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6f68306_ac39_4d61_8c27_12d69cc49a4f.slice/crio-568c81f0834f66e7ade8d760a44c84f6b531c659f49a6ab2ac53570d0e76065e WatchSource:0}: Error finding container 568c81f0834f66e7ade8d760a44c84f6b531c659f49a6ab2ac53570d0e76065e: Status 404 returned error can't find the container with id 568c81f0834f66e7ade8d760a44c84f6b531c659f49a6ab2ac53570d0e76065e
Dec 09 08:47:12 crc kubenswrapper[4786]: I1209 08:47:12.115559 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Dec 09 08:47:12 crc kubenswrapper[4786]: I1209 08:47:12.906561 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-v58s4" event={"ID":"e6f68306-ac39-4d61-8c27-12d69cc49a4f","Type":"ContainerStarted","Data":"f906cfeeb2ee934e6a80d9f25314eaa5c1de6ca3a5a74e1ae5c53d5fb156c2d9"}
Dec 09 08:47:12 crc kubenswrapper[4786]: I1209 08:47:12.906913 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-v58s4" event={"ID":"e6f68306-ac39-4d61-8c27-12d69cc49a4f","Type":"ContainerStarted","Data":"3fdc3d64120877cfb486b782d8da0edfffbd6968d06f1957c5a5cf2854ac7f3a"}
Dec 09 08:47:12 crc kubenswrapper[4786]: I1209 08:47:12.906926 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-v58s4" event={"ID":"e6f68306-ac39-4d61-8c27-12d69cc49a4f","Type":"ContainerStarted","Data":"568c81f0834f66e7ade8d760a44c84f6b531c659f49a6ab2ac53570d0e76065e"}
Dec 09 08:47:12 crc kubenswrapper[4786]: I1209 08:47:12.910355 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"5d0314ea-cf72-4896-8b8d-a97c91b119d8","Type":"ContainerStarted","Data":"dcfd775cdf0ead7e7bab110ade05af90798ae8bc3d755624e198d6dd123dd4b8"}
Dec 09 08:47:12 crc kubenswrapper[4786]: I1209 08:47:12.910406 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"5d0314ea-cf72-4896-8b8d-a97c91b119d8","Type":"ContainerStarted","Data":"df7945642f121c1e6e4fdbaebdbf4fe3dc9ce6ff95a851f9c2ef053fb28930d4"}
Dec 09 08:47:12 crc kubenswrapper[4786]: I1209 08:47:12.912233 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"11cf143d-97ba-47c9-ab66-df928f596a42","Type":"ContainerStarted","Data":"4356f032ad73376bd0c6a4748e7a6f793057cbecf14f2b9a9442738aa7c1ac16"}
Dec 09 08:47:12 crc kubenswrapper[4786]: I1209 08:47:12.912257 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"11cf143d-97ba-47c9-ab66-df928f596a42","Type":"ContainerStarted","Data":"ffe1ea0fb1d3d3132a3e3c6f647a6b1972e7756f08d226cdec903231538fcfcd"}
Dec 09 08:47:12 crc kubenswrapper[4786]: I1209 08:47:12.939128 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-v58s4" podStartSLOduration=175.939049273 podStartE2EDuration="2m55.939049273s" podCreationTimestamp="2025-12-09 08:44:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:47:12.932816031 +0000 UTC m=+198.816437277" watchObservedRunningTime="2025-12-09 08:47:12.939049273 +0000 UTC m=+198.822670499"
Dec 09 08:47:12 crc kubenswrapper[4786]: I1209 08:47:12.951945 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=4.95192543 podStartE2EDuration="4.95192543s" podCreationTimestamp="2025-12-09 08:47:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:47:12.95192011 +0000 UTC m=+198.835541346" watchObservedRunningTime="2025-12-09 08:47:12.95192543 +0000 UTC m=+198.835546656"
Dec 09 08:47:12 crc kubenswrapper[4786]: I1209 08:47:12.969555 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=10.969532151 podStartE2EDuration="10.969532151s" podCreationTimestamp="2025-12-09 08:47:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:47:12.966105561 +0000 UTC m=+198.849726787" watchObservedRunningTime="2025-12-09 08:47:12.969532151 +0000 UTC m=+198.853153387"
Dec 09 08:47:13 crc kubenswrapper[4786]: I1209 08:47:13.918514 4786 generic.go:334] "Generic (PLEG): container finished" podID="5d0314ea-cf72-4896-8b8d-a97c91b119d8" containerID="dcfd775cdf0ead7e7bab110ade05af90798ae8bc3d755624e198d6dd123dd4b8" exitCode=0
Dec 09 08:47:13 crc kubenswrapper[4786]: I1209 08:47:13.918622 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"5d0314ea-cf72-4896-8b8d-a97c91b119d8","Type":"ContainerDied","Data":"dcfd775cdf0ead7e7bab110ade05af90798ae8bc3d755624e198d6dd123dd4b8"}
Dec 09 08:47:15 crc kubenswrapper[4786]: I1209 08:47:15.203357 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 09 08:47:15 crc kubenswrapper[4786]: I1209 08:47:15.289034 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5d0314ea-cf72-4896-8b8d-a97c91b119d8-kube-api-access\") pod \"5d0314ea-cf72-4896-8b8d-a97c91b119d8\" (UID: \"5d0314ea-cf72-4896-8b8d-a97c91b119d8\") "
Dec 09 08:47:15 crc kubenswrapper[4786]: I1209 08:47:15.289133 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5d0314ea-cf72-4896-8b8d-a97c91b119d8-kubelet-dir\") pod \"5d0314ea-cf72-4896-8b8d-a97c91b119d8\" (UID: \"5d0314ea-cf72-4896-8b8d-a97c91b119d8\") "
Dec 09 08:47:15 crc kubenswrapper[4786]: I1209 08:47:15.289405 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d0314ea-cf72-4896-8b8d-a97c91b119d8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5d0314ea-cf72-4896-8b8d-a97c91b119d8" (UID: "5d0314ea-cf72-4896-8b8d-a97c91b119d8"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 09 08:47:15 crc kubenswrapper[4786]: I1209 08:47:15.289814 4786 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5d0314ea-cf72-4896-8b8d-a97c91b119d8-kubelet-dir\") on node \"crc\" DevicePath \"\""
Dec 09 08:47:15 crc kubenswrapper[4786]: I1209 08:47:15.297322 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d0314ea-cf72-4896-8b8d-a97c91b119d8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5d0314ea-cf72-4896-8b8d-a97c91b119d8" (UID: "5d0314ea-cf72-4896-8b8d-a97c91b119d8"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 08:47:15 crc kubenswrapper[4786]: I1209 08:47:15.390829 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5d0314ea-cf72-4896-8b8d-a97c91b119d8-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 09 08:47:15 crc kubenswrapper[4786]: I1209 08:47:15.944665 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"5d0314ea-cf72-4896-8b8d-a97c91b119d8","Type":"ContainerDied","Data":"df7945642f121c1e6e4fdbaebdbf4fe3dc9ce6ff95a851f9c2ef053fb28930d4"}
Dec 09 08:47:15 crc kubenswrapper[4786]: I1209 08:47:15.944722 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df7945642f121c1e6e4fdbaebdbf4fe3dc9ce6ff95a851f9c2ef053fb28930d4"
Dec 09 08:47:15 crc kubenswrapper[4786]: I1209 08:47:15.945354 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 09 08:47:21 crc kubenswrapper[4786]: I1209 08:47:21.980621 4786 generic.go:334] "Generic (PLEG): container finished" podID="427c77e2-5f54-46b4-b2e5-ed6034c20d49" containerID="4d351603fc7f5943141007f40b1b6a6f312657caf8180926f23c06c2ce828c5e" exitCode=0
Dec 09 08:47:21 crc kubenswrapper[4786]: I1209 08:47:21.980752 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c2h4k" event={"ID":"427c77e2-5f54-46b4-b2e5-ed6034c20d49","Type":"ContainerDied","Data":"4d351603fc7f5943141007f40b1b6a6f312657caf8180926f23c06c2ce828c5e"}
Dec 09 08:47:21 crc kubenswrapper[4786]: I1209 08:47:21.984093 4786 generic.go:334] "Generic (PLEG): container finished" podID="621ad974-644c-45fc-a6ba-045ca1f9e033" containerID="e31046b9edfb0c0229ddbfabc27b997e03514ae2ae117811270863b86b123bb5" exitCode=0
Dec 09 08:47:21 crc kubenswrapper[4786]: I1209 08:47:21.984143 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9274k" event={"ID":"621ad974-644c-45fc-a6ba-045ca1f9e033","Type":"ContainerDied","Data":"e31046b9edfb0c0229ddbfabc27b997e03514ae2ae117811270863b86b123bb5"}
Dec 09 08:47:22 crc kubenswrapper[4786]: I1209 08:47:22.993708 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c2h4k" event={"ID":"427c77e2-5f54-46b4-b2e5-ed6034c20d49","Type":"ContainerStarted","Data":"5a309eb0d06774889f699e96e45c92fdeea3dfe0b8d04578f23a281c71412014"}
Dec 09 08:47:22 crc kubenswrapper[4786]: I1209 08:47:22.995865 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9274k" event={"ID":"621ad974-644c-45fc-a6ba-045ca1f9e033","Type":"ContainerStarted","Data":"7c0b69be8677e792650faf38d3fb25a65f450e824cd8cc8afeb22bc8b5de2f8d"}
Dec 09 08:47:23 crc kubenswrapper[4786]: I1209 08:47:23.020985 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-c2h4k" podStartSLOduration=3.561514356 podStartE2EDuration="59.020962891s" podCreationTimestamp="2025-12-09 08:46:24 +0000 UTC" firstStartedPulling="2025-12-09 08:46:26.932248335 +0000 UTC m=+152.815869561" lastFinishedPulling="2025-12-09 08:47:22.39169686 +0000 UTC m=+208.275318096" observedRunningTime="2025-12-09 08:47:23.015886938 +0000 UTC m=+208.899508164" watchObservedRunningTime="2025-12-09 08:47:23.020962891 +0000 UTC m=+208.904584117"
Dec 09 08:47:23 crc kubenswrapper[4786]: I1209 08:47:23.034131 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9274k" podStartSLOduration=3.79645828 podStartE2EDuration="57.034108904s" podCreationTimestamp="2025-12-09 08:46:26 +0000 UTC" firstStartedPulling="2025-12-09 08:46:29.304334178 +0000 UTC m=+155.187955414" lastFinishedPulling="2025-12-09 08:47:22.541984812 +0000 UTC m=+208.425606038" observedRunningTime="2025-12-09 08:47:23.032887682 +0000 UTC m=+208.916508898" watchObservedRunningTime="2025-12-09 08:47:23.034108904 +0000 UTC m=+208.917730130"
Dec 09 08:47:24 crc kubenswrapper[4786]: I1209 08:47:24.989476 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 08:47:24 crc kubenswrapper[4786]: I1209 08:47:24.989876 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 08:47:24 crc kubenswrapper[4786]: I1209 08:47:24.989976 4786 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-86k5n"
Dec 09 08:47:24 crc kubenswrapper[4786]: I1209 08:47:24.990717 4786 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"de20e796c231cbc6d525978434351e46d932f7d99236a97c078b8147ce5dabd5"} pod="openshift-machine-config-operator/machine-config-daemon-86k5n" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 09 08:47:24 crc kubenswrapper[4786]: I1209 08:47:24.990861 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" containerID="cri-o://de20e796c231cbc6d525978434351e46d932f7d99236a97c078b8147ce5dabd5" gracePeriod=600
Dec 09 08:47:25 crc kubenswrapper[4786]: I1209 08:47:25.327191 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-c2h4k"
Dec 09 08:47:25 crc kubenswrapper[4786]: I1209 08:47:25.327278 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-c2h4k"
Dec 09 08:47:25 crc kubenswrapper[4786]: I1209 08:47:25.490380 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-c2h4k"
Dec 09 08:47:26 crc kubenswrapper[4786]: I1209 08:47:26.019614 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4rm92" event={"ID":"7388aae9-507a-42ff-84cb-9860de1f9f84","Type":"ContainerStarted","Data":"2eb4676477cca01b31f68ff80bda42d993c14f73a95c0bce76b454872c251609"}
Dec 09 08:47:26 crc kubenswrapper[4786]: I1209 08:47:26.027117 4786 generic.go:334] "Generic (PLEG): container finished" podID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerID="de20e796c231cbc6d525978434351e46d932f7d99236a97c078b8147ce5dabd5" exitCode=0
Dec 09 08:47:26 crc kubenswrapper[4786]: I1209 08:47:26.028078 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" event={"ID":"60c502d4-7f9e-4d39-a197-fa70dc4a56d1","Type":"ContainerDied","Data":"de20e796c231cbc6d525978434351e46d932f7d99236a97c078b8147ce5dabd5"}
Dec 09 08:47:26 crc kubenswrapper[4786]: I1209 08:47:26.701754 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9274k"
Dec 09 08:47:26 crc kubenswrapper[4786]: I1209 08:47:26.701962 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9274k"
Dec 09 08:47:26 crc kubenswrapper[4786]: I1209 08:47:26.791930 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9274k"
Dec 09 08:47:27 crc kubenswrapper[4786]: I1209 08:47:27.034160 4786 generic.go:334] "Generic (PLEG): container finished" podID="7388aae9-507a-42ff-84cb-9860de1f9f84" containerID="2eb4676477cca01b31f68ff80bda42d993c14f73a95c0bce76b454872c251609" exitCode=0
Dec 09 08:47:27 crc kubenswrapper[4786]: I1209 08:47:27.034255 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4rm92" event={"ID":"7388aae9-507a-42ff-84cb-9860de1f9f84","Type":"ContainerDied","Data":"2eb4676477cca01b31f68ff80bda42d993c14f73a95c0bce76b454872c251609"}
Dec 09 08:47:27 crc kubenswrapper[4786]: I1209 08:47:27.038416 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" event={"ID":"60c502d4-7f9e-4d39-a197-fa70dc4a56d1","Type":"ContainerStarted","Data":"29646cad7eaa0024c8d0fa3ab2acef12d4e3de7d8dcb6aee6b00df4225f53d60"}
Dec 09 08:47:27 crc kubenswrapper[4786]: I1209 08:47:27.080395 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9274k"
Dec 09 08:47:30 crc kubenswrapper[4786]: I1209 08:47:30.055828 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4rm92" event={"ID":"7388aae9-507a-42ff-84cb-9860de1f9f84","Type":"ContainerStarted","Data":"254a5067697310fa5d98aa71f20fff0a7326c8f926cd2fead0878f7650823fc8"}
Dec 09 08:47:30 crc kubenswrapper[4786]: I1209 08:47:30.058442 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4pq6p" event={"ID":"8fa34078-7115-46e8-9c2f-c82730cf3b41","Type":"ContainerStarted","Data":"20339f4eb59c73344743d757020f9e0bd77e875ba23103978ad93f5c839e096b"}
Dec 09 08:47:30 crc kubenswrapper[4786]: I1209 08:47:30.060106 4786 generic.go:334] "Generic (PLEG): container finished" podID="7a4abbd7-999e-4b15-bfc1-a93939734b36" containerID="3811ac7fc898903acb1e71d01fc804a2222455bffb79bba3bcb8a8fe81e2dea9" exitCode=0
Dec 09 08:47:30 crc kubenswrapper[4786]: I1209 08:47:30.060150 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g4qx2" event={"ID":"7a4abbd7-999e-4b15-bfc1-a93939734b36","Type":"ContainerDied","Data":"3811ac7fc898903acb1e71d01fc804a2222455bffb79bba3bcb8a8fe81e2dea9"}
Dec 09 08:47:30 crc kubenswrapper[4786]: I1209 08:47:30.062155 4786 generic.go:334] "Generic (PLEG): container finished" podID="37a8cb2b-b5e5-408c-9bb9-a26f96805bf3" containerID="605965504a6636cfac4a886ac5353a2904aedbc323b17df5c5974c1ece3e3fd4" exitCode=0
Dec 09 08:47:30 crc kubenswrapper[4786]: I1209 08:47:30.062194 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pfbdd" event={"ID":"37a8cb2b-b5e5-408c-9bb9-a26f96805bf3","Type":"ContainerDied","Data":"605965504a6636cfac4a886ac5353a2904aedbc323b17df5c5974c1ece3e3fd4"}
Dec 09 08:47:30 crc kubenswrapper[4786]: I1209 08:47:30.064127 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d9b7s" event={"ID":"7d9968ce-71c9-4b6d-912f-5f03be10945d","Type":"ContainerStarted","Data":"814773d1f4e42abb73e11c9b10690fb508e2d6ae40da3d7fbc350bfd92c7bc7b"}
Dec 09 08:47:30 crc kubenswrapper[4786]: I1209 08:47:30.065883 4786 generic.go:334] "Generic (PLEG): container finished" podID="2dbf4a5b-0892-4445-b9d0-972b3187c8ee" containerID="3a586c3c826024f6b8aade42224f3559740da5df4476132d0193c74e76445131" exitCode=0
Dec 09 08:47:30 crc kubenswrapper[4786]: I1209 08:47:30.065906 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mtcjp" event={"ID":"2dbf4a5b-0892-4445-b9d0-972b3187c8ee","Type":"ContainerDied","Data":"3a586c3c826024f6b8aade42224f3559740da5df4476132d0193c74e76445131"}
Dec 09 08:47:30 crc kubenswrapper[4786]: I1209 08:47:30.138577 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4rm92" podStartSLOduration=4.169068749 podStartE2EDuration="1m6.138554369s" podCreationTimestamp="2025-12-09 08:46:24 +0000 UTC" firstStartedPulling="2025-12-09 08:46:26.96026369 +0000 UTC m=+152.843884916" lastFinishedPulling="2025-12-09 08:47:28.92974931 +0000 UTC m=+214.813370536" observedRunningTime="2025-12-09 08:47:30.118233618 +0000 UTC m=+216.001854834" watchObservedRunningTime="2025-12-09 08:47:30.138554369 +0000 UTC m=+216.022175595"
Dec 09 08:47:31 crc kubenswrapper[4786]: I1209 08:47:31.073939 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g4qx2" event={"ID":"7a4abbd7-999e-4b15-bfc1-a93939734b36","Type":"ContainerStarted","Data":"da14222d5521c04bcdb066364819347a664312f01a73d8394cbd8fa1e5b9c751"}
Dec 09 08:47:31 crc kubenswrapper[4786]: I1209 08:47:31.075999 4786 generic.go:334] "Generic (PLEG): container finished" podID="7d9968ce-71c9-4b6d-912f-5f03be10945d" containerID="814773d1f4e42abb73e11c9b10690fb508e2d6ae40da3d7fbc350bfd92c7bc7b" exitCode=0
Dec 09 08:47:31 crc kubenswrapper[4786]: I1209 08:47:31.076050 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d9b7s" event={"ID":"7d9968ce-71c9-4b6d-912f-5f03be10945d","Type":"ContainerDied","Data":"814773d1f4e42abb73e11c9b10690fb508e2d6ae40da3d7fbc350bfd92c7bc7b"}
Dec 09 08:47:31 crc kubenswrapper[4786]: I1209 08:47:31.080926 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mtcjp" event={"ID":"2dbf4a5b-0892-4445-b9d0-972b3187c8ee","Type":"ContainerStarted","Data":"4122b57d0742e97b3e837f6f7218942d1186c5eab509f128d3a05557ebfbbc68"}
Dec 09 08:47:31 crc kubenswrapper[4786]: I1209 08:47:31.084018 4786 generic.go:334] "Generic (PLEG): container finished" podID="8fa34078-7115-46e8-9c2f-c82730cf3b41" containerID="20339f4eb59c73344743d757020f9e0bd77e875ba23103978ad93f5c839e096b" exitCode=0
Dec 09 08:47:31 crc kubenswrapper[4786]: I1209 08:47:31.084080 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4pq6p" event={"ID":"8fa34078-7115-46e8-9c2f-c82730cf3b41","Type":"ContainerDied","Data":"20339f4eb59c73344743d757020f9e0bd77e875ba23103978ad93f5c839e096b"}
Dec 09 08:47:31 crc kubenswrapper[4786]: I1209 08:47:31.133718 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mtcjp" podStartSLOduration=5.8477601329999995 podStartE2EDuration="1m7.133683349s" podCreationTimestamp="2025-12-09 08:46:24 +0000 UTC" firstStartedPulling="2025-12-09 08:46:29.306580097 +0000 UTC m=+155.190201323" lastFinishedPulling="2025-12-09 08:47:30.592503313 +0000 UTC m=+216.476124539" observedRunningTime="2025-12-09 08:47:31.129889299 +0000 UTC m=+217.013510525" watchObservedRunningTime="2025-12-09 08:47:31.133683349 +0000 UTC m=+217.017304575"
Dec 09 08:47:32 crc kubenswrapper[4786]: I1209 08:47:32.091873 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4pq6p" event={"ID":"8fa34078-7115-46e8-9c2f-c82730cf3b41","Type":"ContainerStarted","Data":"3d996f23c79be481191b06742c3a2ae8b9b06f43e66f6a23f8bc56d8a425df8b"}
Dec 09 08:47:32 crc kubenswrapper[4786]: I1209 08:47:32.094366 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pfbdd" event={"ID":"37a8cb2b-b5e5-408c-9bb9-a26f96805bf3","Type":"ContainerStarted","Data":"b913b90d28b514e4659a7d62eb51c06d646e011b2b2c7a36fb195f051928deb9"}
Dec 09 08:47:32 crc kubenswrapper[4786]: I1209 08:47:32.096451 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d9b7s" event={"ID":"7d9968ce-71c9-4b6d-912f-5f03be10945d","Type":"ContainerStarted","Data":"063afdb10dd4fedda5caea0b4521eb75980833ecfbe40f58e3dcbe4b460ab8d3"}
Dec 09 08:47:32 crc kubenswrapper[4786]: I1209 08:47:32.137751 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pfbdd" podStartSLOduration=5.8271156 podStartE2EDuration="1m6.137724522s" podCreationTimestamp="2025-12-09 08:46:26 +0000 UTC" firstStartedPulling="2025-12-09 08:46:30.379943015 +0000 UTC m=+156.263564241" lastFinishedPulling="2025-12-09 08:47:30.690551937 +0000 UTC m=+216.574173163" observedRunningTime="2025-12-09 08:47:32.136088059 +0000 UTC m=+218.019709295" watchObservedRunningTime="2025-12-09 08:47:32.137724522 +0000 UTC m=+218.021345748"
Dec 09 08:47:32 crc kubenswrapper[4786]: I1209 08:47:32.140970 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4pq6p" podStartSLOduration=4.042010542 podStartE2EDuration="1m5.140958926s" podCreationTimestamp="2025-12-09 08:46:27 +0000 UTC" firstStartedPulling="2025-12-09 08:46:30.375844648 +0000 UTC m=+156.259465874" lastFinishedPulling="2025-12-09 08:47:31.474793022 +0000 UTC m=+217.358414258" observedRunningTime="2025-12-09 08:47:32.116977189 +0000 UTC m=+218.000598425" watchObservedRunningTime="2025-12-09 08:47:32.140958926 +0000 UTC m=+218.024580172"
Dec 09 08:47:32 crc kubenswrapper[4786]: I1209 08:47:32.164582 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-d9b7s" podStartSLOduration=4.118919693 podStartE2EDuration="1m5.164551954s" podCreationTimestamp="2025-12-09 08:46:27 +0000 UTC" firstStartedPulling="2025-12-09 08:46:30.390361596 +0000 UTC m=+156.273982822" lastFinishedPulling="2025-12-09 08:47:31.435993847 +0000 UTC m=+217.319615083" observedRunningTime="2025-12-09 08:47:32.158741332 +0000 UTC m=+218.042362578" watchObservedRunningTime="2025-12-09 08:47:32.164551954 +0000 UTC m=+218.048173190"
Dec 09 08:47:34 crc kubenswrapper[4786]: I1209 08:47:34.631989 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-g4qx2"
Dec 09 08:47:34 crc kubenswrapper[4786]: I1209 08:47:34.633572 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-g4qx2"
Dec 09 08:47:34 crc kubenswrapper[4786]: I1209 08:47:34.757263 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-g4qx2"
Dec 09 08:47:34 crc kubenswrapper[4786]: I1209 08:47:34.781666 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-g4qx2" podStartSLOduration=7.141077794 podStartE2EDuration="1m10.78164227s" podCreationTimestamp="2025-12-09 08:46:24 +0000 UTC" firstStartedPulling="2025-12-09 08:46:26.940246331 +0000 UTC m=+152.823867557" lastFinishedPulling="2025-12-09 08:47:30.580810807 +0000 UTC m=+216.464432033" observedRunningTime="2025-12-09 08:47:32.184897506 +0000 UTC m=+218.068518742" watchObservedRunningTime="2025-12-09 08:47:34.78164227 +0000 UTC m=+220.665263486"
Dec 09 08:47:35 crc kubenswrapper[4786]: I1209 08:47:35.173598 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4rm92"
Dec 09 08:47:35 crc kubenswrapper[4786]: I1209 08:47:35.173691 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4rm92"
Dec 09 08:47:35 crc kubenswrapper[4786]: I1209 08:47:35.200403 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-g4qx2"
Dec 09 08:47:35 crc kubenswrapper[4786]: I1209 08:47:35.236008 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4rm92"
Dec 09 08:47:35 crc kubenswrapper[4786]: I1209 08:47:35.272349 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mtcjp"
Dec 09 08:47:35 crc kubenswrapper[4786]: I1209 08:47:35.272453 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mtcjp"
Dec 09 08:47:35 crc kubenswrapper[4786]: I1209 08:47:35.315343 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mtcjp"
Dec 09 08:47:35 crc kubenswrapper[4786]: I1209 08:47:35.371676 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-c2h4k"
Dec 09 08:47:36 crc kubenswrapper[4786]: I1209 08:47:36.164403 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mtcjp"
Dec 09 08:47:36 crc kubenswrapper[4786]: I1209 08:47:36.174056 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4rm92"
Dec 09 08:47:37 crc kubenswrapper[4786]: I1209 08:47:37.476894 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pfbdd"
Dec 09 08:47:37 crc kubenswrapper[4786]: I1209 08:47:37.478127 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pfbdd"
Dec 09 08:47:37 crc kubenswrapper[4786]: I1209 08:47:37.571087 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pfbdd"
Dec 09 08:47:37 crc kubenswrapper[4786]: I1209 08:47:37.618594 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mtcjp"]
Dec 09 08:47:37 crc kubenswrapper[4786]: I1209 08:47:37.818709 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-d9b7s"
Dec 09 08:47:37 crc kubenswrapper[4786]: I1209 08:47:37.818773 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-d9b7s"
Dec 09 08:47:37 crc kubenswrapper[4786]: I1209 08:47:37.869200 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-d9b7s"
Dec 09 08:47:38 crc kubenswrapper[4786]: I1209 08:47:38.132758 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mtcjp" podUID="2dbf4a5b-0892-4445-b9d0-972b3187c8ee" containerName="registry-server" containerID="cri-o://4122b57d0742e97b3e837f6f7218942d1186c5eab509f128d3a05557ebfbbc68" gracePeriod=2
Dec 09 08:47:38 crc kubenswrapper[4786]: I1209 08:47:38.170780 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pfbdd"
Dec 09 08:47:38 crc kubenswrapper[4786]: I1209 08:47:38.186719 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-d9b7s"
Dec 09 08:47:38 crc kubenswrapper[4786]: I1209 08:47:38.216835 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4pq6p"
Dec 09 08:47:38 crc kubenswrapper[4786]: I1209 08:47:38.216947 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4pq6p"
Dec 09 08:47:38 crc kubenswrapper[4786]: I1209 08:47:38.256521 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4pq6p"
Dec 09 08:47:39 crc kubenswrapper[4786]: I1209 08:47:39.031150 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mtcjp"
Dec 09 08:47:39 crc kubenswrapper[4786]: I1209 08:47:39.128859 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dbf4a5b-0892-4445-b9d0-972b3187c8ee-utilities\") pod \"2dbf4a5b-0892-4445-b9d0-972b3187c8ee\" (UID: \"2dbf4a5b-0892-4445-b9d0-972b3187c8ee\") "
Dec 09 08:47:39 crc kubenswrapper[4786]: I1209 08:47:39.129026 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvtx8\" (UniqueName: \"kubernetes.io/projected/2dbf4a5b-0892-4445-b9d0-972b3187c8ee-kube-api-access-jvtx8\") pod \"2dbf4a5b-0892-4445-b9d0-972b3187c8ee\" (UID: \"2dbf4a5b-0892-4445-b9d0-972b3187c8ee\") "
Dec 09 08:47:39 crc kubenswrapper[4786]: I1209 08:47:39.129083 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dbf4a5b-0892-4445-b9d0-972b3187c8ee-catalog-content\") pod \"2dbf4a5b-0892-4445-b9d0-972b3187c8ee\" (UID: \"2dbf4a5b-0892-4445-b9d0-972b3187c8ee\") "
Dec 09 08:47:39 crc kubenswrapper[4786]: I1209 08:47:39.130452 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2dbf4a5b-0892-4445-b9d0-972b3187c8ee-utilities" (OuterVolumeSpecName: "utilities") pod "2dbf4a5b-0892-4445-b9d0-972b3187c8ee" (UID: "2dbf4a5b-0892-4445-b9d0-972b3187c8ee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 08:47:39 crc kubenswrapper[4786]: I1209 08:47:39.134225 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dbf4a5b-0892-4445-b9d0-972b3187c8ee-kube-api-access-jvtx8" (OuterVolumeSpecName: "kube-api-access-jvtx8") pod "2dbf4a5b-0892-4445-b9d0-972b3187c8ee" (UID: "2dbf4a5b-0892-4445-b9d0-972b3187c8ee"). InnerVolumeSpecName "kube-api-access-jvtx8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 08:47:39 crc kubenswrapper[4786]: I1209 08:47:39.143798 4786 generic.go:334] "Generic (PLEG): container finished" podID="2dbf4a5b-0892-4445-b9d0-972b3187c8ee" containerID="4122b57d0742e97b3e837f6f7218942d1186c5eab509f128d3a05557ebfbbc68" exitCode=0
Dec 09 08:47:39 crc kubenswrapper[4786]: I1209 08:47:39.143870 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mtcjp" event={"ID":"2dbf4a5b-0892-4445-b9d0-972b3187c8ee","Type":"ContainerDied","Data":"4122b57d0742e97b3e837f6f7218942d1186c5eab509f128d3a05557ebfbbc68"}
Dec 09 08:47:39 crc kubenswrapper[4786]: I1209 08:47:39.143926 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mtcjp"
Dec 09 08:47:39 crc kubenswrapper[4786]: I1209 08:47:39.143955 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mtcjp" event={"ID":"2dbf4a5b-0892-4445-b9d0-972b3187c8ee","Type":"ContainerDied","Data":"e6a75af7a5a1dd0bcb3c9bb6ed402b1b2665ffda5862db6425665c5774959092"}
Dec 09 08:47:39 crc kubenswrapper[4786]: I1209 08:47:39.144025 4786 scope.go:117] "RemoveContainer" containerID="4122b57d0742e97b3e837f6f7218942d1186c5eab509f128d3a05557ebfbbc68"
Dec 09 08:47:39 crc kubenswrapper[4786]: I1209 08:47:39.166210 4786 scope.go:117] "RemoveContainer" containerID="3a586c3c826024f6b8aade42224f3559740da5df4476132d0193c74e76445131"
Dec 09 08:47:39 crc kubenswrapper[4786]: I1209 08:47:39.183023 4786 scope.go:117] "RemoveContainer" containerID="f2a55ed45c78cf4d25672cb30292603b138dc2b9a57095a832edb4484689f814"
Dec 09 08:47:39 crc kubenswrapper[4786]: I1209 08:47:39.208808 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2dbf4a5b-0892-4445-b9d0-972b3187c8ee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2dbf4a5b-0892-4445-b9d0-972b3187c8ee" (UID: "2dbf4a5b-0892-4445-b9d0-972b3187c8ee"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 08:47:39 crc kubenswrapper[4786]: I1209 08:47:39.211016 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4pq6p"
Dec 09 08:47:39 crc kubenswrapper[4786]: I1209 08:47:39.212527 4786 scope.go:117] "RemoveContainer" containerID="4122b57d0742e97b3e837f6f7218942d1186c5eab509f128d3a05557ebfbbc68"
Dec 09 08:47:39 crc kubenswrapper[4786]: E1209 08:47:39.213879 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4122b57d0742e97b3e837f6f7218942d1186c5eab509f128d3a05557ebfbbc68\": container with ID starting with 4122b57d0742e97b3e837f6f7218942d1186c5eab509f128d3a05557ebfbbc68 not found: ID does not exist" containerID="4122b57d0742e97b3e837f6f7218942d1186c5eab509f128d3a05557ebfbbc68"
Dec 09 08:47:39 crc kubenswrapper[4786]: I1209 08:47:39.213943 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4122b57d0742e97b3e837f6f7218942d1186c5eab509f128d3a05557ebfbbc68"} err="failed to get container status \"4122b57d0742e97b3e837f6f7218942d1186c5eab509f128d3a05557ebfbbc68\": rpc error: code = NotFound desc = could not find container \"4122b57d0742e97b3e837f6f7218942d1186c5eab509f128d3a05557ebfbbc68\": container with ID starting with 4122b57d0742e97b3e837f6f7218942d1186c5eab509f128d3a05557ebfbbc68 not found: ID does not exist"
Dec 09 08:47:39 crc kubenswrapper[4786]: I1209 08:47:39.213985 4786 scope.go:117] "RemoveContainer" containerID="3a586c3c826024f6b8aade42224f3559740da5df4476132d0193c74e76445131"
Dec 09 08:47:39 crc kubenswrapper[4786]: E1209 08:47:39.214499 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a586c3c826024f6b8aade42224f3559740da5df4476132d0193c74e76445131\":
container with ID starting with 3a586c3c826024f6b8aade42224f3559740da5df4476132d0193c74e76445131 not found: ID does not exist" containerID="3a586c3c826024f6b8aade42224f3559740da5df4476132d0193c74e76445131" Dec 09 08:47:39 crc kubenswrapper[4786]: I1209 08:47:39.214558 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a586c3c826024f6b8aade42224f3559740da5df4476132d0193c74e76445131"} err="failed to get container status \"3a586c3c826024f6b8aade42224f3559740da5df4476132d0193c74e76445131\": rpc error: code = NotFound desc = could not find container \"3a586c3c826024f6b8aade42224f3559740da5df4476132d0193c74e76445131\": container with ID starting with 3a586c3c826024f6b8aade42224f3559740da5df4476132d0193c74e76445131 not found: ID does not exist" Dec 09 08:47:39 crc kubenswrapper[4786]: I1209 08:47:39.214597 4786 scope.go:117] "RemoveContainer" containerID="f2a55ed45c78cf4d25672cb30292603b138dc2b9a57095a832edb4484689f814" Dec 09 08:47:39 crc kubenswrapper[4786]: E1209 08:47:39.215148 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2a55ed45c78cf4d25672cb30292603b138dc2b9a57095a832edb4484689f814\": container with ID starting with f2a55ed45c78cf4d25672cb30292603b138dc2b9a57095a832edb4484689f814 not found: ID does not exist" containerID="f2a55ed45c78cf4d25672cb30292603b138dc2b9a57095a832edb4484689f814" Dec 09 08:47:39 crc kubenswrapper[4786]: I1209 08:47:39.215218 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2a55ed45c78cf4d25672cb30292603b138dc2b9a57095a832edb4484689f814"} err="failed to get container status \"f2a55ed45c78cf4d25672cb30292603b138dc2b9a57095a832edb4484689f814\": rpc error: code = NotFound desc = could not find container \"f2a55ed45c78cf4d25672cb30292603b138dc2b9a57095a832edb4484689f814\": container with ID starting with 
f2a55ed45c78cf4d25672cb30292603b138dc2b9a57095a832edb4484689f814 not found: ID does not exist" Dec 09 08:47:39 crc kubenswrapper[4786]: I1209 08:47:39.238470 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2dbf4a5b-0892-4445-b9d0-972b3187c8ee-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 08:47:39 crc kubenswrapper[4786]: I1209 08:47:39.238611 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvtx8\" (UniqueName: \"kubernetes.io/projected/2dbf4a5b-0892-4445-b9d0-972b3187c8ee-kube-api-access-jvtx8\") on node \"crc\" DevicePath \"\"" Dec 09 08:47:39 crc kubenswrapper[4786]: I1209 08:47:39.238646 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2dbf4a5b-0892-4445-b9d0-972b3187c8ee-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 08:47:39 crc kubenswrapper[4786]: I1209 08:47:39.425049 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c2h4k"] Dec 09 08:47:39 crc kubenswrapper[4786]: I1209 08:47:39.425372 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-c2h4k" podUID="427c77e2-5f54-46b4-b2e5-ed6034c20d49" containerName="registry-server" containerID="cri-o://5a309eb0d06774889f699e96e45c92fdeea3dfe0b8d04578f23a281c71412014" gracePeriod=2 Dec 09 08:47:39 crc kubenswrapper[4786]: I1209 08:47:39.484574 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mtcjp"] Dec 09 08:47:39 crc kubenswrapper[4786]: I1209 08:47:39.485700 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mtcjp"] Dec 09 08:47:40 crc kubenswrapper[4786]: I1209 08:47:40.024477 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pfbdd"] Dec 09 08:47:40 crc 
kubenswrapper[4786]: I1209 08:47:40.157451 4786 generic.go:334] "Generic (PLEG): container finished" podID="427c77e2-5f54-46b4-b2e5-ed6034c20d49" containerID="5a309eb0d06774889f699e96e45c92fdeea3dfe0b8d04578f23a281c71412014" exitCode=0 Dec 09 08:47:40 crc kubenswrapper[4786]: I1209 08:47:40.157472 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c2h4k" event={"ID":"427c77e2-5f54-46b4-b2e5-ed6034c20d49","Type":"ContainerDied","Data":"5a309eb0d06774889f699e96e45c92fdeea3dfe0b8d04578f23a281c71412014"} Dec 09 08:47:40 crc kubenswrapper[4786]: I1209 08:47:40.334078 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-c2h4k" Dec 09 08:47:40 crc kubenswrapper[4786]: I1209 08:47:40.463734 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tf9kc\" (UniqueName: \"kubernetes.io/projected/427c77e2-5f54-46b4-b2e5-ed6034c20d49-kube-api-access-tf9kc\") pod \"427c77e2-5f54-46b4-b2e5-ed6034c20d49\" (UID: \"427c77e2-5f54-46b4-b2e5-ed6034c20d49\") " Dec 09 08:47:40 crc kubenswrapper[4786]: I1209 08:47:40.463844 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/427c77e2-5f54-46b4-b2e5-ed6034c20d49-catalog-content\") pod \"427c77e2-5f54-46b4-b2e5-ed6034c20d49\" (UID: \"427c77e2-5f54-46b4-b2e5-ed6034c20d49\") " Dec 09 08:47:40 crc kubenswrapper[4786]: I1209 08:47:40.463871 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/427c77e2-5f54-46b4-b2e5-ed6034c20d49-utilities\") pod \"427c77e2-5f54-46b4-b2e5-ed6034c20d49\" (UID: \"427c77e2-5f54-46b4-b2e5-ed6034c20d49\") " Dec 09 08:47:40 crc kubenswrapper[4786]: I1209 08:47:40.465647 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/427c77e2-5f54-46b4-b2e5-ed6034c20d49-utilities" (OuterVolumeSpecName: "utilities") pod "427c77e2-5f54-46b4-b2e5-ed6034c20d49" (UID: "427c77e2-5f54-46b4-b2e5-ed6034c20d49"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 08:47:40 crc kubenswrapper[4786]: I1209 08:47:40.471460 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/427c77e2-5f54-46b4-b2e5-ed6034c20d49-kube-api-access-tf9kc" (OuterVolumeSpecName: "kube-api-access-tf9kc") pod "427c77e2-5f54-46b4-b2e5-ed6034c20d49" (UID: "427c77e2-5f54-46b4-b2e5-ed6034c20d49"). InnerVolumeSpecName "kube-api-access-tf9kc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 08:47:40 crc kubenswrapper[4786]: I1209 08:47:40.534495 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/427c77e2-5f54-46b4-b2e5-ed6034c20d49-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "427c77e2-5f54-46b4-b2e5-ed6034c20d49" (UID: "427c77e2-5f54-46b4-b2e5-ed6034c20d49"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 08:47:40 crc kubenswrapper[4786]: I1209 08:47:40.565794 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tf9kc\" (UniqueName: \"kubernetes.io/projected/427c77e2-5f54-46b4-b2e5-ed6034c20d49-kube-api-access-tf9kc\") on node \"crc\" DevicePath \"\"" Dec 09 08:47:40 crc kubenswrapper[4786]: I1209 08:47:40.565839 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/427c77e2-5f54-46b4-b2e5-ed6034c20d49-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 08:47:40 crc kubenswrapper[4786]: I1209 08:47:40.565849 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/427c77e2-5f54-46b4-b2e5-ed6034c20d49-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 08:47:41 crc kubenswrapper[4786]: I1209 08:47:41.165308 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-c2h4k" event={"ID":"427c77e2-5f54-46b4-b2e5-ed6034c20d49","Type":"ContainerDied","Data":"9cfe51389b7ea2aac50adbfc92ee9c81fa769ea9a780056208ab3d1376b6c404"} Dec 09 08:47:41 crc kubenswrapper[4786]: I1209 08:47:41.165397 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-c2h4k" Dec 09 08:47:41 crc kubenswrapper[4786]: I1209 08:47:41.165669 4786 scope.go:117] "RemoveContainer" containerID="5a309eb0d06774889f699e96e45c92fdeea3dfe0b8d04578f23a281c71412014" Dec 09 08:47:41 crc kubenswrapper[4786]: I1209 08:47:41.166120 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pfbdd" podUID="37a8cb2b-b5e5-408c-9bb9-a26f96805bf3" containerName="registry-server" containerID="cri-o://b913b90d28b514e4659a7d62eb51c06d646e011b2b2c7a36fb195f051928deb9" gracePeriod=2 Dec 09 08:47:41 crc kubenswrapper[4786]: I1209 08:47:41.192320 4786 scope.go:117] "RemoveContainer" containerID="4d351603fc7f5943141007f40b1b6a6f312657caf8180926f23c06c2ce828c5e" Dec 09 08:47:41 crc kubenswrapper[4786]: I1209 08:47:41.199933 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dbf4a5b-0892-4445-b9d0-972b3187c8ee" path="/var/lib/kubelet/pods/2dbf4a5b-0892-4445-b9d0-972b3187c8ee/volumes" Dec 09 08:47:41 crc kubenswrapper[4786]: I1209 08:47:41.201877 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-c2h4k"] Dec 09 08:47:41 crc kubenswrapper[4786]: I1209 08:47:41.205507 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-c2h4k"] Dec 09 08:47:41 crc kubenswrapper[4786]: I1209 08:47:41.226493 4786 scope.go:117] "RemoveContainer" containerID="20affb1f3cefdc9738cd7182997aa73ed3637b2f841cd4cab3b1faca85e9cb6f" Dec 09 08:47:42 crc kubenswrapper[4786]: I1209 08:47:42.178822 4786 generic.go:334] "Generic (PLEG): container finished" podID="37a8cb2b-b5e5-408c-9bb9-a26f96805bf3" containerID="b913b90d28b514e4659a7d62eb51c06d646e011b2b2c7a36fb195f051928deb9" exitCode=0 Dec 09 08:47:42 crc kubenswrapper[4786]: I1209 08:47:42.178892 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-pfbdd" event={"ID":"37a8cb2b-b5e5-408c-9bb9-a26f96805bf3","Type":"ContainerDied","Data":"b913b90d28b514e4659a7d62eb51c06d646e011b2b2c7a36fb195f051928deb9"} Dec 09 08:47:42 crc kubenswrapper[4786]: I1209 08:47:42.417887 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4pq6p"] Dec 09 08:47:42 crc kubenswrapper[4786]: I1209 08:47:42.418634 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4pq6p" podUID="8fa34078-7115-46e8-9c2f-c82730cf3b41" containerName="registry-server" containerID="cri-o://3d996f23c79be481191b06742c3a2ae8b9b06f43e66f6a23f8bc56d8a425df8b" gracePeriod=2 Dec 09 08:47:42 crc kubenswrapper[4786]: I1209 08:47:42.441326 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pfbdd" Dec 09 08:47:42 crc kubenswrapper[4786]: I1209 08:47:42.499760 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37a8cb2b-b5e5-408c-9bb9-a26f96805bf3-catalog-content\") pod \"37a8cb2b-b5e5-408c-9bb9-a26f96805bf3\" (UID: \"37a8cb2b-b5e5-408c-9bb9-a26f96805bf3\") " Dec 09 08:47:42 crc kubenswrapper[4786]: I1209 08:47:42.499887 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37a8cb2b-b5e5-408c-9bb9-a26f96805bf3-utilities\") pod \"37a8cb2b-b5e5-408c-9bb9-a26f96805bf3\" (UID: \"37a8cb2b-b5e5-408c-9bb9-a26f96805bf3\") " Dec 09 08:47:42 crc kubenswrapper[4786]: I1209 08:47:42.499917 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qfr8\" (UniqueName: \"kubernetes.io/projected/37a8cb2b-b5e5-408c-9bb9-a26f96805bf3-kube-api-access-7qfr8\") pod \"37a8cb2b-b5e5-408c-9bb9-a26f96805bf3\" (UID: 
\"37a8cb2b-b5e5-408c-9bb9-a26f96805bf3\") " Dec 09 08:47:42 crc kubenswrapper[4786]: I1209 08:47:42.501509 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37a8cb2b-b5e5-408c-9bb9-a26f96805bf3-utilities" (OuterVolumeSpecName: "utilities") pod "37a8cb2b-b5e5-408c-9bb9-a26f96805bf3" (UID: "37a8cb2b-b5e5-408c-9bb9-a26f96805bf3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 08:47:42 crc kubenswrapper[4786]: I1209 08:47:42.506511 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37a8cb2b-b5e5-408c-9bb9-a26f96805bf3-kube-api-access-7qfr8" (OuterVolumeSpecName: "kube-api-access-7qfr8") pod "37a8cb2b-b5e5-408c-9bb9-a26f96805bf3" (UID: "37a8cb2b-b5e5-408c-9bb9-a26f96805bf3"). InnerVolumeSpecName "kube-api-access-7qfr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 08:47:42 crc kubenswrapper[4786]: I1209 08:47:42.520232 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37a8cb2b-b5e5-408c-9bb9-a26f96805bf3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "37a8cb2b-b5e5-408c-9bb9-a26f96805bf3" (UID: "37a8cb2b-b5e5-408c-9bb9-a26f96805bf3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 08:47:42 crc kubenswrapper[4786]: I1209 08:47:42.600913 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/37a8cb2b-b5e5-408c-9bb9-a26f96805bf3-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 08:47:42 crc kubenswrapper[4786]: I1209 08:47:42.600970 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qfr8\" (UniqueName: \"kubernetes.io/projected/37a8cb2b-b5e5-408c-9bb9-a26f96805bf3-kube-api-access-7qfr8\") on node \"crc\" DevicePath \"\"" Dec 09 08:47:42 crc kubenswrapper[4786]: I1209 08:47:42.600987 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/37a8cb2b-b5e5-408c-9bb9-a26f96805bf3-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 08:47:42 crc kubenswrapper[4786]: I1209 08:47:42.790804 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4pq6p" Dec 09 08:47:42 crc kubenswrapper[4786]: I1209 08:47:42.802812 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fa34078-7115-46e8-9c2f-c82730cf3b41-catalog-content\") pod \"8fa34078-7115-46e8-9c2f-c82730cf3b41\" (UID: \"8fa34078-7115-46e8-9c2f-c82730cf3b41\") " Dec 09 08:47:42 crc kubenswrapper[4786]: I1209 08:47:42.802873 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zqst\" (UniqueName: \"kubernetes.io/projected/8fa34078-7115-46e8-9c2f-c82730cf3b41-kube-api-access-7zqst\") pod \"8fa34078-7115-46e8-9c2f-c82730cf3b41\" (UID: \"8fa34078-7115-46e8-9c2f-c82730cf3b41\") " Dec 09 08:47:42 crc kubenswrapper[4786]: I1209 08:47:42.802941 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/8fa34078-7115-46e8-9c2f-c82730cf3b41-utilities\") pod \"8fa34078-7115-46e8-9c2f-c82730cf3b41\" (UID: \"8fa34078-7115-46e8-9c2f-c82730cf3b41\") " Dec 09 08:47:42 crc kubenswrapper[4786]: I1209 08:47:42.803750 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fa34078-7115-46e8-9c2f-c82730cf3b41-utilities" (OuterVolumeSpecName: "utilities") pod "8fa34078-7115-46e8-9c2f-c82730cf3b41" (UID: "8fa34078-7115-46e8-9c2f-c82730cf3b41"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 08:47:42 crc kubenswrapper[4786]: I1209 08:47:42.804074 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8fa34078-7115-46e8-9c2f-c82730cf3b41-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 08:47:42 crc kubenswrapper[4786]: I1209 08:47:42.815739 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8fa34078-7115-46e8-9c2f-c82730cf3b41-kube-api-access-7zqst" (OuterVolumeSpecName: "kube-api-access-7zqst") pod "8fa34078-7115-46e8-9c2f-c82730cf3b41" (UID: "8fa34078-7115-46e8-9c2f-c82730cf3b41"). InnerVolumeSpecName "kube-api-access-7zqst". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 08:47:42 crc kubenswrapper[4786]: I1209 08:47:42.905767 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zqst\" (UniqueName: \"kubernetes.io/projected/8fa34078-7115-46e8-9c2f-c82730cf3b41-kube-api-access-7zqst\") on node \"crc\" DevicePath \"\"" Dec 09 08:47:42 crc kubenswrapper[4786]: I1209 08:47:42.934573 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8fa34078-7115-46e8-9c2f-c82730cf3b41-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8fa34078-7115-46e8-9c2f-c82730cf3b41" (UID: "8fa34078-7115-46e8-9c2f-c82730cf3b41"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 08:47:43 crc kubenswrapper[4786]: I1209 08:47:43.006539 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8fa34078-7115-46e8-9c2f-c82730cf3b41-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 08:47:43 crc kubenswrapper[4786]: I1209 08:47:43.188475 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4pq6p" Dec 09 08:47:43 crc kubenswrapper[4786]: I1209 08:47:43.188303 4786 generic.go:334] "Generic (PLEG): container finished" podID="8fa34078-7115-46e8-9c2f-c82730cf3b41" containerID="3d996f23c79be481191b06742c3a2ae8b9b06f43e66f6a23f8bc56d8a425df8b" exitCode=0 Dec 09 08:47:43 crc kubenswrapper[4786]: I1209 08:47:43.195105 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pfbdd" Dec 09 08:47:43 crc kubenswrapper[4786]: I1209 08:47:43.196276 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="427c77e2-5f54-46b4-b2e5-ed6034c20d49" path="/var/lib/kubelet/pods/427c77e2-5f54-46b4-b2e5-ed6034c20d49/volumes" Dec 09 08:47:43 crc kubenswrapper[4786]: I1209 08:47:43.197273 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4pq6p" event={"ID":"8fa34078-7115-46e8-9c2f-c82730cf3b41","Type":"ContainerDied","Data":"3d996f23c79be481191b06742c3a2ae8b9b06f43e66f6a23f8bc56d8a425df8b"} Dec 09 08:47:43 crc kubenswrapper[4786]: I1209 08:47:43.197323 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4pq6p" event={"ID":"8fa34078-7115-46e8-9c2f-c82730cf3b41","Type":"ContainerDied","Data":"2714b4e6cf93848794e7aac5406a2eb961d1dee0366c8048b0cc96e102e39dff"} Dec 09 08:47:43 crc kubenswrapper[4786]: I1209 08:47:43.197341 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-pfbdd" event={"ID":"37a8cb2b-b5e5-408c-9bb9-a26f96805bf3","Type":"ContainerDied","Data":"3a6fe8e46677a5a4aeeac3cf902e91f592a08e546e1a3f2b3f9a703aafd0c8ed"} Dec 09 08:47:43 crc kubenswrapper[4786]: I1209 08:47:43.197462 4786 scope.go:117] "RemoveContainer" containerID="3d996f23c79be481191b06742c3a2ae8b9b06f43e66f6a23f8bc56d8a425df8b" Dec 09 08:47:43 crc kubenswrapper[4786]: I1209 08:47:43.234871 4786 scope.go:117] "RemoveContainer" containerID="20339f4eb59c73344743d757020f9e0bd77e875ba23103978ad93f5c839e096b" Dec 09 08:47:43 crc kubenswrapper[4786]: I1209 08:47:43.263395 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4pq6p"] Dec 09 08:47:43 crc kubenswrapper[4786]: I1209 08:47:43.275039 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4pq6p"] Dec 09 08:47:43 crc kubenswrapper[4786]: I1209 08:47:43.276871 4786 scope.go:117] "RemoveContainer" containerID="75de1ef0d20d61d54b5bdd25b63053c5d7801c7f91a57167d8738a673504584e" Dec 09 08:47:43 crc kubenswrapper[4786]: I1209 08:47:43.283390 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pfbdd"] Dec 09 08:47:43 crc kubenswrapper[4786]: I1209 08:47:43.287113 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pfbdd"] Dec 09 08:47:43 crc kubenswrapper[4786]: I1209 08:47:43.297499 4786 scope.go:117] "RemoveContainer" containerID="3d996f23c79be481191b06742c3a2ae8b9b06f43e66f6a23f8bc56d8a425df8b" Dec 09 08:47:43 crc kubenswrapper[4786]: E1209 08:47:43.298229 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d996f23c79be481191b06742c3a2ae8b9b06f43e66f6a23f8bc56d8a425df8b\": container with ID starting with 3d996f23c79be481191b06742c3a2ae8b9b06f43e66f6a23f8bc56d8a425df8b not found: ID does not exist" 
containerID="3d996f23c79be481191b06742c3a2ae8b9b06f43e66f6a23f8bc56d8a425df8b" Dec 09 08:47:43 crc kubenswrapper[4786]: I1209 08:47:43.298282 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d996f23c79be481191b06742c3a2ae8b9b06f43e66f6a23f8bc56d8a425df8b"} err="failed to get container status \"3d996f23c79be481191b06742c3a2ae8b9b06f43e66f6a23f8bc56d8a425df8b\": rpc error: code = NotFound desc = could not find container \"3d996f23c79be481191b06742c3a2ae8b9b06f43e66f6a23f8bc56d8a425df8b\": container with ID starting with 3d996f23c79be481191b06742c3a2ae8b9b06f43e66f6a23f8bc56d8a425df8b not found: ID does not exist" Dec 09 08:47:43 crc kubenswrapper[4786]: I1209 08:47:43.298321 4786 scope.go:117] "RemoveContainer" containerID="20339f4eb59c73344743d757020f9e0bd77e875ba23103978ad93f5c839e096b" Dec 09 08:47:43 crc kubenswrapper[4786]: E1209 08:47:43.298674 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20339f4eb59c73344743d757020f9e0bd77e875ba23103978ad93f5c839e096b\": container with ID starting with 20339f4eb59c73344743d757020f9e0bd77e875ba23103978ad93f5c839e096b not found: ID does not exist" containerID="20339f4eb59c73344743d757020f9e0bd77e875ba23103978ad93f5c839e096b" Dec 09 08:47:43 crc kubenswrapper[4786]: I1209 08:47:43.298713 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20339f4eb59c73344743d757020f9e0bd77e875ba23103978ad93f5c839e096b"} err="failed to get container status \"20339f4eb59c73344743d757020f9e0bd77e875ba23103978ad93f5c839e096b\": rpc error: code = NotFound desc = could not find container \"20339f4eb59c73344743d757020f9e0bd77e875ba23103978ad93f5c839e096b\": container with ID starting with 20339f4eb59c73344743d757020f9e0bd77e875ba23103978ad93f5c839e096b not found: ID does not exist" Dec 09 08:47:43 crc kubenswrapper[4786]: I1209 08:47:43.298739 4786 scope.go:117] 
"RemoveContainer" containerID="75de1ef0d20d61d54b5bdd25b63053c5d7801c7f91a57167d8738a673504584e" Dec 09 08:47:43 crc kubenswrapper[4786]: I1209 08:47:43.298868 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-hk4xf"] Dec 09 08:47:43 crc kubenswrapper[4786]: E1209 08:47:43.299122 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75de1ef0d20d61d54b5bdd25b63053c5d7801c7f91a57167d8738a673504584e\": container with ID starting with 75de1ef0d20d61d54b5bdd25b63053c5d7801c7f91a57167d8738a673504584e not found: ID does not exist" containerID="75de1ef0d20d61d54b5bdd25b63053c5d7801c7f91a57167d8738a673504584e" Dec 09 08:47:43 crc kubenswrapper[4786]: I1209 08:47:43.299147 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75de1ef0d20d61d54b5bdd25b63053c5d7801c7f91a57167d8738a673504584e"} err="failed to get container status \"75de1ef0d20d61d54b5bdd25b63053c5d7801c7f91a57167d8738a673504584e\": rpc error: code = NotFound desc = could not find container \"75de1ef0d20d61d54b5bdd25b63053c5d7801c7f91a57167d8738a673504584e\": container with ID starting with 75de1ef0d20d61d54b5bdd25b63053c5d7801c7f91a57167d8738a673504584e not found: ID does not exist" Dec 09 08:47:43 crc kubenswrapper[4786]: I1209 08:47:43.299163 4786 scope.go:117] "RemoveContainer" containerID="b913b90d28b514e4659a7d62eb51c06d646e011b2b2c7a36fb195f051928deb9" Dec 09 08:47:43 crc kubenswrapper[4786]: I1209 08:47:43.327711 4786 scope.go:117] "RemoveContainer" containerID="605965504a6636cfac4a886ac5353a2904aedbc323b17df5c5974c1ece3e3fd4" Dec 09 08:47:43 crc kubenswrapper[4786]: I1209 08:47:43.360751 4786 scope.go:117] "RemoveContainer" containerID="1cda6151b3557121bdc05b20707a326ca2de52c99334496810270b45767bb0d2" Dec 09 08:47:45 crc kubenswrapper[4786]: I1209 08:47:45.201503 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="37a8cb2b-b5e5-408c-9bb9-a26f96805bf3" path="/var/lib/kubelet/pods/37a8cb2b-b5e5-408c-9bb9-a26f96805bf3/volumes" Dec 09 08:47:45 crc kubenswrapper[4786]: I1209 08:47:45.203114 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8fa34078-7115-46e8-9c2f-c82730cf3b41" path="/var/lib/kubelet/pods/8fa34078-7115-46e8-9c2f-c82730cf3b41/volumes" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.006023 4786 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 09 08:47:50 crc kubenswrapper[4786]: E1209 08:47:50.007084 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d0314ea-cf72-4896-8b8d-a97c91b119d8" containerName="pruner" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.007110 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d0314ea-cf72-4896-8b8d-a97c91b119d8" containerName="pruner" Dec 09 08:47:50 crc kubenswrapper[4786]: E1209 08:47:50.007126 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fa34078-7115-46e8-9c2f-c82730cf3b41" containerName="extract-utilities" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.007133 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fa34078-7115-46e8-9c2f-c82730cf3b41" containerName="extract-utilities" Dec 09 08:47:50 crc kubenswrapper[4786]: E1209 08:47:50.007143 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fa34078-7115-46e8-9c2f-c82730cf3b41" containerName="extract-content" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.007149 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fa34078-7115-46e8-9c2f-c82730cf3b41" containerName="extract-content" Dec 09 08:47:50 crc kubenswrapper[4786]: E1209 08:47:50.007157 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8fa34078-7115-46e8-9c2f-c82730cf3b41" containerName="registry-server" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 
08:47:50.007163 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="8fa34078-7115-46e8-9c2f-c82730cf3b41" containerName="registry-server" Dec 09 08:47:50 crc kubenswrapper[4786]: E1209 08:47:50.007176 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dbf4a5b-0892-4445-b9d0-972b3187c8ee" containerName="registry-server" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.007182 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dbf4a5b-0892-4445-b9d0-972b3187c8ee" containerName="registry-server" Dec 09 08:47:50 crc kubenswrapper[4786]: E1209 08:47:50.007192 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="427c77e2-5f54-46b4-b2e5-ed6034c20d49" containerName="registry-server" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.007199 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="427c77e2-5f54-46b4-b2e5-ed6034c20d49" containerName="registry-server" Dec 09 08:47:50 crc kubenswrapper[4786]: E1209 08:47:50.007209 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dbf4a5b-0892-4445-b9d0-972b3187c8ee" containerName="extract-utilities" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.007217 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dbf4a5b-0892-4445-b9d0-972b3187c8ee" containerName="extract-utilities" Dec 09 08:47:50 crc kubenswrapper[4786]: E1209 08:47:50.007228 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="427c77e2-5f54-46b4-b2e5-ed6034c20d49" containerName="extract-utilities" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.007236 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="427c77e2-5f54-46b4-b2e5-ed6034c20d49" containerName="extract-utilities" Dec 09 08:47:50 crc kubenswrapper[4786]: E1209 08:47:50.007247 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dbf4a5b-0892-4445-b9d0-972b3187c8ee" containerName="extract-content" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 
08:47:50.007254 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dbf4a5b-0892-4445-b9d0-972b3187c8ee" containerName="extract-content" Dec 09 08:47:50 crc kubenswrapper[4786]: E1209 08:47:50.007271 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37a8cb2b-b5e5-408c-9bb9-a26f96805bf3" containerName="registry-server" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.007280 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="37a8cb2b-b5e5-408c-9bb9-a26f96805bf3" containerName="registry-server" Dec 09 08:47:50 crc kubenswrapper[4786]: E1209 08:47:50.007295 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37a8cb2b-b5e5-408c-9bb9-a26f96805bf3" containerName="extract-content" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.007301 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="37a8cb2b-b5e5-408c-9bb9-a26f96805bf3" containerName="extract-content" Dec 09 08:47:50 crc kubenswrapper[4786]: E1209 08:47:50.007313 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37a8cb2b-b5e5-408c-9bb9-a26f96805bf3" containerName="extract-utilities" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.007319 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="37a8cb2b-b5e5-408c-9bb9-a26f96805bf3" containerName="extract-utilities" Dec 09 08:47:50 crc kubenswrapper[4786]: E1209 08:47:50.007332 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="427c77e2-5f54-46b4-b2e5-ed6034c20d49" containerName="extract-content" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.007338 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="427c77e2-5f54-46b4-b2e5-ed6034c20d49" containerName="extract-content" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.007537 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="427c77e2-5f54-46b4-b2e5-ed6034c20d49" containerName="registry-server" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 
08:47:50.007553 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="8fa34078-7115-46e8-9c2f-c82730cf3b41" containerName="registry-server" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.007564 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="37a8cb2b-b5e5-408c-9bb9-a26f96805bf3" containerName="registry-server" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.007574 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d0314ea-cf72-4896-8b8d-a97c91b119d8" containerName="pruner" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.007584 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dbf4a5b-0892-4445-b9d0-972b3187c8ee" containerName="registry-server" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.008118 4786 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.008148 4786 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 09 08:47:50 crc kubenswrapper[4786]: E1209 08:47:50.008294 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.008304 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 09 08:47:50 crc kubenswrapper[4786]: E1209 08:47:50.008314 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.008321 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 09 08:47:50 crc kubenswrapper[4786]: E1209 08:47:50.008330 4786 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.008336 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 09 08:47:50 crc kubenswrapper[4786]: E1209 08:47:50.008351 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.008357 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 09 08:47:50 crc kubenswrapper[4786]: E1209 08:47:50.008366 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.008372 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 09 08:47:50 crc kubenswrapper[4786]: E1209 08:47:50.008384 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.008390 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 09 08:47:50 crc kubenswrapper[4786]: E1209 08:47:50.008477 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.008486 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 09 08:47:50 crc 
kubenswrapper[4786]: I1209 08:47:50.008595 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.008603 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.008612 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.008620 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.008628 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.008635 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.008643 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 09 08:47:50 crc kubenswrapper[4786]: E1209 08:47:50.008829 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.008838 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.008868 4786 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://4f6f4abb61df9828cb9bec57c2f2fc4709a6cf12afc72467af5466723a4335f3" gracePeriod=15 Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.008931 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://f57fe092832b333e8fa6389429ad3fd571c3e0162f619fdd69efcb0b84e88bac" gracePeriod=15 Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.009050 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://a164888526a3ab84fbd409540bc54403fadd586a0583e323d5ca1b10c81cbfc4" gracePeriod=15 Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.008766 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://56160ed128040dde583c3142616d9af9180427b9e32520d6330fd166e9c4207b" gracePeriod=15 Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.008930 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://d7b3741b79ca6fb3fc12acaa290b9d7accb765f36a88f0b8fb6e6362c6596a58" gracePeriod=15 Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.009397 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.014661 4786 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.014978 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.015190 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.015231 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.116786 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.117267 4786 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.117328 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.117368 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.117417 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.117469 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.117488 4786 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.117521 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.116930 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.117653 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.117678 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.218749 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.218817 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.218842 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.218879 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.218924 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.218924 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.219003 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.219015 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.219053 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.219053 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.246345 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.247838 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.248829 4786 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4f6f4abb61df9828cb9bec57c2f2fc4709a6cf12afc72467af5466723a4335f3" exitCode=0 Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.248867 4786 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a164888526a3ab84fbd409540bc54403fadd586a0583e323d5ca1b10c81cbfc4" exitCode=0 Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.248875 4786 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="f57fe092832b333e8fa6389429ad3fd571c3e0162f619fdd69efcb0b84e88bac" exitCode=0 Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.248883 4786 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d7b3741b79ca6fb3fc12acaa290b9d7accb765f36a88f0b8fb6e6362c6596a58" exitCode=2 Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.248887 4786 scope.go:117] "RemoveContainer" containerID="ae97ff507787476580cc1d0a0dbfc31da7bdae9f1cb5be08f59deb9241b4700e" Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.251126 4786 generic.go:334] "Generic (PLEG): container finished" podID="11cf143d-97ba-47c9-ab66-df928f596a42" containerID="4356f032ad73376bd0c6a4748e7a6f793057cbecf14f2b9a9442738aa7c1ac16" exitCode=0 Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.251179 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" 
event={"ID":"11cf143d-97ba-47c9-ab66-df928f596a42","Type":"ContainerDied","Data":"4356f032ad73376bd0c6a4748e7a6f793057cbecf14f2b9a9442738aa7c1ac16"} Dec 09 08:47:50 crc kubenswrapper[4786]: I1209 08:47:50.252554 4786 status_manager.go:851] "Failed to get status for pod" podUID="11cf143d-97ba-47c9-ab66-df928f596a42" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" Dec 09 08:47:51 crc kubenswrapper[4786]: I1209 08:47:51.262118 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 09 08:47:51 crc kubenswrapper[4786]: I1209 08:47:51.536491 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 09 08:47:51 crc kubenswrapper[4786]: I1209 08:47:51.538461 4786 status_manager.go:851] "Failed to get status for pod" podUID="11cf143d-97ba-47c9-ab66-df928f596a42" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" Dec 09 08:47:51 crc kubenswrapper[4786]: I1209 08:47:51.639681 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/11cf143d-97ba-47c9-ab66-df928f596a42-kubelet-dir\") pod \"11cf143d-97ba-47c9-ab66-df928f596a42\" (UID: \"11cf143d-97ba-47c9-ab66-df928f596a42\") " Dec 09 08:47:51 crc kubenswrapper[4786]: I1209 08:47:51.639770 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/11cf143d-97ba-47c9-ab66-df928f596a42-kube-api-access\") pod 
\"11cf143d-97ba-47c9-ab66-df928f596a42\" (UID: \"11cf143d-97ba-47c9-ab66-df928f596a42\") " Dec 09 08:47:51 crc kubenswrapper[4786]: I1209 08:47:51.639838 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/11cf143d-97ba-47c9-ab66-df928f596a42-var-lock\") pod \"11cf143d-97ba-47c9-ab66-df928f596a42\" (UID: \"11cf143d-97ba-47c9-ab66-df928f596a42\") " Dec 09 08:47:51 crc kubenswrapper[4786]: I1209 08:47:51.639857 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11cf143d-97ba-47c9-ab66-df928f596a42-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "11cf143d-97ba-47c9-ab66-df928f596a42" (UID: "11cf143d-97ba-47c9-ab66-df928f596a42"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 08:47:51 crc kubenswrapper[4786]: I1209 08:47:51.639949 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11cf143d-97ba-47c9-ab66-df928f596a42-var-lock" (OuterVolumeSpecName: "var-lock") pod "11cf143d-97ba-47c9-ab66-df928f596a42" (UID: "11cf143d-97ba-47c9-ab66-df928f596a42"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 08:47:51 crc kubenswrapper[4786]: I1209 08:47:51.640544 4786 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/11cf143d-97ba-47c9-ab66-df928f596a42-var-lock\") on node \"crc\" DevicePath \"\"" Dec 09 08:47:51 crc kubenswrapper[4786]: I1209 08:47:51.640570 4786 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/11cf143d-97ba-47c9-ab66-df928f596a42-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 09 08:47:51 crc kubenswrapper[4786]: I1209 08:47:51.648835 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11cf143d-97ba-47c9-ab66-df928f596a42-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "11cf143d-97ba-47c9-ab66-df928f596a42" (UID: "11cf143d-97ba-47c9-ab66-df928f596a42"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 08:47:51 crc kubenswrapper[4786]: I1209 08:47:51.741793 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/11cf143d-97ba-47c9-ab66-df928f596a42-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 09 08:47:51 crc kubenswrapper[4786]: E1209 08:47:51.751148 4786 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.245:6443: connect: connection refused" Dec 09 08:47:51 crc kubenswrapper[4786]: E1209 08:47:51.751698 4786 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.245:6443: connect: connection refused" Dec 09 08:47:51 crc kubenswrapper[4786]: E1209 08:47:51.752556 4786 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.245:6443: connect: connection refused" Dec 09 08:47:51 crc kubenswrapper[4786]: E1209 08:47:51.752871 4786 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.245:6443: connect: connection refused" Dec 09 08:47:51 crc kubenswrapper[4786]: E1209 08:47:51.753211 4786 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.245:6443: connect: connection refused" Dec 09 08:47:51 crc kubenswrapper[4786]: I1209 08:47:51.753269 4786 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 09 08:47:51 crc kubenswrapper[4786]: E1209 08:47:51.753624 4786 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.245:6443: connect: connection refused" interval="200ms" Dec 09 08:47:51 crc kubenswrapper[4786]: E1209 08:47:51.954918 4786 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.245:6443: connect: connection refused" interval="400ms" Dec 09 08:47:52 crc kubenswrapper[4786]: I1209 08:47:52.270188 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"11cf143d-97ba-47c9-ab66-df928f596a42","Type":"ContainerDied","Data":"ffe1ea0fb1d3d3132a3e3c6f647a6b1972e7756f08d226cdec903231538fcfcd"} Dec 09 08:47:52 crc kubenswrapper[4786]: I1209 08:47:52.270536 
4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ffe1ea0fb1d3d3132a3e3c6f647a6b1972e7756f08d226cdec903231538fcfcd" Dec 09 08:47:52 crc kubenswrapper[4786]: I1209 08:47:52.270283 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 09 08:47:52 crc kubenswrapper[4786]: I1209 08:47:52.285819 4786 status_manager.go:851] "Failed to get status for pod" podUID="11cf143d-97ba-47c9-ab66-df928f596a42" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" Dec 09 08:47:52 crc kubenswrapper[4786]: E1209 08:47:52.357194 4786 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.245:6443: connect: connection refused" interval="800ms" Dec 09 08:47:52 crc kubenswrapper[4786]: I1209 08:47:52.956286 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 09 08:47:52 crc kubenswrapper[4786]: I1209 08:47:52.956961 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 08:47:52 crc kubenswrapper[4786]: I1209 08:47:52.957408 4786 status_manager.go:851] "Failed to get status for pod" podUID="11cf143d-97ba-47c9-ab66-df928f596a42" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" Dec 09 08:47:52 crc kubenswrapper[4786]: I1209 08:47:52.957847 4786 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" Dec 09 08:47:52 crc kubenswrapper[4786]: I1209 08:47:52.959660 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 09 08:47:52 crc kubenswrapper[4786]: I1209 08:47:52.959693 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 09 08:47:52 crc kubenswrapper[4786]: I1209 08:47:52.959710 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 09 08:47:52 crc kubenswrapper[4786]: I1209 08:47:52.959805 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 08:47:52 crc kubenswrapper[4786]: I1209 08:47:52.959841 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 08:47:52 crc kubenswrapper[4786]: I1209 08:47:52.959882 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 08:47:53 crc kubenswrapper[4786]: I1209 08:47:53.060944 4786 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 09 08:47:53 crc kubenswrapper[4786]: I1209 08:47:53.061468 4786 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 09 08:47:53 crc kubenswrapper[4786]: I1209 08:47:53.061478 4786 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 09 08:47:53 crc kubenswrapper[4786]: E1209 08:47:53.159014 4786 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.245:6443: connect: connection refused" interval="1.6s" Dec 09 08:47:53 crc kubenswrapper[4786]: I1209 08:47:53.197557 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Dec 09 08:47:53 crc kubenswrapper[4786]: I1209 08:47:53.282020 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 09 08:47:53 crc kubenswrapper[4786]: I1209 08:47:53.283288 4786 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="56160ed128040dde583c3142616d9af9180427b9e32520d6330fd166e9c4207b" exitCode=0 Dec 09 08:47:53 crc kubenswrapper[4786]: I1209 08:47:53.283342 4786 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 08:47:53 crc kubenswrapper[4786]: I1209 08:47:53.283356 4786 scope.go:117] "RemoveContainer" containerID="4f6f4abb61df9828cb9bec57c2f2fc4709a6cf12afc72467af5466723a4335f3" Dec 09 08:47:53 crc kubenswrapper[4786]: I1209 08:47:53.284014 4786 status_manager.go:851] "Failed to get status for pod" podUID="11cf143d-97ba-47c9-ab66-df928f596a42" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" Dec 09 08:47:53 crc kubenswrapper[4786]: I1209 08:47:53.284232 4786 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" Dec 09 08:47:53 crc kubenswrapper[4786]: I1209 08:47:53.288850 4786 status_manager.go:851] "Failed to get status for pod" podUID="11cf143d-97ba-47c9-ab66-df928f596a42" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" Dec 09 08:47:53 crc kubenswrapper[4786]: I1209 08:47:53.289156 4786 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" Dec 09 08:47:53 crc kubenswrapper[4786]: I1209 08:47:53.303800 4786 scope.go:117] "RemoveContainer" containerID="a164888526a3ab84fbd409540bc54403fadd586a0583e323d5ca1b10c81cbfc4" Dec 09 08:47:53 crc 
kubenswrapper[4786]: I1209 08:47:53.320725 4786 scope.go:117] "RemoveContainer" containerID="f57fe092832b333e8fa6389429ad3fd571c3e0162f619fdd69efcb0b84e88bac" Dec 09 08:47:53 crc kubenswrapper[4786]: I1209 08:47:53.339966 4786 scope.go:117] "RemoveContainer" containerID="d7b3741b79ca6fb3fc12acaa290b9d7accb765f36a88f0b8fb6e6362c6596a58" Dec 09 08:47:53 crc kubenswrapper[4786]: I1209 08:47:53.356293 4786 scope.go:117] "RemoveContainer" containerID="56160ed128040dde583c3142616d9af9180427b9e32520d6330fd166e9c4207b" Dec 09 08:47:53 crc kubenswrapper[4786]: I1209 08:47:53.373471 4786 scope.go:117] "RemoveContainer" containerID="498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341" Dec 09 08:47:53 crc kubenswrapper[4786]: I1209 08:47:53.406555 4786 scope.go:117] "RemoveContainer" containerID="4f6f4abb61df9828cb9bec57c2f2fc4709a6cf12afc72467af5466723a4335f3" Dec 09 08:47:53 crc kubenswrapper[4786]: E1209 08:47:53.407200 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f6f4abb61df9828cb9bec57c2f2fc4709a6cf12afc72467af5466723a4335f3\": container with ID starting with 4f6f4abb61df9828cb9bec57c2f2fc4709a6cf12afc72467af5466723a4335f3 not found: ID does not exist" containerID="4f6f4abb61df9828cb9bec57c2f2fc4709a6cf12afc72467af5466723a4335f3" Dec 09 08:47:53 crc kubenswrapper[4786]: I1209 08:47:53.407252 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f6f4abb61df9828cb9bec57c2f2fc4709a6cf12afc72467af5466723a4335f3"} err="failed to get container status \"4f6f4abb61df9828cb9bec57c2f2fc4709a6cf12afc72467af5466723a4335f3\": rpc error: code = NotFound desc = could not find container \"4f6f4abb61df9828cb9bec57c2f2fc4709a6cf12afc72467af5466723a4335f3\": container with ID starting with 4f6f4abb61df9828cb9bec57c2f2fc4709a6cf12afc72467af5466723a4335f3 not found: ID does not exist" Dec 09 08:47:53 crc kubenswrapper[4786]: I1209 08:47:53.407298 
4786 scope.go:117] "RemoveContainer" containerID="a164888526a3ab84fbd409540bc54403fadd586a0583e323d5ca1b10c81cbfc4" Dec 09 08:47:53 crc kubenswrapper[4786]: E1209 08:47:53.408067 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a164888526a3ab84fbd409540bc54403fadd586a0583e323d5ca1b10c81cbfc4\": container with ID starting with a164888526a3ab84fbd409540bc54403fadd586a0583e323d5ca1b10c81cbfc4 not found: ID does not exist" containerID="a164888526a3ab84fbd409540bc54403fadd586a0583e323d5ca1b10c81cbfc4" Dec 09 08:47:53 crc kubenswrapper[4786]: I1209 08:47:53.408088 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a164888526a3ab84fbd409540bc54403fadd586a0583e323d5ca1b10c81cbfc4"} err="failed to get container status \"a164888526a3ab84fbd409540bc54403fadd586a0583e323d5ca1b10c81cbfc4\": rpc error: code = NotFound desc = could not find container \"a164888526a3ab84fbd409540bc54403fadd586a0583e323d5ca1b10c81cbfc4\": container with ID starting with a164888526a3ab84fbd409540bc54403fadd586a0583e323d5ca1b10c81cbfc4 not found: ID does not exist" Dec 09 08:47:53 crc kubenswrapper[4786]: I1209 08:47:53.408107 4786 scope.go:117] "RemoveContainer" containerID="f57fe092832b333e8fa6389429ad3fd571c3e0162f619fdd69efcb0b84e88bac" Dec 09 08:47:53 crc kubenswrapper[4786]: E1209 08:47:53.408689 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f57fe092832b333e8fa6389429ad3fd571c3e0162f619fdd69efcb0b84e88bac\": container with ID starting with f57fe092832b333e8fa6389429ad3fd571c3e0162f619fdd69efcb0b84e88bac not found: ID does not exist" containerID="f57fe092832b333e8fa6389429ad3fd571c3e0162f619fdd69efcb0b84e88bac" Dec 09 08:47:53 crc kubenswrapper[4786]: I1209 08:47:53.408737 4786 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f57fe092832b333e8fa6389429ad3fd571c3e0162f619fdd69efcb0b84e88bac"} err="failed to get container status \"f57fe092832b333e8fa6389429ad3fd571c3e0162f619fdd69efcb0b84e88bac\": rpc error: code = NotFound desc = could not find container \"f57fe092832b333e8fa6389429ad3fd571c3e0162f619fdd69efcb0b84e88bac\": container with ID starting with f57fe092832b333e8fa6389429ad3fd571c3e0162f619fdd69efcb0b84e88bac not found: ID does not exist" Dec 09 08:47:53 crc kubenswrapper[4786]: I1209 08:47:53.408790 4786 scope.go:117] "RemoveContainer" containerID="d7b3741b79ca6fb3fc12acaa290b9d7accb765f36a88f0b8fb6e6362c6596a58" Dec 09 08:47:53 crc kubenswrapper[4786]: E1209 08:47:53.409209 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7b3741b79ca6fb3fc12acaa290b9d7accb765f36a88f0b8fb6e6362c6596a58\": container with ID starting with d7b3741b79ca6fb3fc12acaa290b9d7accb765f36a88f0b8fb6e6362c6596a58 not found: ID does not exist" containerID="d7b3741b79ca6fb3fc12acaa290b9d7accb765f36a88f0b8fb6e6362c6596a58" Dec 09 08:47:53 crc kubenswrapper[4786]: I1209 08:47:53.409236 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7b3741b79ca6fb3fc12acaa290b9d7accb765f36a88f0b8fb6e6362c6596a58"} err="failed to get container status \"d7b3741b79ca6fb3fc12acaa290b9d7accb765f36a88f0b8fb6e6362c6596a58\": rpc error: code = NotFound desc = could not find container \"d7b3741b79ca6fb3fc12acaa290b9d7accb765f36a88f0b8fb6e6362c6596a58\": container with ID starting with d7b3741b79ca6fb3fc12acaa290b9d7accb765f36a88f0b8fb6e6362c6596a58 not found: ID does not exist" Dec 09 08:47:53 crc kubenswrapper[4786]: I1209 08:47:53.409257 4786 scope.go:117] "RemoveContainer" containerID="56160ed128040dde583c3142616d9af9180427b9e32520d6330fd166e9c4207b" Dec 09 08:47:53 crc kubenswrapper[4786]: E1209 08:47:53.409655 4786 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"56160ed128040dde583c3142616d9af9180427b9e32520d6330fd166e9c4207b\": container with ID starting with 56160ed128040dde583c3142616d9af9180427b9e32520d6330fd166e9c4207b not found: ID does not exist" containerID="56160ed128040dde583c3142616d9af9180427b9e32520d6330fd166e9c4207b" Dec 09 08:47:53 crc kubenswrapper[4786]: I1209 08:47:53.409679 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56160ed128040dde583c3142616d9af9180427b9e32520d6330fd166e9c4207b"} err="failed to get container status \"56160ed128040dde583c3142616d9af9180427b9e32520d6330fd166e9c4207b\": rpc error: code = NotFound desc = could not find container \"56160ed128040dde583c3142616d9af9180427b9e32520d6330fd166e9c4207b\": container with ID starting with 56160ed128040dde583c3142616d9af9180427b9e32520d6330fd166e9c4207b not found: ID does not exist" Dec 09 08:47:53 crc kubenswrapper[4786]: I1209 08:47:53.409697 4786 scope.go:117] "RemoveContainer" containerID="498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341" Dec 09 08:47:53 crc kubenswrapper[4786]: E1209 08:47:53.409985 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\": container with ID starting with 498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341 not found: ID does not exist" containerID="498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341" Dec 09 08:47:53 crc kubenswrapper[4786]: I1209 08:47:53.410011 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341"} err="failed to get container status \"498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\": rpc error: code = NotFound desc = could not find container 
\"498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341\": container with ID starting with 498159dfdc89366661bfac4c0c796abfc9737159190cf3281bcb4882cec32341 not found: ID does not exist" Dec 09 08:47:54 crc kubenswrapper[4786]: E1209 08:47:54.760598 4786 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.245:6443: connect: connection refused" interval="3.2s" Dec 09 08:47:55 crc kubenswrapper[4786]: E1209 08:47:55.059601 4786 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.245:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 08:47:55 crc kubenswrapper[4786]: I1209 08:47:55.060113 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 08:47:55 crc kubenswrapper[4786]: E1209 08:47:55.082223 4786 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.245:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187f7fc7cddadb1b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 08:47:55.081415451 +0000 UTC m=+240.965036677,LastTimestamp:2025-12-09 08:47:55.081415451 +0000 UTC m=+240.965036677,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 08:47:55 crc kubenswrapper[4786]: I1209 08:47:55.190370 4786 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" Dec 09 08:47:55 crc kubenswrapper[4786]: I1209 08:47:55.190981 4786 status_manager.go:851] "Failed to get status for pod" podUID="11cf143d-97ba-47c9-ab66-df928f596a42" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" Dec 09 08:47:55 crc kubenswrapper[4786]: I1209 08:47:55.303299 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"fa179a2db2a43f8b1a55ca0cf5416091752694edac3c3074adc020f3a8466fb1"} Dec 09 08:47:56 crc kubenswrapper[4786]: I1209 08:47:56.312004 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"a5ac2fcede558f2990ce8303bba6658325f2311f8a1bbc2d660a7e902efd25fc"} Dec 09 08:47:56 crc kubenswrapper[4786]: E1209 08:47:56.313179 4786 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial 
tcp 38.129.56.245:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 08:47:56 crc kubenswrapper[4786]: I1209 08:47:56.313245 4786 status_manager.go:851] "Failed to get status for pod" podUID="11cf143d-97ba-47c9-ab66-df928f596a42" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" Dec 09 08:47:57 crc kubenswrapper[4786]: E1209 08:47:57.320109 4786 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.129.56.245:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 08:47:57 crc kubenswrapper[4786]: E1209 08:47:57.948243 4786 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.129.56.245:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187f7fc7cddadb1b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-09 08:47:55.081415451 +0000 UTC m=+240.965036677,LastTimestamp:2025-12-09 08:47:55.081415451 +0000 UTC m=+240.965036677,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 09 08:47:57 crc kubenswrapper[4786]: E1209 08:47:57.962061 4786 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.129.56.245:6443: connect: connection refused" interval="6.4s" Dec 09 08:48:02 crc kubenswrapper[4786]: I1209 08:48:02.187765 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 08:48:02 crc kubenswrapper[4786]: I1209 08:48:02.189533 4786 status_manager.go:851] "Failed to get status for pod" podUID="11cf143d-97ba-47c9-ab66-df928f596a42" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" Dec 09 08:48:02 crc kubenswrapper[4786]: I1209 08:48:02.205034 4786 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cd790c5e-461b-4682-88a8-e915bb3558fb" Dec 09 08:48:02 crc kubenswrapper[4786]: I1209 08:48:02.205136 4786 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cd790c5e-461b-4682-88a8-e915bb3558fb" Dec 09 08:48:02 crc kubenswrapper[4786]: E1209 08:48:02.205743 4786 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 08:48:02 crc kubenswrapper[4786]: I1209 08:48:02.206233 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 08:48:02 crc kubenswrapper[4786]: I1209 08:48:02.355601 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"c1b713251cc82759a83757a2052d4463a9cdccf78a1eece58a46e38a1ee8fe3a"} Dec 09 08:48:03 crc kubenswrapper[4786]: I1209 08:48:03.364692 4786 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="c6f764d60e01e14911e6754e9a44b47ebaed5ea3eac937792dc03d56e30c1274" exitCode=0 Dec 09 08:48:03 crc kubenswrapper[4786]: I1209 08:48:03.364866 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"c6f764d60e01e14911e6754e9a44b47ebaed5ea3eac937792dc03d56e30c1274"} Dec 09 08:48:03 crc kubenswrapper[4786]: I1209 08:48:03.365267 4786 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cd790c5e-461b-4682-88a8-e915bb3558fb" Dec 09 08:48:03 crc kubenswrapper[4786]: I1209 08:48:03.365299 4786 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cd790c5e-461b-4682-88a8-e915bb3558fb" Dec 09 08:48:03 crc kubenswrapper[4786]: I1209 08:48:03.365875 4786 status_manager.go:851] "Failed to get status for pod" podUID="11cf143d-97ba-47c9-ab66-df928f596a42" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.129.56.245:6443: connect: connection refused" Dec 09 08:48:03 crc kubenswrapper[4786]: E1209 08:48:03.365973 4786 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial 
tcp 38.129.56.245:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 08:48:04 crc kubenswrapper[4786]: I1209 08:48:04.373364 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 09 08:48:04 crc kubenswrapper[4786]: I1209 08:48:04.373707 4786 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="52ce8749c52af3d1b2b156b9adce27a80b026abdb2dc22308199f1e822ec9867" exitCode=1 Dec 09 08:48:04 crc kubenswrapper[4786]: I1209 08:48:04.373815 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"52ce8749c52af3d1b2b156b9adce27a80b026abdb2dc22308199f1e822ec9867"} Dec 09 08:48:04 crc kubenswrapper[4786]: I1209 08:48:04.374594 4786 scope.go:117] "RemoveContainer" containerID="52ce8749c52af3d1b2b156b9adce27a80b026abdb2dc22308199f1e822ec9867" Dec 09 08:48:04 crc kubenswrapper[4786]: I1209 08:48:04.375441 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1f1ce603a085c84ffefe523eb0416059f38f0a9523f7fa998bc302db04cc17e2"} Dec 09 08:48:05 crc kubenswrapper[4786]: I1209 08:48:05.364866 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 08:48:05 crc kubenswrapper[4786]: I1209 08:48:05.393040 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e65aaac76d4379a5298aa6e47f2a062dab456cd46f6e1639dd5e6f93f1a0d588"} Dec 09 08:48:06 crc kubenswrapper[4786]: I1209 
08:48:06.401446 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"83a53c4f823fbc1efe952ecaaf3618ac9ed9a34b820a65c2775a6d4cc9f86c73"} Dec 09 08:48:06 crc kubenswrapper[4786]: I1209 08:48:06.404733 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 09 08:48:06 crc kubenswrapper[4786]: I1209 08:48:06.404810 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"522465be0cefc8f45f23bd39e310d03f8784c9c2a74939c8b819125bea3bdd66"} Dec 09 08:48:08 crc kubenswrapper[4786]: I1209 08:48:08.336837 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-hk4xf" podUID="b6302642-cdce-43da-b8fe-60bbcf3a4eaf" containerName="oauth-openshift" containerID="cri-o://396c1a5ca9530279841f74c25041b29da131c496ee02e3fdf6e160a54eef8d65" gracePeriod=15 Dec 09 08:48:08 crc kubenswrapper[4786]: I1209 08:48:08.423615 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b7ed4827ee9933ef692606e5872342843f209bd5b20fd010c2f2123e08dcc11b"} Dec 09 08:48:08 crc kubenswrapper[4786]: I1209 08:48:08.967085 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 08:48:08 crc kubenswrapper[4786]: I1209 08:48:08.976155 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 08:48:09 crc 
kubenswrapper[4786]: I1209 08:48:09.433567 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"58d9d23ccf0473269ed4b8f1b03a3ff51489e0dcf38ab021095e711451f3dcd8"} Dec 09 08:48:09 crc kubenswrapper[4786]: I1209 08:48:09.435630 4786 generic.go:334] "Generic (PLEG): container finished" podID="b6302642-cdce-43da-b8fe-60bbcf3a4eaf" containerID="396c1a5ca9530279841f74c25041b29da131c496ee02e3fdf6e160a54eef8d65" exitCode=0 Dec 09 08:48:09 crc kubenswrapper[4786]: I1209 08:48:09.435943 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-hk4xf" event={"ID":"b6302642-cdce-43da-b8fe-60bbcf3a4eaf","Type":"ContainerDied","Data":"396c1a5ca9530279841f74c25041b29da131c496ee02e3fdf6e160a54eef8d65"} Dec 09 08:48:09 crc kubenswrapper[4786]: I1209 08:48:09.436114 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 08:48:10 crc kubenswrapper[4786]: I1209 08:48:10.445786 4786 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cd790c5e-461b-4682-88a8-e915bb3558fb" Dec 09 08:48:10 crc kubenswrapper[4786]: I1209 08:48:10.445870 4786 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cd790c5e-461b-4682-88a8-e915bb3558fb" Dec 09 08:48:10 crc kubenswrapper[4786]: I1209 08:48:10.446115 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 08:48:10 crc kubenswrapper[4786]: I1209 08:48:10.458526 4786 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 08:48:10 crc kubenswrapper[4786]: I1209 08:48:10.466336 4786 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd790c5e-461b-4682-88a8-e915bb3558fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1f1ce603a085c84ffefe523eb0416059f38f0a9523f7fa998bc302db04cc17e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:48:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://83a53c4f823fbc1efe952ecaaf3618ac9ed9a34b820a65c2775a6d4cc9f86c73\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee
1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:48:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e65aaac76d4379a5298aa6e47f2a062dab456cd46f6e1639dd5e6f93f1a0d588\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:48:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d9d23ccf0473269ed4b8f1b03a3ff51489e0dcf38ab021095e711451f3dcd8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:48:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7ed4827ee9933ef692606e5872342843f209bd5b20fd010c2f2123e08dcc11b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-09T08:48:07Z\\\"}}}],\\\"phase\\\":\\\"Running\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": pods \"kube-apiserver-crc\" not found" Dec 09 08:48:10 crc kubenswrapper[4786]: I1209 08:48:10.830415 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-hk4xf" Dec 09 08:48:10 crc kubenswrapper[4786]: I1209 08:48:10.836254 4786 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="2b1e811e-0e07-4738-b293-771aef1d4a32" Dec 09 08:48:10 crc kubenswrapper[4786]: I1209 08:48:10.925389 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-user-template-error\") pod \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\" (UID: \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\") " Dec 09 08:48:10 crc kubenswrapper[4786]: I1209 08:48:10.925456 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-user-template-login\") pod \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\" (UID: \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\") " Dec 09 08:48:10 crc kubenswrapper[4786]: I1209 08:48:10.925484 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-system-router-certs\") pod \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\" (UID: \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\") " Dec 09 08:48:10 crc kubenswrapper[4786]: I1209 08:48:10.925513 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-system-trusted-ca-bundle\") pod \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\" (UID: \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\") " Dec 09 08:48:10 crc kubenswrapper[4786]: I1209 08:48:10.925537 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-system-session\") pod \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\" (UID: \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\") " Dec 09 08:48:10 crc kubenswrapper[4786]: I1209 08:48:10.925588 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-system-ocp-branding-template\") pod \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\" (UID: \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\") " Dec 09 08:48:10 crc kubenswrapper[4786]: I1209 08:48:10.925610 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-audit-policies\") pod \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\" (UID: \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\") " Dec 09 08:48:10 crc kubenswrapper[4786]: I1209 08:48:10.925632 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtw8z\" (UniqueName: \"kubernetes.io/projected/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-kube-api-access-rtw8z\") pod \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\" (UID: \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\") " Dec 09 08:48:10 crc kubenswrapper[4786]: I1209 08:48:10.925650 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-system-service-ca\") pod \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\" (UID: \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\") " Dec 09 08:48:10 crc kubenswrapper[4786]: I1209 08:48:10.925670 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-user-idp-0-file-data\") pod \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\" (UID: \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\") " Dec 09 08:48:10 crc kubenswrapper[4786]: I1209 08:48:10.925696 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-audit-dir\") pod \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\" (UID: \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\") " Dec 09 08:48:10 crc kubenswrapper[4786]: I1209 08:48:10.925722 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-system-cliconfig\") pod \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\" (UID: 
\"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\") " Dec 09 08:48:10 crc kubenswrapper[4786]: I1209 08:48:10.925757 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-user-template-provider-selection\") pod \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\" (UID: \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\") " Dec 09 08:48:10 crc kubenswrapper[4786]: I1209 08:48:10.925780 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-system-serving-cert\") pod \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\" (UID: \"b6302642-cdce-43da-b8fe-60bbcf3a4eaf\") " Dec 09 08:48:10 crc kubenswrapper[4786]: I1209 08:48:10.926970 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "b6302642-cdce-43da-b8fe-60bbcf3a4eaf" (UID: "b6302642-cdce-43da-b8fe-60bbcf3a4eaf"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 08:48:10 crc kubenswrapper[4786]: I1209 08:48:10.927450 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "b6302642-cdce-43da-b8fe-60bbcf3a4eaf" (UID: "b6302642-cdce-43da-b8fe-60bbcf3a4eaf"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 08:48:10 crc kubenswrapper[4786]: I1209 08:48:10.927643 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "b6302642-cdce-43da-b8fe-60bbcf3a4eaf" (UID: "b6302642-cdce-43da-b8fe-60bbcf3a4eaf"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 08:48:10 crc kubenswrapper[4786]: I1209 08:48:10.927655 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "b6302642-cdce-43da-b8fe-60bbcf3a4eaf" (UID: "b6302642-cdce-43da-b8fe-60bbcf3a4eaf"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 08:48:10 crc kubenswrapper[4786]: I1209 08:48:10.927717 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "b6302642-cdce-43da-b8fe-60bbcf3a4eaf" (UID: "b6302642-cdce-43da-b8fe-60bbcf3a4eaf"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 08:48:10 crc kubenswrapper[4786]: I1209 08:48:10.933620 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "b6302642-cdce-43da-b8fe-60bbcf3a4eaf" (UID: "b6302642-cdce-43da-b8fe-60bbcf3a4eaf"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 08:48:10 crc kubenswrapper[4786]: I1209 08:48:10.933655 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-kube-api-access-rtw8z" (OuterVolumeSpecName: "kube-api-access-rtw8z") pod "b6302642-cdce-43da-b8fe-60bbcf3a4eaf" (UID: "b6302642-cdce-43da-b8fe-60bbcf3a4eaf"). InnerVolumeSpecName "kube-api-access-rtw8z". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 08:48:10 crc kubenswrapper[4786]: I1209 08:48:10.933955 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "b6302642-cdce-43da-b8fe-60bbcf3a4eaf" (UID: "b6302642-cdce-43da-b8fe-60bbcf3a4eaf"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 08:48:10 crc kubenswrapper[4786]: I1209 08:48:10.934245 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "b6302642-cdce-43da-b8fe-60bbcf3a4eaf" (UID: "b6302642-cdce-43da-b8fe-60bbcf3a4eaf"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 08:48:10 crc kubenswrapper[4786]: I1209 08:48:10.934555 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "b6302642-cdce-43da-b8fe-60bbcf3a4eaf" (UID: "b6302642-cdce-43da-b8fe-60bbcf3a4eaf"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 08:48:10 crc kubenswrapper[4786]: I1209 08:48:10.934751 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "b6302642-cdce-43da-b8fe-60bbcf3a4eaf" (UID: "b6302642-cdce-43da-b8fe-60bbcf3a4eaf"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 08:48:10 crc kubenswrapper[4786]: I1209 08:48:10.934884 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "b6302642-cdce-43da-b8fe-60bbcf3a4eaf" (UID: "b6302642-cdce-43da-b8fe-60bbcf3a4eaf"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 08:48:10 crc kubenswrapper[4786]: I1209 08:48:10.935132 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "b6302642-cdce-43da-b8fe-60bbcf3a4eaf" (UID: "b6302642-cdce-43da-b8fe-60bbcf3a4eaf"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 08:48:10 crc kubenswrapper[4786]: I1209 08:48:10.939755 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "b6302642-cdce-43da-b8fe-60bbcf3a4eaf" (UID: "b6302642-cdce-43da-b8fe-60bbcf3a4eaf"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 08:48:11 crc kubenswrapper[4786]: I1209 08:48:11.027120 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 09 08:48:11 crc kubenswrapper[4786]: I1209 08:48:11.027162 4786 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 09 08:48:11 crc kubenswrapper[4786]: I1209 08:48:11.027177 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtw8z\" (UniqueName: \"kubernetes.io/projected/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-kube-api-access-rtw8z\") on node \"crc\" DevicePath \"\"" Dec 09 08:48:11 crc kubenswrapper[4786]: I1209 08:48:11.027222 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 09 08:48:11 crc kubenswrapper[4786]: I1209 08:48:11.027231 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 09 08:48:11 crc kubenswrapper[4786]: I1209 08:48:11.027240 4786 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 09 08:48:11 crc kubenswrapper[4786]: I1209 08:48:11.027248 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 09 08:48:11 crc kubenswrapper[4786]: I1209 08:48:11.027258 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 09 08:48:11 crc kubenswrapper[4786]: I1209 08:48:11.027268 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 08:48:11 crc kubenswrapper[4786]: I1209 08:48:11.027278 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 09 08:48:11 crc kubenswrapper[4786]: I1209 08:48:11.027287 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 09 08:48:11 crc kubenswrapper[4786]: I1209 08:48:11.027295 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 09 08:48:11 crc kubenswrapper[4786]: I1209 08:48:11.027304 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 08:48:11 crc 
kubenswrapper[4786]: I1209 08:48:11.027312 4786 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b6302642-cdce-43da-b8fe-60bbcf3a4eaf-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 09 08:48:11 crc kubenswrapper[4786]: I1209 08:48:11.450352 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-hk4xf" event={"ID":"b6302642-cdce-43da-b8fe-60bbcf3a4eaf","Type":"ContainerDied","Data":"2b118a9b851ed957462cc4dc06158674b2f966586a2d7ba049b559a35f2baaa5"} Dec 09 08:48:11 crc kubenswrapper[4786]: I1209 08:48:11.450370 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-hk4xf" Dec 09 08:48:11 crc kubenswrapper[4786]: I1209 08:48:11.450625 4786 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cd790c5e-461b-4682-88a8-e915bb3558fb" Dec 09 08:48:11 crc kubenswrapper[4786]: I1209 08:48:11.450650 4786 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cd790c5e-461b-4682-88a8-e915bb3558fb" Dec 09 08:48:11 crc kubenswrapper[4786]: I1209 08:48:11.450469 4786 scope.go:117] "RemoveContainer" containerID="396c1a5ca9530279841f74c25041b29da131c496ee02e3fdf6e160a54eef8d65" Dec 09 08:48:12 crc kubenswrapper[4786]: I1209 08:48:12.206739 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 08:48:12 crc kubenswrapper[4786]: I1209 08:48:12.207159 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 08:48:12 crc kubenswrapper[4786]: I1209 08:48:12.213304 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 08:48:12 crc kubenswrapper[4786]: 
I1209 08:48:12.458903 4786 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cd790c5e-461b-4682-88a8-e915bb3558fb" Dec 09 08:48:12 crc kubenswrapper[4786]: I1209 08:48:12.458953 4786 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cd790c5e-461b-4682-88a8-e915bb3558fb" Dec 09 08:48:12 crc kubenswrapper[4786]: I1209 08:48:12.466069 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 08:48:12 crc kubenswrapper[4786]: I1209 08:48:12.471072 4786 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="2b1e811e-0e07-4738-b293-771aef1d4a32" Dec 09 08:48:13 crc kubenswrapper[4786]: I1209 08:48:13.464019 4786 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cd790c5e-461b-4682-88a8-e915bb3558fb" Dec 09 08:48:13 crc kubenswrapper[4786]: I1209 08:48:13.464057 4786 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cd790c5e-461b-4682-88a8-e915bb3558fb" Dec 09 08:48:14 crc kubenswrapper[4786]: I1209 08:48:14.472637 4786 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cd790c5e-461b-4682-88a8-e915bb3558fb" Dec 09 08:48:14 crc kubenswrapper[4786]: I1209 08:48:14.472698 4786 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="cd790c5e-461b-4682-88a8-e915bb3558fb" Dec 09 08:48:15 crc kubenswrapper[4786]: I1209 08:48:15.198651 4786 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="2b1e811e-0e07-4738-b293-771aef1d4a32" Dec 09 08:48:15 crc 
kubenswrapper[4786]: I1209 08:48:15.372124 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 09 08:48:21 crc kubenswrapper[4786]: I1209 08:48:21.581392 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 09 08:48:21 crc kubenswrapper[4786]: I1209 08:48:21.761692 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 09 08:48:21 crc kubenswrapper[4786]: I1209 08:48:21.776829 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 09 08:48:21 crc kubenswrapper[4786]: I1209 08:48:21.876801 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 09 08:48:22 crc kubenswrapper[4786]: I1209 08:48:22.079542 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 09 08:48:22 crc kubenswrapper[4786]: I1209 08:48:22.203072 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 09 08:48:22 crc kubenswrapper[4786]: I1209 08:48:22.615349 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 09 08:48:22 crc kubenswrapper[4786]: I1209 08:48:22.698674 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 09 08:48:22 crc kubenswrapper[4786]: I1209 08:48:22.764040 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 09 08:48:23 crc kubenswrapper[4786]: I1209 08:48:23.120723 4786 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 09 08:48:23 crc kubenswrapper[4786]: I1209 08:48:23.135239 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 09 08:48:23 crc kubenswrapper[4786]: I1209 08:48:23.223405 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 09 08:48:23 crc kubenswrapper[4786]: I1209 08:48:23.329540 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 09 08:48:23 crc kubenswrapper[4786]: I1209 08:48:23.354957 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 09 08:48:23 crc kubenswrapper[4786]: I1209 08:48:23.469792 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 09 08:48:23 crc kubenswrapper[4786]: I1209 08:48:23.561674 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 09 08:48:23 crc kubenswrapper[4786]: I1209 08:48:23.570885 4786 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 09 08:48:23 crc kubenswrapper[4786]: I1209 08:48:23.595445 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 09 08:48:23 crc kubenswrapper[4786]: I1209 08:48:23.706197 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 09 08:48:23 crc kubenswrapper[4786]: I1209 08:48:23.936250 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 09 08:48:23 crc kubenswrapper[4786]: I1209 08:48:23.946210 4786 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 09 08:48:23 crc kubenswrapper[4786]: I1209 08:48:23.961974 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 09 08:48:24 crc kubenswrapper[4786]: I1209 08:48:24.092341 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 09 08:48:24 crc kubenswrapper[4786]: I1209 08:48:24.123142 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 09 08:48:24 crc kubenswrapper[4786]: I1209 08:48:24.223115 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 09 08:48:24 crc kubenswrapper[4786]: I1209 08:48:24.440956 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 09 08:48:24 crc kubenswrapper[4786]: I1209 08:48:24.503488 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 09 08:48:24 crc kubenswrapper[4786]: I1209 08:48:24.619313 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 09 08:48:24 crc kubenswrapper[4786]: I1209 08:48:24.809001 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 09 08:48:24 crc kubenswrapper[4786]: I1209 08:48:24.822306 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 09 08:48:24 crc kubenswrapper[4786]: I1209 08:48:24.824629 4786 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 09 08:48:24 crc kubenswrapper[4786]: I1209 08:48:24.848916 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 09 08:48:24 crc kubenswrapper[4786]: I1209 08:48:24.850137 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 09 08:48:24 crc kubenswrapper[4786]: I1209 08:48:24.850686 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 09 08:48:24 crc kubenswrapper[4786]: I1209 08:48:24.859302 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 09 08:48:24 crc kubenswrapper[4786]: I1209 08:48:24.918894 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 09 08:48:25 crc kubenswrapper[4786]: I1209 08:48:25.135578 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 09 08:48:25 crc kubenswrapper[4786]: I1209 08:48:25.144140 4786 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 09 08:48:25 crc kubenswrapper[4786]: I1209 08:48:25.148947 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-hk4xf"] Dec 09 08:48:25 crc kubenswrapper[4786]: I1209 08:48:25.149045 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 09 08:48:25 crc kubenswrapper[4786]: I1209 08:48:25.153128 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 09 08:48:25 crc kubenswrapper[4786]: I1209 08:48:25.166168 4786 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=15.16614879 podStartE2EDuration="15.16614879s" podCreationTimestamp="2025-12-09 08:48:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:48:25.164960631 +0000 UTC m=+271.048581857" watchObservedRunningTime="2025-12-09 08:48:25.16614879 +0000 UTC m=+271.049770016" Dec 09 08:48:25 crc kubenswrapper[4786]: I1209 08:48:25.168681 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 09 08:48:25 crc kubenswrapper[4786]: I1209 08:48:25.173110 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 09 08:48:25 crc kubenswrapper[4786]: I1209 08:48:25.195754 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6302642-cdce-43da-b8fe-60bbcf3a4eaf" path="/var/lib/kubelet/pods/b6302642-cdce-43da-b8fe-60bbcf3a4eaf/volumes" Dec 09 08:48:25 crc kubenswrapper[4786]: I1209 08:48:25.245240 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 09 08:48:25 crc kubenswrapper[4786]: I1209 08:48:25.373414 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 09 08:48:25 crc kubenswrapper[4786]: I1209 08:48:25.431032 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 09 08:48:25 crc kubenswrapper[4786]: I1209 08:48:25.452939 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 09 08:48:25 crc kubenswrapper[4786]: I1209 08:48:25.506353 4786 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress"/"router-metrics-certs-default" Dec 09 08:48:25 crc kubenswrapper[4786]: I1209 08:48:25.519495 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 09 08:48:25 crc kubenswrapper[4786]: I1209 08:48:25.534297 4786 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 09 08:48:25 crc kubenswrapper[4786]: I1209 08:48:25.554369 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 09 08:48:25 crc kubenswrapper[4786]: I1209 08:48:25.586636 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 09 08:48:25 crc kubenswrapper[4786]: I1209 08:48:25.739686 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 09 08:48:25 crc kubenswrapper[4786]: I1209 08:48:25.826206 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 09 08:48:25 crc kubenswrapper[4786]: I1209 08:48:25.841132 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 09 08:48:25 crc kubenswrapper[4786]: I1209 08:48:25.865488 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 09 08:48:25 crc kubenswrapper[4786]: I1209 08:48:25.924945 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 09 08:48:25 crc kubenswrapper[4786]: I1209 08:48:25.950944 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 09 08:48:25 crc kubenswrapper[4786]: I1209 08:48:25.990785 4786 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 09 08:48:26 crc kubenswrapper[4786]: I1209 08:48:26.035787 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 09 08:48:26 crc kubenswrapper[4786]: I1209 08:48:26.067790 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 09 08:48:26 crc kubenswrapper[4786]: I1209 08:48:26.157834 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 09 08:48:26 crc kubenswrapper[4786]: I1209 08:48:26.175274 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 09 08:48:26 crc kubenswrapper[4786]: I1209 08:48:26.239233 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 09 08:48:26 crc kubenswrapper[4786]: I1209 08:48:26.245331 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 09 08:48:26 crc kubenswrapper[4786]: I1209 08:48:26.368944 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 09 08:48:26 crc kubenswrapper[4786]: I1209 08:48:26.379713 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 09 08:48:26 crc kubenswrapper[4786]: I1209 08:48:26.418554 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 09 08:48:26 crc kubenswrapper[4786]: I1209 08:48:26.441276 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 09 08:48:26 crc kubenswrapper[4786]: I1209 
08:48:26.619760 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 09 08:48:26 crc kubenswrapper[4786]: I1209 08:48:26.661014 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 09 08:48:26 crc kubenswrapper[4786]: I1209 08:48:26.702246 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 09 08:48:26 crc kubenswrapper[4786]: I1209 08:48:26.751187 4786 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 09 08:48:26 crc kubenswrapper[4786]: I1209 08:48:26.795583 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 09 08:48:26 crc kubenswrapper[4786]: I1209 08:48:26.796328 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 09 08:48:26 crc kubenswrapper[4786]: I1209 08:48:26.831801 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 09 08:48:26 crc kubenswrapper[4786]: I1209 08:48:26.938135 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 09 08:48:26 crc kubenswrapper[4786]: I1209 08:48:26.984714 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 09 08:48:26 crc kubenswrapper[4786]: I1209 08:48:26.993174 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 09 08:48:27 crc kubenswrapper[4786]: I1209 08:48:27.008994 4786 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 09 08:48:27 crc kubenswrapper[4786]: I1209 08:48:27.087129 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 09 08:48:27 crc kubenswrapper[4786]: I1209 08:48:27.111817 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 09 08:48:27 crc kubenswrapper[4786]: I1209 08:48:27.177489 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 09 08:48:27 crc kubenswrapper[4786]: I1209 08:48:27.249835 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 09 08:48:27 crc kubenswrapper[4786]: I1209 08:48:27.276690 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 09 08:48:27 crc kubenswrapper[4786]: I1209 08:48:27.353353 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 09 08:48:27 crc kubenswrapper[4786]: I1209 08:48:27.398070 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 09 08:48:27 crc kubenswrapper[4786]: I1209 08:48:27.562491 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 09 08:48:27 crc kubenswrapper[4786]: I1209 08:48:27.575243 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 09 08:48:27 crc kubenswrapper[4786]: I1209 08:48:27.580246 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 09 08:48:27 crc 
kubenswrapper[4786]: I1209 08:48:27.619576 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 09 08:48:27 crc kubenswrapper[4786]: I1209 08:48:27.620616 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 09 08:48:27 crc kubenswrapper[4786]: I1209 08:48:27.628855 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 09 08:48:27 crc kubenswrapper[4786]: I1209 08:48:27.653441 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 09 08:48:27 crc kubenswrapper[4786]: I1209 08:48:27.683406 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 09 08:48:27 crc kubenswrapper[4786]: I1209 08:48:27.689763 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 09 08:48:28 crc kubenswrapper[4786]: I1209 08:48:28.019552 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 09 08:48:28 crc kubenswrapper[4786]: I1209 08:48:28.123828 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 09 08:48:28 crc kubenswrapper[4786]: I1209 08:48:28.143456 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 09 08:48:28 crc kubenswrapper[4786]: I1209 08:48:28.152819 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 09 08:48:28 crc kubenswrapper[4786]: I1209 08:48:28.198064 4786 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 09 08:48:28 crc kubenswrapper[4786]: I1209 08:48:28.274145 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 09 08:48:28 crc kubenswrapper[4786]: I1209 08:48:28.277878 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 09 08:48:28 crc kubenswrapper[4786]: I1209 08:48:28.298692 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 09 08:48:28 crc kubenswrapper[4786]: I1209 08:48:28.312010 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 09 08:48:28 crc kubenswrapper[4786]: I1209 08:48:28.325841 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 09 08:48:28 crc kubenswrapper[4786]: I1209 08:48:28.338189 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 09 08:48:28 crc kubenswrapper[4786]: I1209 08:48:28.369672 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 09 08:48:28 crc kubenswrapper[4786]: I1209 08:48:28.428114 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 09 08:48:28 crc kubenswrapper[4786]: I1209 08:48:28.567443 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 09 08:48:28 crc kubenswrapper[4786]: I1209 08:48:28.571357 4786 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 09 08:48:28 crc kubenswrapper[4786]: I1209 08:48:28.642051 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 09 08:48:28 crc kubenswrapper[4786]: I1209 08:48:28.692834 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 09 08:48:28 crc kubenswrapper[4786]: I1209 08:48:28.733628 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 09 08:48:28 crc kubenswrapper[4786]: I1209 08:48:28.776402 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 09 08:48:28 crc kubenswrapper[4786]: I1209 08:48:28.791893 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 09 08:48:28 crc kubenswrapper[4786]: I1209 08:48:28.852638 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 09 08:48:28 crc kubenswrapper[4786]: I1209 08:48:28.856833 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 09 08:48:28 crc kubenswrapper[4786]: I1209 08:48:28.872350 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 09 08:48:28 crc kubenswrapper[4786]: I1209 08:48:28.925412 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 09 08:48:28 crc kubenswrapper[4786]: I1209 08:48:28.935555 4786 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-image-registry"/"installation-pull-secrets" Dec 09 08:48:28 crc kubenswrapper[4786]: I1209 08:48:28.979024 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 09 08:48:29 crc kubenswrapper[4786]: I1209 08:48:29.199016 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 09 08:48:29 crc kubenswrapper[4786]: I1209 08:48:29.252697 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 09 08:48:29 crc kubenswrapper[4786]: I1209 08:48:29.282521 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 09 08:48:29 crc kubenswrapper[4786]: I1209 08:48:29.331968 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 09 08:48:29 crc kubenswrapper[4786]: I1209 08:48:29.333484 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 09 08:48:29 crc kubenswrapper[4786]: I1209 08:48:29.457871 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 09 08:48:29 crc kubenswrapper[4786]: I1209 08:48:29.561131 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 09 08:48:29 crc kubenswrapper[4786]: I1209 08:48:29.608626 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 09 08:48:29 crc kubenswrapper[4786]: I1209 08:48:29.740172 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 09 08:48:29 crc kubenswrapper[4786]: I1209 08:48:29.776560 4786 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 09 08:48:29 crc kubenswrapper[4786]: I1209 08:48:29.853646 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 09 08:48:29 crc kubenswrapper[4786]: I1209 08:48:29.857812 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 09 08:48:29 crc kubenswrapper[4786]: I1209 08:48:29.861640 4786 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 09 08:48:29 crc kubenswrapper[4786]: I1209 08:48:29.867844 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 09 08:48:29 crc kubenswrapper[4786]: I1209 08:48:29.967558 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 09 08:48:30 crc kubenswrapper[4786]: I1209 08:48:30.045817 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 09 08:48:30 crc kubenswrapper[4786]: I1209 08:48:30.110107 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 09 08:48:30 crc kubenswrapper[4786]: I1209 08:48:30.126304 4786 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 09 08:48:30 crc kubenswrapper[4786]: I1209 08:48:30.128948 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 09 08:48:30 crc kubenswrapper[4786]: I1209 08:48:30.328088 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 09 08:48:30 crc kubenswrapper[4786]: I1209 08:48:30.328124 4786 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 09 08:48:30 crc kubenswrapper[4786]: I1209 08:48:30.396919 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 09 08:48:30 crc kubenswrapper[4786]: I1209 08:48:30.400726 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 09 08:48:30 crc kubenswrapper[4786]: I1209 08:48:30.428040 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 09 08:48:30 crc kubenswrapper[4786]: I1209 08:48:30.491405 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 09 08:48:30 crc kubenswrapper[4786]: I1209 08:48:30.518491 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 09 08:48:30 crc kubenswrapper[4786]: I1209 08:48:30.607719 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 09 08:48:30 crc kubenswrapper[4786]: I1209 08:48:30.780298 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 09 08:48:30 crc kubenswrapper[4786]: I1209 08:48:30.870025 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 09 08:48:30 crc kubenswrapper[4786]: I1209 08:48:30.973474 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 09 08:48:30 crc kubenswrapper[4786]: I1209 08:48:30.993641 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 09 08:48:30 crc kubenswrapper[4786]: I1209 
08:48:30.994735 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 09 08:48:31 crc kubenswrapper[4786]: I1209 08:48:31.183890 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 09 08:48:31 crc kubenswrapper[4786]: I1209 08:48:31.283800 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 09 08:48:31 crc kubenswrapper[4786]: I1209 08:48:31.317959 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 09 08:48:31 crc kubenswrapper[4786]: I1209 08:48:31.331698 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 09 08:48:31 crc kubenswrapper[4786]: I1209 08:48:31.414981 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 09 08:48:31 crc kubenswrapper[4786]: I1209 08:48:31.501274 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 09 08:48:31 crc kubenswrapper[4786]: I1209 08:48:31.508942 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 09 08:48:31 crc kubenswrapper[4786]: I1209 08:48:31.511199 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 09 08:48:31 crc kubenswrapper[4786]: I1209 08:48:31.662629 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 09 08:48:31 crc kubenswrapper[4786]: I1209 08:48:31.763636 4786 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver"/"audit-1" Dec 09 08:48:31 crc kubenswrapper[4786]: I1209 08:48:31.795596 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 09 08:48:31 crc kubenswrapper[4786]: I1209 08:48:31.801139 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 09 08:48:31 crc kubenswrapper[4786]: I1209 08:48:31.814661 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 09 08:48:31 crc kubenswrapper[4786]: I1209 08:48:31.853577 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 09 08:48:31 crc kubenswrapper[4786]: I1209 08:48:31.880826 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 09 08:48:31 crc kubenswrapper[4786]: I1209 08:48:31.928566 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 09 08:48:31 crc kubenswrapper[4786]: I1209 08:48:31.947181 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 09 08:48:31 crc kubenswrapper[4786]: I1209 08:48:31.980091 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 09 08:48:32 crc kubenswrapper[4786]: I1209 08:48:32.044729 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 09 08:48:32 crc kubenswrapper[4786]: I1209 08:48:32.065841 4786 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 09 08:48:32 crc kubenswrapper[4786]: I1209 08:48:32.189031 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 09 08:48:32 crc kubenswrapper[4786]: I1209 08:48:32.347755 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 09 08:48:32 crc kubenswrapper[4786]: I1209 08:48:32.455605 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 09 08:48:32 crc kubenswrapper[4786]: I1209 08:48:32.747037 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 09 08:48:32 crc kubenswrapper[4786]: I1209 08:48:32.787443 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 09 08:48:32 crc kubenswrapper[4786]: I1209 08:48:32.843470 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 09 08:48:32 crc kubenswrapper[4786]: I1209 08:48:32.913462 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 09 08:48:32 crc kubenswrapper[4786]: I1209 08:48:32.920715 4786 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 09 08:48:32 crc kubenswrapper[4786]: I1209 08:48:32.921022 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://a5ac2fcede558f2990ce8303bba6658325f2311f8a1bbc2d660a7e902efd25fc" gracePeriod=5 
Dec 09 08:48:32 crc kubenswrapper[4786]: I1209 08:48:32.930807 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 09 08:48:32 crc kubenswrapper[4786]: I1209 08:48:32.968036 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 09 08:48:33 crc kubenswrapper[4786]: I1209 08:48:33.084914 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 09 08:48:33 crc kubenswrapper[4786]: I1209 08:48:33.157482 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 09 08:48:33 crc kubenswrapper[4786]: I1209 08:48:33.159998 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 09 08:48:33 crc kubenswrapper[4786]: I1209 08:48:33.166253 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 09 08:48:33 crc kubenswrapper[4786]: I1209 08:48:33.166635 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 09 08:48:33 crc kubenswrapper[4786]: I1209 08:48:33.276143 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 09 08:48:33 crc kubenswrapper[4786]: I1209 08:48:33.298610 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 09 08:48:33 crc kubenswrapper[4786]: I1209 08:48:33.707602 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 09 08:48:33 crc kubenswrapper[4786]: I1209 08:48:33.802049 4786 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"client-ca" Dec 09 08:48:33 crc kubenswrapper[4786]: I1209 08:48:33.855781 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 09 08:48:33 crc kubenswrapper[4786]: I1209 08:48:33.954277 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 09 08:48:34 crc kubenswrapper[4786]: I1209 08:48:34.010942 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 09 08:48:34 crc kubenswrapper[4786]: I1209 08:48:34.014666 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 09 08:48:34 crc kubenswrapper[4786]: I1209 08:48:34.049838 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 09 08:48:34 crc kubenswrapper[4786]: I1209 08:48:34.097335 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 09 08:48:34 crc kubenswrapper[4786]: I1209 08:48:34.310007 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 09 08:48:34 crc kubenswrapper[4786]: I1209 08:48:34.316196 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 09 08:48:34 crc kubenswrapper[4786]: I1209 08:48:34.412546 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 09 08:48:34 crc kubenswrapper[4786]: I1209 08:48:34.447830 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 09 08:48:34 crc kubenswrapper[4786]: I1209 08:48:34.574704 4786 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-ingress-operator"/"metrics-tls" Dec 09 08:48:34 crc kubenswrapper[4786]: I1209 08:48:34.582038 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 09 08:48:34 crc kubenswrapper[4786]: I1209 08:48:34.599178 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 09 08:48:34 crc kubenswrapper[4786]: I1209 08:48:34.617608 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 09 08:48:34 crc kubenswrapper[4786]: I1209 08:48:34.635694 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 09 08:48:34 crc kubenswrapper[4786]: I1209 08:48:34.685203 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 09 08:48:34 crc kubenswrapper[4786]: I1209 08:48:34.821289 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 09 08:48:34 crc kubenswrapper[4786]: I1209 08:48:34.874652 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 09 08:48:34 crc kubenswrapper[4786]: I1209 08:48:34.967989 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 09 08:48:34 crc kubenswrapper[4786]: I1209 08:48:34.971756 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 09 08:48:34 crc kubenswrapper[4786]: I1209 08:48:34.975261 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 09 08:48:35 crc kubenswrapper[4786]: I1209 08:48:35.031455 4786 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 09 08:48:35 crc kubenswrapper[4786]: I1209 08:48:35.207956 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 09 08:48:35 crc kubenswrapper[4786]: I1209 08:48:35.239526 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 09 08:48:35 crc kubenswrapper[4786]: I1209 08:48:35.287896 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 09 08:48:35 crc kubenswrapper[4786]: I1209 08:48:35.315053 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 09 08:48:35 crc kubenswrapper[4786]: I1209 08:48:35.450514 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 09 08:48:35 crc kubenswrapper[4786]: I1209 08:48:35.650669 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 09 08:48:35 crc kubenswrapper[4786]: I1209 08:48:35.705865 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 09 08:48:35 crc kubenswrapper[4786]: I1209 08:48:35.882345 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 09 08:48:35 crc kubenswrapper[4786]: I1209 08:48:35.932376 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.086993 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 
08:48:36.146866 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.191222 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.257007 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.272650 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.498560 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.751527 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.779939 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.805005 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-857d94f549-sdd5j"] Dec 09 08:48:36 crc kubenswrapper[4786]: E1209 08:48:36.805333 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11cf143d-97ba-47c9-ab66-df928f596a42" containerName="installer" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.805357 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="11cf143d-97ba-47c9-ab66-df928f596a42" containerName="installer" Dec 09 08:48:36 crc kubenswrapper[4786]: E1209 08:48:36.805376 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
containerName="startup-monitor" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.805383 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 09 08:48:36 crc kubenswrapper[4786]: E1209 08:48:36.805399 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6302642-cdce-43da-b8fe-60bbcf3a4eaf" containerName="oauth-openshift" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.805407 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6302642-cdce-43da-b8fe-60bbcf3a4eaf" containerName="oauth-openshift" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.805613 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="11cf143d-97ba-47c9-ab66-df928f596a42" containerName="installer" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.805631 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6302642-cdce-43da-b8fe-60bbcf3a4eaf" containerName="oauth-openshift" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.805650 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.806232 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-857d94f549-sdd5j" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.808345 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.808527 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.809729 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.810093 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.810275 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.810391 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.810497 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.810571 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.810700 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.810776 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 09 08:48:36 crc 
kubenswrapper[4786]: I1209 08:48:36.811655 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.813816 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.835216 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.841217 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-857d94f549-sdd5j"] Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.854991 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.855568 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.890801 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.891480 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcsdb\" (UniqueName: \"kubernetes.io/projected/ec625a75-4b83-4e27-866c-400c8d5b4efd-kube-api-access-lcsdb\") pod \"oauth-openshift-857d94f549-sdd5j\" (UID: \"ec625a75-4b83-4e27-866c-400c8d5b4efd\") " pod="openshift-authentication/oauth-openshift-857d94f549-sdd5j" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.891565 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" 
(UniqueName: \"kubernetes.io/secret/ec625a75-4b83-4e27-866c-400c8d5b4efd-v4-0-config-user-template-error\") pod \"oauth-openshift-857d94f549-sdd5j\" (UID: \"ec625a75-4b83-4e27-866c-400c8d5b4efd\") " pod="openshift-authentication/oauth-openshift-857d94f549-sdd5j" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.891611 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ec625a75-4b83-4e27-866c-400c8d5b4efd-v4-0-config-user-template-login\") pod \"oauth-openshift-857d94f549-sdd5j\" (UID: \"ec625a75-4b83-4e27-866c-400c8d5b4efd\") " pod="openshift-authentication/oauth-openshift-857d94f549-sdd5j" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.891634 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ec625a75-4b83-4e27-866c-400c8d5b4efd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-857d94f549-sdd5j\" (UID: \"ec625a75-4b83-4e27-866c-400c8d5b4efd\") " pod="openshift-authentication/oauth-openshift-857d94f549-sdd5j" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.891664 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec625a75-4b83-4e27-866c-400c8d5b4efd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-857d94f549-sdd5j\" (UID: \"ec625a75-4b83-4e27-866c-400c8d5b4efd\") " pod="openshift-authentication/oauth-openshift-857d94f549-sdd5j" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.891770 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ec625a75-4b83-4e27-866c-400c8d5b4efd-audit-dir\") pod \"oauth-openshift-857d94f549-sdd5j\" (UID: 
\"ec625a75-4b83-4e27-866c-400c8d5b4efd\") " pod="openshift-authentication/oauth-openshift-857d94f549-sdd5j" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.892084 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ec625a75-4b83-4e27-866c-400c8d5b4efd-v4-0-config-system-session\") pod \"oauth-openshift-857d94f549-sdd5j\" (UID: \"ec625a75-4b83-4e27-866c-400c8d5b4efd\") " pod="openshift-authentication/oauth-openshift-857d94f549-sdd5j" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.892141 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ec625a75-4b83-4e27-866c-400c8d5b4efd-audit-policies\") pod \"oauth-openshift-857d94f549-sdd5j\" (UID: \"ec625a75-4b83-4e27-866c-400c8d5b4efd\") " pod="openshift-authentication/oauth-openshift-857d94f549-sdd5j" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.892170 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ec625a75-4b83-4e27-866c-400c8d5b4efd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-857d94f549-sdd5j\" (UID: \"ec625a75-4b83-4e27-866c-400c8d5b4efd\") " pod="openshift-authentication/oauth-openshift-857d94f549-sdd5j" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.892289 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ec625a75-4b83-4e27-866c-400c8d5b4efd-v4-0-config-system-router-certs\") pod \"oauth-openshift-857d94f549-sdd5j\" (UID: \"ec625a75-4b83-4e27-866c-400c8d5b4efd\") " pod="openshift-authentication/oauth-openshift-857d94f549-sdd5j" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.892334 4786 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ec625a75-4b83-4e27-866c-400c8d5b4efd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-857d94f549-sdd5j\" (UID: \"ec625a75-4b83-4e27-866c-400c8d5b4efd\") " pod="openshift-authentication/oauth-openshift-857d94f549-sdd5j" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.892367 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ec625a75-4b83-4e27-866c-400c8d5b4efd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-857d94f549-sdd5j\" (UID: \"ec625a75-4b83-4e27-866c-400c8d5b4efd\") " pod="openshift-authentication/oauth-openshift-857d94f549-sdd5j" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.892408 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ec625a75-4b83-4e27-866c-400c8d5b4efd-v4-0-config-system-service-ca\") pod \"oauth-openshift-857d94f549-sdd5j\" (UID: \"ec625a75-4b83-4e27-866c-400c8d5b4efd\") " pod="openshift-authentication/oauth-openshift-857d94f549-sdd5j" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.892485 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ec625a75-4b83-4e27-866c-400c8d5b4efd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-857d94f549-sdd5j\" (UID: \"ec625a75-4b83-4e27-866c-400c8d5b4efd\") " pod="openshift-authentication/oauth-openshift-857d94f549-sdd5j" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.993741 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcsdb\" 
(UniqueName: \"kubernetes.io/projected/ec625a75-4b83-4e27-866c-400c8d5b4efd-kube-api-access-lcsdb\") pod \"oauth-openshift-857d94f549-sdd5j\" (UID: \"ec625a75-4b83-4e27-866c-400c8d5b4efd\") " pod="openshift-authentication/oauth-openshift-857d94f549-sdd5j" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.993806 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ec625a75-4b83-4e27-866c-400c8d5b4efd-v4-0-config-user-template-error\") pod \"oauth-openshift-857d94f549-sdd5j\" (UID: \"ec625a75-4b83-4e27-866c-400c8d5b4efd\") " pod="openshift-authentication/oauth-openshift-857d94f549-sdd5j" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.993832 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ec625a75-4b83-4e27-866c-400c8d5b4efd-v4-0-config-user-template-login\") pod \"oauth-openshift-857d94f549-sdd5j\" (UID: \"ec625a75-4b83-4e27-866c-400c8d5b4efd\") " pod="openshift-authentication/oauth-openshift-857d94f549-sdd5j" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.993849 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ec625a75-4b83-4e27-866c-400c8d5b4efd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-857d94f549-sdd5j\" (UID: \"ec625a75-4b83-4e27-866c-400c8d5b4efd\") " pod="openshift-authentication/oauth-openshift-857d94f549-sdd5j" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.993926 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec625a75-4b83-4e27-866c-400c8d5b4efd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-857d94f549-sdd5j\" (UID: \"ec625a75-4b83-4e27-866c-400c8d5b4efd\") " 
pod="openshift-authentication/oauth-openshift-857d94f549-sdd5j" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.993961 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ec625a75-4b83-4e27-866c-400c8d5b4efd-audit-dir\") pod \"oauth-openshift-857d94f549-sdd5j\" (UID: \"ec625a75-4b83-4e27-866c-400c8d5b4efd\") " pod="openshift-authentication/oauth-openshift-857d94f549-sdd5j" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.993999 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ec625a75-4b83-4e27-866c-400c8d5b4efd-v4-0-config-system-session\") pod \"oauth-openshift-857d94f549-sdd5j\" (UID: \"ec625a75-4b83-4e27-866c-400c8d5b4efd\") " pod="openshift-authentication/oauth-openshift-857d94f549-sdd5j" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.994033 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ec625a75-4b83-4e27-866c-400c8d5b4efd-audit-policies\") pod \"oauth-openshift-857d94f549-sdd5j\" (UID: \"ec625a75-4b83-4e27-866c-400c8d5b4efd\") " pod="openshift-authentication/oauth-openshift-857d94f549-sdd5j" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.994057 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ec625a75-4b83-4e27-866c-400c8d5b4efd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-857d94f549-sdd5j\" (UID: \"ec625a75-4b83-4e27-866c-400c8d5b4efd\") " pod="openshift-authentication/oauth-openshift-857d94f549-sdd5j" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.994084 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/ec625a75-4b83-4e27-866c-400c8d5b4efd-v4-0-config-system-router-certs\") pod \"oauth-openshift-857d94f549-sdd5j\" (UID: \"ec625a75-4b83-4e27-866c-400c8d5b4efd\") " pod="openshift-authentication/oauth-openshift-857d94f549-sdd5j" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.994107 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ec625a75-4b83-4e27-866c-400c8d5b4efd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-857d94f549-sdd5j\" (UID: \"ec625a75-4b83-4e27-866c-400c8d5b4efd\") " pod="openshift-authentication/oauth-openshift-857d94f549-sdd5j" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.994134 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ec625a75-4b83-4e27-866c-400c8d5b4efd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-857d94f549-sdd5j\" (UID: \"ec625a75-4b83-4e27-866c-400c8d5b4efd\") " pod="openshift-authentication/oauth-openshift-857d94f549-sdd5j" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.994167 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ec625a75-4b83-4e27-866c-400c8d5b4efd-v4-0-config-system-service-ca\") pod \"oauth-openshift-857d94f549-sdd5j\" (UID: \"ec625a75-4b83-4e27-866c-400c8d5b4efd\") " pod="openshift-authentication/oauth-openshift-857d94f549-sdd5j" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.994201 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ec625a75-4b83-4e27-866c-400c8d5b4efd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-857d94f549-sdd5j\" (UID: \"ec625a75-4b83-4e27-866c-400c8d5b4efd\") " 
pod="openshift-authentication/oauth-openshift-857d94f549-sdd5j" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.994197 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ec625a75-4b83-4e27-866c-400c8d5b4efd-audit-dir\") pod \"oauth-openshift-857d94f549-sdd5j\" (UID: \"ec625a75-4b83-4e27-866c-400c8d5b4efd\") " pod="openshift-authentication/oauth-openshift-857d94f549-sdd5j" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.995053 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ec625a75-4b83-4e27-866c-400c8d5b4efd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-857d94f549-sdd5j\" (UID: \"ec625a75-4b83-4e27-866c-400c8d5b4efd\") " pod="openshift-authentication/oauth-openshift-857d94f549-sdd5j" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.996632 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ec625a75-4b83-4e27-866c-400c8d5b4efd-v4-0-config-system-service-ca\") pod \"oauth-openshift-857d94f549-sdd5j\" (UID: \"ec625a75-4b83-4e27-866c-400c8d5b4efd\") " pod="openshift-authentication/oauth-openshift-857d94f549-sdd5j" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.996899 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ec625a75-4b83-4e27-866c-400c8d5b4efd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-857d94f549-sdd5j\" (UID: \"ec625a75-4b83-4e27-866c-400c8d5b4efd\") " pod="openshift-authentication/oauth-openshift-857d94f549-sdd5j" Dec 09 08:48:36 crc kubenswrapper[4786]: I1209 08:48:36.998533 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/ec625a75-4b83-4e27-866c-400c8d5b4efd-audit-policies\") pod \"oauth-openshift-857d94f549-sdd5j\" (UID: \"ec625a75-4b83-4e27-866c-400c8d5b4efd\") " pod="openshift-authentication/oauth-openshift-857d94f549-sdd5j" Dec 09 08:48:37 crc kubenswrapper[4786]: I1209 08:48:37.002736 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ec625a75-4b83-4e27-866c-400c8d5b4efd-v4-0-config-user-template-error\") pod \"oauth-openshift-857d94f549-sdd5j\" (UID: \"ec625a75-4b83-4e27-866c-400c8d5b4efd\") " pod="openshift-authentication/oauth-openshift-857d94f549-sdd5j" Dec 09 08:48:37 crc kubenswrapper[4786]: I1209 08:48:37.002821 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ec625a75-4b83-4e27-866c-400c8d5b4efd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-857d94f549-sdd5j\" (UID: \"ec625a75-4b83-4e27-866c-400c8d5b4efd\") " pod="openshift-authentication/oauth-openshift-857d94f549-sdd5j" Dec 09 08:48:37 crc kubenswrapper[4786]: I1209 08:48:37.003414 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ec625a75-4b83-4e27-866c-400c8d5b4efd-v4-0-config-system-router-certs\") pod \"oauth-openshift-857d94f549-sdd5j\" (UID: \"ec625a75-4b83-4e27-866c-400c8d5b4efd\") " pod="openshift-authentication/oauth-openshift-857d94f549-sdd5j" Dec 09 08:48:37 crc kubenswrapper[4786]: I1209 08:48:37.004705 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/ec625a75-4b83-4e27-866c-400c8d5b4efd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-857d94f549-sdd5j\" (UID: \"ec625a75-4b83-4e27-866c-400c8d5b4efd\") " 
pod="openshift-authentication/oauth-openshift-857d94f549-sdd5j" Dec 09 08:48:37 crc kubenswrapper[4786]: I1209 08:48:37.004851 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ec625a75-4b83-4e27-866c-400c8d5b4efd-v4-0-config-user-template-login\") pod \"oauth-openshift-857d94f549-sdd5j\" (UID: \"ec625a75-4b83-4e27-866c-400c8d5b4efd\") " pod="openshift-authentication/oauth-openshift-857d94f549-sdd5j" Dec 09 08:48:37 crc kubenswrapper[4786]: I1209 08:48:37.004925 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ec625a75-4b83-4e27-866c-400c8d5b4efd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-857d94f549-sdd5j\" (UID: \"ec625a75-4b83-4e27-866c-400c8d5b4efd\") " pod="openshift-authentication/oauth-openshift-857d94f549-sdd5j" Dec 09 08:48:37 crc kubenswrapper[4786]: I1209 08:48:37.005166 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ec625a75-4b83-4e27-866c-400c8d5b4efd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-857d94f549-sdd5j\" (UID: \"ec625a75-4b83-4e27-866c-400c8d5b4efd\") " pod="openshift-authentication/oauth-openshift-857d94f549-sdd5j" Dec 09 08:48:37 crc kubenswrapper[4786]: I1209 08:48:37.009024 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ec625a75-4b83-4e27-866c-400c8d5b4efd-v4-0-config-system-session\") pod \"oauth-openshift-857d94f549-sdd5j\" (UID: \"ec625a75-4b83-4e27-866c-400c8d5b4efd\") " pod="openshift-authentication/oauth-openshift-857d94f549-sdd5j" Dec 09 08:48:37 crc kubenswrapper[4786]: I1209 08:48:37.013838 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcsdb\" (UniqueName: 
\"kubernetes.io/projected/ec625a75-4b83-4e27-866c-400c8d5b4efd-kube-api-access-lcsdb\") pod \"oauth-openshift-857d94f549-sdd5j\" (UID: \"ec625a75-4b83-4e27-866c-400c8d5b4efd\") " pod="openshift-authentication/oauth-openshift-857d94f549-sdd5j" Dec 09 08:48:37 crc kubenswrapper[4786]: I1209 08:48:37.057127 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 09 08:48:37 crc kubenswrapper[4786]: I1209 08:48:37.134377 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-857d94f549-sdd5j" Dec 09 08:48:37 crc kubenswrapper[4786]: I1209 08:48:37.235562 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 09 08:48:37 crc kubenswrapper[4786]: I1209 08:48:37.574533 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-857d94f549-sdd5j"] Dec 09 08:48:37 crc kubenswrapper[4786]: I1209 08:48:37.607457 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-857d94f549-sdd5j" event={"ID":"ec625a75-4b83-4e27-866c-400c8d5b4efd","Type":"ContainerStarted","Data":"1cb7b314f8f0f089d512c5344fd0f9cc32891c08e5c5f818b917a899f817a081"} Dec 09 08:48:38 crc kubenswrapper[4786]: I1209 08:48:38.227554 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 09 08:48:38 crc kubenswrapper[4786]: I1209 08:48:38.249908 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 09 08:48:38 crc kubenswrapper[4786]: I1209 08:48:38.279854 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 09 08:48:38 crc kubenswrapper[4786]: I1209 08:48:38.404893 4786 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 09 08:48:38 crc kubenswrapper[4786]: I1209 08:48:38.498504 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 09 08:48:38 crc kubenswrapper[4786]: I1209 08:48:38.498586 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 08:48:38 crc kubenswrapper[4786]: I1209 08:48:38.614578 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 09 08:48:38 crc kubenswrapper[4786]: I1209 08:48:38.614650 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 09 08:48:38 crc kubenswrapper[4786]: I1209 08:48:38.614689 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 09 08:48:38 crc kubenswrapper[4786]: I1209 08:48:38.614744 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 09 08:48:38 crc kubenswrapper[4786]: I1209 08:48:38.614788 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 09 08:48:38 crc kubenswrapper[4786]: I1209 08:48:38.615003 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 08:48:38 crc kubenswrapper[4786]: I1209 08:48:38.615092 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 08:48:38 crc kubenswrapper[4786]: I1209 08:48:38.615182 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 08:48:38 crc kubenswrapper[4786]: I1209 08:48:38.615217 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 08:48:38 crc kubenswrapper[4786]: I1209 08:48:38.615715 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 09 08:48:38 crc kubenswrapper[4786]: I1209 08:48:38.615826 4786 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="a5ac2fcede558f2990ce8303bba6658325f2311f8a1bbc2d660a7e902efd25fc" exitCode=137 Dec 09 08:48:38 crc kubenswrapper[4786]: I1209 08:48:38.615986 4786 scope.go:117] "RemoveContainer" containerID="a5ac2fcede558f2990ce8303bba6658325f2311f8a1bbc2d660a7e902efd25fc" Dec 09 08:48:38 crc kubenswrapper[4786]: I1209 08:48:38.616079 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 09 08:48:38 crc kubenswrapper[4786]: I1209 08:48:38.619235 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-857d94f549-sdd5j" event={"ID":"ec625a75-4b83-4e27-866c-400c8d5b4efd","Type":"ContainerStarted","Data":"c8502a4f8de6a8f63cfc3ae4ab96f9a50a187fd444907b4fcc87218b5ebf2914"} Dec 09 08:48:38 crc kubenswrapper[4786]: I1209 08:48:38.620852 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-857d94f549-sdd5j" Dec 09 08:48:38 crc kubenswrapper[4786]: I1209 08:48:38.626945 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-857d94f549-sdd5j" Dec 09 08:48:38 crc kubenswrapper[4786]: I1209 08:48:38.630545 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: 
"f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 08:48:38 crc kubenswrapper[4786]: I1209 08:48:38.645394 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-857d94f549-sdd5j" podStartSLOduration=55.645375646 podStartE2EDuration="55.645375646s" podCreationTimestamp="2025-12-09 08:47:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:48:38.644327701 +0000 UTC m=+284.527948927" watchObservedRunningTime="2025-12-09 08:48:38.645375646 +0000 UTC m=+284.528996862" Dec 09 08:48:38 crc kubenswrapper[4786]: I1209 08:48:38.689780 4786 scope.go:117] "RemoveContainer" containerID="a5ac2fcede558f2990ce8303bba6658325f2311f8a1bbc2d660a7e902efd25fc" Dec 09 08:48:38 crc kubenswrapper[4786]: E1209 08:48:38.690690 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5ac2fcede558f2990ce8303bba6658325f2311f8a1bbc2d660a7e902efd25fc\": container with ID starting with a5ac2fcede558f2990ce8303bba6658325f2311f8a1bbc2d660a7e902efd25fc not found: ID does not exist" containerID="a5ac2fcede558f2990ce8303bba6658325f2311f8a1bbc2d660a7e902efd25fc" Dec 09 08:48:38 crc kubenswrapper[4786]: I1209 08:48:38.690750 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5ac2fcede558f2990ce8303bba6658325f2311f8a1bbc2d660a7e902efd25fc"} err="failed to get container status \"a5ac2fcede558f2990ce8303bba6658325f2311f8a1bbc2d660a7e902efd25fc\": rpc error: code = NotFound desc = could not find container \"a5ac2fcede558f2990ce8303bba6658325f2311f8a1bbc2d660a7e902efd25fc\": container with ID starting with a5ac2fcede558f2990ce8303bba6658325f2311f8a1bbc2d660a7e902efd25fc not found: ID does not exist" Dec 09 08:48:38 crc kubenswrapper[4786]: 
I1209 08:48:38.716879 4786 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 09 08:48:38 crc kubenswrapper[4786]: I1209 08:48:38.716926 4786 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Dec 09 08:48:38 crc kubenswrapper[4786]: I1209 08:48:38.716936 4786 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Dec 09 08:48:38 crc kubenswrapper[4786]: I1209 08:48:38.716946 4786 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 09 08:48:38 crc kubenswrapper[4786]: I1209 08:48:38.716955 4786 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Dec 09 08:48:39 crc kubenswrapper[4786]: I1209 08:48:39.198543 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Dec 09 08:48:39 crc kubenswrapper[4786]: I1209 08:48:39.252801 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 09 08:48:39 crc kubenswrapper[4786]: I1209 08:48:39.610203 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 09 08:48:54 crc kubenswrapper[4786]: I1209 08:48:54.881144 4786 cert_rotation.go:91] certificate rotation detected, shutting down client 
connections to start using new credentials Dec 09 08:48:56 crc kubenswrapper[4786]: I1209 08:48:56.735291 4786 generic.go:334] "Generic (PLEG): container finished" podID="b128968a-caa6-46be-be15-79971a310e5c" containerID="8b284375f145834322161af06139a89e6edbd55a682391331b64b5ffd97b6e9a" exitCode=0 Dec 09 08:48:56 crc kubenswrapper[4786]: I1209 08:48:56.735411 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-thfng" event={"ID":"b128968a-caa6-46be-be15-79971a310e5c","Type":"ContainerDied","Data":"8b284375f145834322161af06139a89e6edbd55a682391331b64b5ffd97b6e9a"} Dec 09 08:48:56 crc kubenswrapper[4786]: I1209 08:48:56.736388 4786 scope.go:117] "RemoveContainer" containerID="8b284375f145834322161af06139a89e6edbd55a682391331b64b5ffd97b6e9a" Dec 09 08:48:57 crc kubenswrapper[4786]: I1209 08:48:57.749535 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-thfng" event={"ID":"b128968a-caa6-46be-be15-79971a310e5c","Type":"ContainerStarted","Data":"a0340fc378e6c3a380597a7b45ad1e5bca554dce19e16f1fd545cefc886d235f"} Dec 09 08:48:57 crc kubenswrapper[4786]: I1209 08:48:57.750532 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-thfng" Dec 09 08:48:57 crc kubenswrapper[4786]: I1209 08:48:57.753312 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-thfng" Dec 09 08:49:09 crc kubenswrapper[4786]: I1209 08:49:09.332701 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dklgh"] Dec 09 08:49:09 crc kubenswrapper[4786]: I1209 08:49:09.333968 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-dklgh" podUID="138bf803-28c5-4a55-a0e4-48b3b2069673" 
containerName="controller-manager" containerID="cri-o://fb2249068f4f012418db748dfca5b35346a7eb6b2d90edb87654bce7a0fbcfe5" gracePeriod=30 Dec 09 08:49:09 crc kubenswrapper[4786]: I1209 08:49:09.467326 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-crld4"] Dec 09 08:49:09 crc kubenswrapper[4786]: I1209 08:49:09.467627 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-crld4" podUID="d45771b0-7251-4e48-83dc-49322a76677c" containerName="route-controller-manager" containerID="cri-o://7cdd09304ca3c4db9802c3c7d74719893a168749bea9674c5c7589574940050d" gracePeriod=30 Dec 09 08:49:09 crc kubenswrapper[4786]: I1209 08:49:09.818552 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-crld4" Dec 09 08:49:09 crc kubenswrapper[4786]: I1209 08:49:09.821237 4786 generic.go:334] "Generic (PLEG): container finished" podID="d45771b0-7251-4e48-83dc-49322a76677c" containerID="7cdd09304ca3c4db9802c3c7d74719893a168749bea9674c5c7589574940050d" exitCode=0 Dec 09 08:49:09 crc kubenswrapper[4786]: I1209 08:49:09.821331 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-crld4" event={"ID":"d45771b0-7251-4e48-83dc-49322a76677c","Type":"ContainerDied","Data":"7cdd09304ca3c4db9802c3c7d74719893a168749bea9674c5c7589574940050d"} Dec 09 08:49:09 crc kubenswrapper[4786]: I1209 08:49:09.821383 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-crld4" event={"ID":"d45771b0-7251-4e48-83dc-49322a76677c","Type":"ContainerDied","Data":"237258b77eccb606d57ccddf65914afa8c3c96a5419530fb9f8dde9d10bf8830"} Dec 09 08:49:09 crc kubenswrapper[4786]: I1209 08:49:09.821407 
4786 scope.go:117] "RemoveContainer" containerID="7cdd09304ca3c4db9802c3c7d74719893a168749bea9674c5c7589574940050d" Dec 09 08:49:09 crc kubenswrapper[4786]: I1209 08:49:09.821560 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-crld4" Dec 09 08:49:09 crc kubenswrapper[4786]: I1209 08:49:09.825089 4786 generic.go:334] "Generic (PLEG): container finished" podID="138bf803-28c5-4a55-a0e4-48b3b2069673" containerID="fb2249068f4f012418db748dfca5b35346a7eb6b2d90edb87654bce7a0fbcfe5" exitCode=0 Dec 09 08:49:09 crc kubenswrapper[4786]: I1209 08:49:09.825153 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dklgh" event={"ID":"138bf803-28c5-4a55-a0e4-48b3b2069673","Type":"ContainerDied","Data":"fb2249068f4f012418db748dfca5b35346a7eb6b2d90edb87654bce7a0fbcfe5"} Dec 09 08:49:09 crc kubenswrapper[4786]: I1209 08:49:09.841747 4786 scope.go:117] "RemoveContainer" containerID="7cdd09304ca3c4db9802c3c7d74719893a168749bea9674c5c7589574940050d" Dec 09 08:49:09 crc kubenswrapper[4786]: E1209 08:49:09.842851 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cdd09304ca3c4db9802c3c7d74719893a168749bea9674c5c7589574940050d\": container with ID starting with 7cdd09304ca3c4db9802c3c7d74719893a168749bea9674c5c7589574940050d not found: ID does not exist" containerID="7cdd09304ca3c4db9802c3c7d74719893a168749bea9674c5c7589574940050d" Dec 09 08:49:09 crc kubenswrapper[4786]: I1209 08:49:09.842917 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cdd09304ca3c4db9802c3c7d74719893a168749bea9674c5c7589574940050d"} err="failed to get container status \"7cdd09304ca3c4db9802c3c7d74719893a168749bea9674c5c7589574940050d\": rpc error: code = NotFound desc = could not find container 
\"7cdd09304ca3c4db9802c3c7d74719893a168749bea9674c5c7589574940050d\": container with ID starting with 7cdd09304ca3c4db9802c3c7d74719893a168749bea9674c5c7589574940050d not found: ID does not exist" Dec 09 08:49:09 crc kubenswrapper[4786]: I1209 08:49:09.962638 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45771b0-7251-4e48-83dc-49322a76677c-config\") pod \"d45771b0-7251-4e48-83dc-49322a76677c\" (UID: \"d45771b0-7251-4e48-83dc-49322a76677c\") " Dec 09 08:49:09 crc kubenswrapper[4786]: I1209 08:49:09.962756 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d45771b0-7251-4e48-83dc-49322a76677c-client-ca\") pod \"d45771b0-7251-4e48-83dc-49322a76677c\" (UID: \"d45771b0-7251-4e48-83dc-49322a76677c\") " Dec 09 08:49:09 crc kubenswrapper[4786]: I1209 08:49:09.962788 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkzvx\" (UniqueName: \"kubernetes.io/projected/d45771b0-7251-4e48-83dc-49322a76677c-kube-api-access-fkzvx\") pod \"d45771b0-7251-4e48-83dc-49322a76677c\" (UID: \"d45771b0-7251-4e48-83dc-49322a76677c\") " Dec 09 08:49:09 crc kubenswrapper[4786]: I1209 08:49:09.962866 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45771b0-7251-4e48-83dc-49322a76677c-serving-cert\") pod \"d45771b0-7251-4e48-83dc-49322a76677c\" (UID: \"d45771b0-7251-4e48-83dc-49322a76677c\") " Dec 09 08:49:09 crc kubenswrapper[4786]: I1209 08:49:09.963883 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45771b0-7251-4e48-83dc-49322a76677c-config" (OuterVolumeSpecName: "config") pod "d45771b0-7251-4e48-83dc-49322a76677c" (UID: "d45771b0-7251-4e48-83dc-49322a76677c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 08:49:09 crc kubenswrapper[4786]: I1209 08:49:09.964079 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45771b0-7251-4e48-83dc-49322a76677c-client-ca" (OuterVolumeSpecName: "client-ca") pod "d45771b0-7251-4e48-83dc-49322a76677c" (UID: "d45771b0-7251-4e48-83dc-49322a76677c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 08:49:09 crc kubenswrapper[4786]: I1209 08:49:09.970842 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45771b0-7251-4e48-83dc-49322a76677c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d45771b0-7251-4e48-83dc-49322a76677c" (UID: "d45771b0-7251-4e48-83dc-49322a76677c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 08:49:09 crc kubenswrapper[4786]: I1209 08:49:09.981651 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45771b0-7251-4e48-83dc-49322a76677c-kube-api-access-fkzvx" (OuterVolumeSpecName: "kube-api-access-fkzvx") pod "d45771b0-7251-4e48-83dc-49322a76677c" (UID: "d45771b0-7251-4e48-83dc-49322a76677c"). InnerVolumeSpecName "kube-api-access-fkzvx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 08:49:10 crc kubenswrapper[4786]: I1209 08:49:10.064117 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45771b0-7251-4e48-83dc-49322a76677c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 08:49:10 crc kubenswrapper[4786]: I1209 08:49:10.064171 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45771b0-7251-4e48-83dc-49322a76677c-config\") on node \"crc\" DevicePath \"\"" Dec 09 08:49:10 crc kubenswrapper[4786]: I1209 08:49:10.064180 4786 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d45771b0-7251-4e48-83dc-49322a76677c-client-ca\") on node \"crc\" DevicePath \"\"" Dec 09 08:49:10 crc kubenswrapper[4786]: I1209 08:49:10.064193 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkzvx\" (UniqueName: \"kubernetes.io/projected/d45771b0-7251-4e48-83dc-49322a76677c-kube-api-access-fkzvx\") on node \"crc\" DevicePath \"\"" Dec 09 08:49:10 crc kubenswrapper[4786]: I1209 08:49:10.157839 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-crld4"] Dec 09 08:49:10 crc kubenswrapper[4786]: I1209 08:49:10.161930 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-crld4"] Dec 09 08:49:10 crc kubenswrapper[4786]: I1209 08:49:10.163291 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dklgh" Dec 09 08:49:10 crc kubenswrapper[4786]: I1209 08:49:10.267033 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/138bf803-28c5-4a55-a0e4-48b3b2069673-client-ca\") pod \"138bf803-28c5-4a55-a0e4-48b3b2069673\" (UID: \"138bf803-28c5-4a55-a0e4-48b3b2069673\") " Dec 09 08:49:10 crc kubenswrapper[4786]: I1209 08:49:10.267369 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/138bf803-28c5-4a55-a0e4-48b3b2069673-config\") pod \"138bf803-28c5-4a55-a0e4-48b3b2069673\" (UID: \"138bf803-28c5-4a55-a0e4-48b3b2069673\") " Dec 09 08:49:10 crc kubenswrapper[4786]: I1209 08:49:10.267572 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpvp4\" (UniqueName: \"kubernetes.io/projected/138bf803-28c5-4a55-a0e4-48b3b2069673-kube-api-access-lpvp4\") pod \"138bf803-28c5-4a55-a0e4-48b3b2069673\" (UID: \"138bf803-28c5-4a55-a0e4-48b3b2069673\") " Dec 09 08:49:10 crc kubenswrapper[4786]: I1209 08:49:10.267690 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/138bf803-28c5-4a55-a0e4-48b3b2069673-serving-cert\") pod \"138bf803-28c5-4a55-a0e4-48b3b2069673\" (UID: \"138bf803-28c5-4a55-a0e4-48b3b2069673\") " Dec 09 08:49:10 crc kubenswrapper[4786]: I1209 08:49:10.267788 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/138bf803-28c5-4a55-a0e4-48b3b2069673-proxy-ca-bundles\") pod \"138bf803-28c5-4a55-a0e4-48b3b2069673\" (UID: \"138bf803-28c5-4a55-a0e4-48b3b2069673\") " Dec 09 08:49:10 crc kubenswrapper[4786]: I1209 08:49:10.268996 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/138bf803-28c5-4a55-a0e4-48b3b2069673-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "138bf803-28c5-4a55-a0e4-48b3b2069673" (UID: "138bf803-28c5-4a55-a0e4-48b3b2069673"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 08:49:10 crc kubenswrapper[4786]: I1209 08:49:10.269123 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/138bf803-28c5-4a55-a0e4-48b3b2069673-config" (OuterVolumeSpecName: "config") pod "138bf803-28c5-4a55-a0e4-48b3b2069673" (UID: "138bf803-28c5-4a55-a0e4-48b3b2069673"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 08:49:10 crc kubenswrapper[4786]: I1209 08:49:10.269286 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/138bf803-28c5-4a55-a0e4-48b3b2069673-client-ca" (OuterVolumeSpecName: "client-ca") pod "138bf803-28c5-4a55-a0e4-48b3b2069673" (UID: "138bf803-28c5-4a55-a0e4-48b3b2069673"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 08:49:10 crc kubenswrapper[4786]: I1209 08:49:10.271471 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/138bf803-28c5-4a55-a0e4-48b3b2069673-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "138bf803-28c5-4a55-a0e4-48b3b2069673" (UID: "138bf803-28c5-4a55-a0e4-48b3b2069673"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 08:49:10 crc kubenswrapper[4786]: I1209 08:49:10.272178 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/138bf803-28c5-4a55-a0e4-48b3b2069673-kube-api-access-lpvp4" (OuterVolumeSpecName: "kube-api-access-lpvp4") pod "138bf803-28c5-4a55-a0e4-48b3b2069673" (UID: "138bf803-28c5-4a55-a0e4-48b3b2069673"). InnerVolumeSpecName "kube-api-access-lpvp4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 08:49:10 crc kubenswrapper[4786]: I1209 08:49:10.370080 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpvp4\" (UniqueName: \"kubernetes.io/projected/138bf803-28c5-4a55-a0e4-48b3b2069673-kube-api-access-lpvp4\") on node \"crc\" DevicePath \"\"" Dec 09 08:49:10 crc kubenswrapper[4786]: I1209 08:49:10.370170 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/138bf803-28c5-4a55-a0e4-48b3b2069673-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 08:49:10 crc kubenswrapper[4786]: I1209 08:49:10.370198 4786 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/138bf803-28c5-4a55-a0e4-48b3b2069673-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 09 08:49:10 crc kubenswrapper[4786]: I1209 08:49:10.370221 4786 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/138bf803-28c5-4a55-a0e4-48b3b2069673-client-ca\") on node \"crc\" DevicePath \"\"" Dec 09 08:49:10 crc kubenswrapper[4786]: I1209 08:49:10.370245 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/138bf803-28c5-4a55-a0e4-48b3b2069673-config\") on node \"crc\" DevicePath \"\"" Dec 09 08:49:10 crc kubenswrapper[4786]: I1209 08:49:10.413200 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-64f4b76bd8-j4cgp"] Dec 09 08:49:10 crc kubenswrapper[4786]: E1209 08:49:10.413581 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="138bf803-28c5-4a55-a0e4-48b3b2069673" containerName="controller-manager" Dec 09 08:49:10 crc kubenswrapper[4786]: I1209 08:49:10.413610 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="138bf803-28c5-4a55-a0e4-48b3b2069673" containerName="controller-manager" Dec 09 08:49:10 crc 
kubenswrapper[4786]: E1209 08:49:10.413628 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d45771b0-7251-4e48-83dc-49322a76677c" containerName="route-controller-manager" Dec 09 08:49:10 crc kubenswrapper[4786]: I1209 08:49:10.413637 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="d45771b0-7251-4e48-83dc-49322a76677c" containerName="route-controller-manager" Dec 09 08:49:10 crc kubenswrapper[4786]: I1209 08:49:10.413770 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="d45771b0-7251-4e48-83dc-49322a76677c" containerName="route-controller-manager" Dec 09 08:49:10 crc kubenswrapper[4786]: I1209 08:49:10.413803 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="138bf803-28c5-4a55-a0e4-48b3b2069673" containerName="controller-manager" Dec 09 08:49:10 crc kubenswrapper[4786]: I1209 08:49:10.414319 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-64f4b76bd8-j4cgp" Dec 09 08:49:10 crc kubenswrapper[4786]: I1209 08:49:10.424873 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-64f4b76bd8-j4cgp"] Dec 09 08:49:10 crc kubenswrapper[4786]: I1209 08:49:10.471360 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96824228-c200-421d-a9b2-ae879f3f5674-config\") pod \"controller-manager-64f4b76bd8-j4cgp\" (UID: \"96824228-c200-421d-a9b2-ae879f3f5674\") " pod="openshift-controller-manager/controller-manager-64f4b76bd8-j4cgp" Dec 09 08:49:10 crc kubenswrapper[4786]: I1209 08:49:10.471457 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96824228-c200-421d-a9b2-ae879f3f5674-serving-cert\") pod \"controller-manager-64f4b76bd8-j4cgp\" (UID: \"96824228-c200-421d-a9b2-ae879f3f5674\") 
" pod="openshift-controller-manager/controller-manager-64f4b76bd8-j4cgp" Dec 09 08:49:10 crc kubenswrapper[4786]: I1209 08:49:10.471487 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/96824228-c200-421d-a9b2-ae879f3f5674-proxy-ca-bundles\") pod \"controller-manager-64f4b76bd8-j4cgp\" (UID: \"96824228-c200-421d-a9b2-ae879f3f5674\") " pod="openshift-controller-manager/controller-manager-64f4b76bd8-j4cgp" Dec 09 08:49:10 crc kubenswrapper[4786]: I1209 08:49:10.471573 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/96824228-c200-421d-a9b2-ae879f3f5674-client-ca\") pod \"controller-manager-64f4b76bd8-j4cgp\" (UID: \"96824228-c200-421d-a9b2-ae879f3f5674\") " pod="openshift-controller-manager/controller-manager-64f4b76bd8-j4cgp" Dec 09 08:49:10 crc kubenswrapper[4786]: I1209 08:49:10.471606 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4hpg\" (UniqueName: \"kubernetes.io/projected/96824228-c200-421d-a9b2-ae879f3f5674-kube-api-access-r4hpg\") pod \"controller-manager-64f4b76bd8-j4cgp\" (UID: \"96824228-c200-421d-a9b2-ae879f3f5674\") " pod="openshift-controller-manager/controller-manager-64f4b76bd8-j4cgp" Dec 09 08:49:10 crc kubenswrapper[4786]: I1209 08:49:10.574606 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/96824228-c200-421d-a9b2-ae879f3f5674-client-ca\") pod \"controller-manager-64f4b76bd8-j4cgp\" (UID: \"96824228-c200-421d-a9b2-ae879f3f5674\") " pod="openshift-controller-manager/controller-manager-64f4b76bd8-j4cgp" Dec 09 08:49:10 crc kubenswrapper[4786]: I1209 08:49:10.575322 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4hpg\" (UniqueName: 
\"kubernetes.io/projected/96824228-c200-421d-a9b2-ae879f3f5674-kube-api-access-r4hpg\") pod \"controller-manager-64f4b76bd8-j4cgp\" (UID: \"96824228-c200-421d-a9b2-ae879f3f5674\") " pod="openshift-controller-manager/controller-manager-64f4b76bd8-j4cgp" Dec 09 08:49:10 crc kubenswrapper[4786]: I1209 08:49:10.575389 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96824228-c200-421d-a9b2-ae879f3f5674-config\") pod \"controller-manager-64f4b76bd8-j4cgp\" (UID: \"96824228-c200-421d-a9b2-ae879f3f5674\") " pod="openshift-controller-manager/controller-manager-64f4b76bd8-j4cgp" Dec 09 08:49:10 crc kubenswrapper[4786]: I1209 08:49:10.575499 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96824228-c200-421d-a9b2-ae879f3f5674-serving-cert\") pod \"controller-manager-64f4b76bd8-j4cgp\" (UID: \"96824228-c200-421d-a9b2-ae879f3f5674\") " pod="openshift-controller-manager/controller-manager-64f4b76bd8-j4cgp" Dec 09 08:49:10 crc kubenswrapper[4786]: I1209 08:49:10.575544 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/96824228-c200-421d-a9b2-ae879f3f5674-proxy-ca-bundles\") pod \"controller-manager-64f4b76bd8-j4cgp\" (UID: \"96824228-c200-421d-a9b2-ae879f3f5674\") " pod="openshift-controller-manager/controller-manager-64f4b76bd8-j4cgp" Dec 09 08:49:10 crc kubenswrapper[4786]: I1209 08:49:10.576088 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/96824228-c200-421d-a9b2-ae879f3f5674-client-ca\") pod \"controller-manager-64f4b76bd8-j4cgp\" (UID: \"96824228-c200-421d-a9b2-ae879f3f5674\") " pod="openshift-controller-manager/controller-manager-64f4b76bd8-j4cgp" Dec 09 08:49:10 crc kubenswrapper[4786]: I1209 08:49:10.577250 4786 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96824228-c200-421d-a9b2-ae879f3f5674-config\") pod \"controller-manager-64f4b76bd8-j4cgp\" (UID: \"96824228-c200-421d-a9b2-ae879f3f5674\") " pod="openshift-controller-manager/controller-manager-64f4b76bd8-j4cgp" Dec 09 08:49:10 crc kubenswrapper[4786]: I1209 08:49:10.578486 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/96824228-c200-421d-a9b2-ae879f3f5674-proxy-ca-bundles\") pod \"controller-manager-64f4b76bd8-j4cgp\" (UID: \"96824228-c200-421d-a9b2-ae879f3f5674\") " pod="openshift-controller-manager/controller-manager-64f4b76bd8-j4cgp" Dec 09 08:49:10 crc kubenswrapper[4786]: I1209 08:49:10.584269 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96824228-c200-421d-a9b2-ae879f3f5674-serving-cert\") pod \"controller-manager-64f4b76bd8-j4cgp\" (UID: \"96824228-c200-421d-a9b2-ae879f3f5674\") " pod="openshift-controller-manager/controller-manager-64f4b76bd8-j4cgp" Dec 09 08:49:10 crc kubenswrapper[4786]: I1209 08:49:10.595356 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4hpg\" (UniqueName: \"kubernetes.io/projected/96824228-c200-421d-a9b2-ae879f3f5674-kube-api-access-r4hpg\") pod \"controller-manager-64f4b76bd8-j4cgp\" (UID: \"96824228-c200-421d-a9b2-ae879f3f5674\") " pod="openshift-controller-manager/controller-manager-64f4b76bd8-j4cgp" Dec 09 08:49:10 crc kubenswrapper[4786]: I1209 08:49:10.733869 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-64f4b76bd8-j4cgp" Dec 09 08:49:10 crc kubenswrapper[4786]: I1209 08:49:10.833861 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-dklgh" event={"ID":"138bf803-28c5-4a55-a0e4-48b3b2069673","Type":"ContainerDied","Data":"28d8beed1bf7be28f107b8e3c9b7e8a0a88547cd8e98e5abe3dc03ed9fc6765c"} Dec 09 08:49:10 crc kubenswrapper[4786]: I1209 08:49:10.833926 4786 scope.go:117] "RemoveContainer" containerID="fb2249068f4f012418db748dfca5b35346a7eb6b2d90edb87654bce7a0fbcfe5" Dec 09 08:49:10 crc kubenswrapper[4786]: I1209 08:49:10.833938 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-dklgh" Dec 09 08:49:10 crc kubenswrapper[4786]: I1209 08:49:10.870657 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dklgh"] Dec 09 08:49:10 crc kubenswrapper[4786]: I1209 08:49:10.873613 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-dklgh"] Dec 09 08:49:10 crc kubenswrapper[4786]: I1209 08:49:10.952089 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-64f4b76bd8-j4cgp"] Dec 09 08:49:11 crc kubenswrapper[4786]: I1209 08:49:11.195628 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="138bf803-28c5-4a55-a0e4-48b3b2069673" path="/var/lib/kubelet/pods/138bf803-28c5-4a55-a0e4-48b3b2069673/volumes" Dec 09 08:49:11 crc kubenswrapper[4786]: I1209 08:49:11.196862 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45771b0-7251-4e48-83dc-49322a76677c" path="/var/lib/kubelet/pods/d45771b0-7251-4e48-83dc-49322a76677c/volumes" Dec 09 08:49:11 crc kubenswrapper[4786]: I1209 08:49:11.424316 4786 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-route-controller-manager/route-controller-manager-c59b74f88-92xxs"] Dec 09 08:49:11 crc kubenswrapper[4786]: I1209 08:49:11.426080 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c59b74f88-92xxs" Dec 09 08:49:11 crc kubenswrapper[4786]: I1209 08:49:11.429256 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 09 08:49:11 crc kubenswrapper[4786]: I1209 08:49:11.429291 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 09 08:49:11 crc kubenswrapper[4786]: I1209 08:49:11.429758 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 09 08:49:11 crc kubenswrapper[4786]: I1209 08:49:11.429793 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 09 08:49:11 crc kubenswrapper[4786]: I1209 08:49:11.429944 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 09 08:49:11 crc kubenswrapper[4786]: I1209 08:49:11.433034 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 09 08:49:11 crc kubenswrapper[4786]: I1209 08:49:11.436467 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c59b74f88-92xxs"] Dec 09 08:49:11 crc kubenswrapper[4786]: I1209 08:49:11.487614 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e71f746c-32e3-4c8a-b768-b8d1f68ca8ff-serving-cert\") pod \"route-controller-manager-c59b74f88-92xxs\" (UID: 
\"e71f746c-32e3-4c8a-b768-b8d1f68ca8ff\") " pod="openshift-route-controller-manager/route-controller-manager-c59b74f88-92xxs" Dec 09 08:49:11 crc kubenswrapper[4786]: I1209 08:49:11.487693 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj899\" (UniqueName: \"kubernetes.io/projected/e71f746c-32e3-4c8a-b768-b8d1f68ca8ff-kube-api-access-kj899\") pod \"route-controller-manager-c59b74f88-92xxs\" (UID: \"e71f746c-32e3-4c8a-b768-b8d1f68ca8ff\") " pod="openshift-route-controller-manager/route-controller-manager-c59b74f88-92xxs" Dec 09 08:49:11 crc kubenswrapper[4786]: I1209 08:49:11.487740 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e71f746c-32e3-4c8a-b768-b8d1f68ca8ff-client-ca\") pod \"route-controller-manager-c59b74f88-92xxs\" (UID: \"e71f746c-32e3-4c8a-b768-b8d1f68ca8ff\") " pod="openshift-route-controller-manager/route-controller-manager-c59b74f88-92xxs" Dec 09 08:49:11 crc kubenswrapper[4786]: I1209 08:49:11.488150 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e71f746c-32e3-4c8a-b768-b8d1f68ca8ff-config\") pod \"route-controller-manager-c59b74f88-92xxs\" (UID: \"e71f746c-32e3-4c8a-b768-b8d1f68ca8ff\") " pod="openshift-route-controller-manager/route-controller-manager-c59b74f88-92xxs" Dec 09 08:49:11 crc kubenswrapper[4786]: I1209 08:49:11.590083 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e71f746c-32e3-4c8a-b768-b8d1f68ca8ff-serving-cert\") pod \"route-controller-manager-c59b74f88-92xxs\" (UID: \"e71f746c-32e3-4c8a-b768-b8d1f68ca8ff\") " pod="openshift-route-controller-manager/route-controller-manager-c59b74f88-92xxs" Dec 09 08:49:11 crc kubenswrapper[4786]: I1209 08:49:11.590186 4786 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj899\" (UniqueName: \"kubernetes.io/projected/e71f746c-32e3-4c8a-b768-b8d1f68ca8ff-kube-api-access-kj899\") pod \"route-controller-manager-c59b74f88-92xxs\" (UID: \"e71f746c-32e3-4c8a-b768-b8d1f68ca8ff\") " pod="openshift-route-controller-manager/route-controller-manager-c59b74f88-92xxs" Dec 09 08:49:11 crc kubenswrapper[4786]: I1209 08:49:11.590220 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e71f746c-32e3-4c8a-b768-b8d1f68ca8ff-client-ca\") pod \"route-controller-manager-c59b74f88-92xxs\" (UID: \"e71f746c-32e3-4c8a-b768-b8d1f68ca8ff\") " pod="openshift-route-controller-manager/route-controller-manager-c59b74f88-92xxs" Dec 09 08:49:11 crc kubenswrapper[4786]: I1209 08:49:11.590284 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e71f746c-32e3-4c8a-b768-b8d1f68ca8ff-config\") pod \"route-controller-manager-c59b74f88-92xxs\" (UID: \"e71f746c-32e3-4c8a-b768-b8d1f68ca8ff\") " pod="openshift-route-controller-manager/route-controller-manager-c59b74f88-92xxs" Dec 09 08:49:11 crc kubenswrapper[4786]: I1209 08:49:11.591375 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e71f746c-32e3-4c8a-b768-b8d1f68ca8ff-client-ca\") pod \"route-controller-manager-c59b74f88-92xxs\" (UID: \"e71f746c-32e3-4c8a-b768-b8d1f68ca8ff\") " pod="openshift-route-controller-manager/route-controller-manager-c59b74f88-92xxs" Dec 09 08:49:11 crc kubenswrapper[4786]: I1209 08:49:11.591819 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e71f746c-32e3-4c8a-b768-b8d1f68ca8ff-config\") pod \"route-controller-manager-c59b74f88-92xxs\" (UID: \"e71f746c-32e3-4c8a-b768-b8d1f68ca8ff\") " 
pod="openshift-route-controller-manager/route-controller-manager-c59b74f88-92xxs" Dec 09 08:49:11 crc kubenswrapper[4786]: I1209 08:49:11.597703 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e71f746c-32e3-4c8a-b768-b8d1f68ca8ff-serving-cert\") pod \"route-controller-manager-c59b74f88-92xxs\" (UID: \"e71f746c-32e3-4c8a-b768-b8d1f68ca8ff\") " pod="openshift-route-controller-manager/route-controller-manager-c59b74f88-92xxs" Dec 09 08:49:11 crc kubenswrapper[4786]: I1209 08:49:11.611484 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kj899\" (UniqueName: \"kubernetes.io/projected/e71f746c-32e3-4c8a-b768-b8d1f68ca8ff-kube-api-access-kj899\") pod \"route-controller-manager-c59b74f88-92xxs\" (UID: \"e71f746c-32e3-4c8a-b768-b8d1f68ca8ff\") " pod="openshift-route-controller-manager/route-controller-manager-c59b74f88-92xxs" Dec 09 08:49:11 crc kubenswrapper[4786]: I1209 08:49:11.742917 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c59b74f88-92xxs" Dec 09 08:49:11 crc kubenswrapper[4786]: I1209 08:49:11.847029 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64f4b76bd8-j4cgp" event={"ID":"96824228-c200-421d-a9b2-ae879f3f5674","Type":"ContainerStarted","Data":"927e85f42d3bfba8c4770f1b79ea8f4301b7bb83db5632825ea9a88c4bf0ccc2"} Dec 09 08:49:11 crc kubenswrapper[4786]: I1209 08:49:11.848438 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64f4b76bd8-j4cgp" event={"ID":"96824228-c200-421d-a9b2-ae879f3f5674","Type":"ContainerStarted","Data":"4fb2bf7d44fda1949541ffd16cf435037dd4ba81385e6520073725a10a7f7cf7"} Dec 09 08:49:11 crc kubenswrapper[4786]: I1209 08:49:11.848509 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-64f4b76bd8-j4cgp" Dec 09 08:49:11 crc kubenswrapper[4786]: I1209 08:49:11.853136 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-64f4b76bd8-j4cgp" Dec 09 08:49:11 crc kubenswrapper[4786]: I1209 08:49:11.863970 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-64f4b76bd8-j4cgp" podStartSLOduration=2.863951889 podStartE2EDuration="2.863951889s" podCreationTimestamp="2025-12-09 08:49:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:49:11.863020256 +0000 UTC m=+317.746641492" watchObservedRunningTime="2025-12-09 08:49:11.863951889 +0000 UTC m=+317.747573115" Dec 09 08:49:12 crc kubenswrapper[4786]: I1209 08:49:12.014037 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-64f4b76bd8-j4cgp"] Dec 09 08:49:12 
crc kubenswrapper[4786]: I1209 08:49:12.033323 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c59b74f88-92xxs"] Dec 09 08:49:12 crc kubenswrapper[4786]: I1209 08:49:12.197034 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c59b74f88-92xxs"] Dec 09 08:49:12 crc kubenswrapper[4786]: I1209 08:49:12.855204 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c59b74f88-92xxs" event={"ID":"e71f746c-32e3-4c8a-b768-b8d1f68ca8ff","Type":"ContainerStarted","Data":"5c1da3aaea2d686058273307731a7b9f258c8382fd892f7383b50879e5e5cd9d"} Dec 09 08:49:12 crc kubenswrapper[4786]: I1209 08:49:12.855807 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-c59b74f88-92xxs" Dec 09 08:49:12 crc kubenswrapper[4786]: I1209 08:49:12.855840 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c59b74f88-92xxs" event={"ID":"e71f746c-32e3-4c8a-b768-b8d1f68ca8ff","Type":"ContainerStarted","Data":"2334aacff4052e1adda482c8e828453c782061a8cf64067ffa215181d34ef8b0"} Dec 09 08:49:12 crc kubenswrapper[4786]: I1209 08:49:12.856099 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-c59b74f88-92xxs" podUID="e71f746c-32e3-4c8a-b768-b8d1f68ca8ff" containerName="route-controller-manager" containerID="cri-o://5c1da3aaea2d686058273307731a7b9f258c8382fd892f7383b50879e5e5cd9d" gracePeriod=30 Dec 09 08:49:12 crc kubenswrapper[4786]: I1209 08:49:12.865163 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-c59b74f88-92xxs" Dec 09 08:49:12 crc kubenswrapper[4786]: I1209 08:49:12.883550 4786 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-c59b74f88-92xxs" podStartSLOduration=3.883525353 podStartE2EDuration="3.883525353s" podCreationTimestamp="2025-12-09 08:49:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:49:12.882696063 +0000 UTC m=+318.766317289" watchObservedRunningTime="2025-12-09 08:49:12.883525353 +0000 UTC m=+318.767146579" Dec 09 08:49:13 crc kubenswrapper[4786]: I1209 08:49:13.254566 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c59b74f88-92xxs" Dec 09 08:49:13 crc kubenswrapper[4786]: I1209 08:49:13.298292 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-657648d9fc-wsngc"] Dec 09 08:49:13 crc kubenswrapper[4786]: E1209 08:49:13.298974 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e71f746c-32e3-4c8a-b768-b8d1f68ca8ff" containerName="route-controller-manager" Dec 09 08:49:13 crc kubenswrapper[4786]: I1209 08:49:13.299131 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="e71f746c-32e3-4c8a-b768-b8d1f68ca8ff" containerName="route-controller-manager" Dec 09 08:49:13 crc kubenswrapper[4786]: I1209 08:49:13.299379 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="e71f746c-32e3-4c8a-b768-b8d1f68ca8ff" containerName="route-controller-manager" Dec 09 08:49:13 crc kubenswrapper[4786]: I1209 08:49:13.300114 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-657648d9fc-wsngc" Dec 09 08:49:13 crc kubenswrapper[4786]: I1209 08:49:13.305925 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-657648d9fc-wsngc"] Dec 09 08:49:13 crc kubenswrapper[4786]: I1209 08:49:13.320023 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e71f746c-32e3-4c8a-b768-b8d1f68ca8ff-serving-cert\") pod \"e71f746c-32e3-4c8a-b768-b8d1f68ca8ff\" (UID: \"e71f746c-32e3-4c8a-b768-b8d1f68ca8ff\") " Dec 09 08:49:13 crc kubenswrapper[4786]: I1209 08:49:13.320409 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e71f746c-32e3-4c8a-b768-b8d1f68ca8ff-config\") pod \"e71f746c-32e3-4c8a-b768-b8d1f68ca8ff\" (UID: \"e71f746c-32e3-4c8a-b768-b8d1f68ca8ff\") " Dec 09 08:49:13 crc kubenswrapper[4786]: I1209 08:49:13.320677 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kj899\" (UniqueName: \"kubernetes.io/projected/e71f746c-32e3-4c8a-b768-b8d1f68ca8ff-kube-api-access-kj899\") pod \"e71f746c-32e3-4c8a-b768-b8d1f68ca8ff\" (UID: \"e71f746c-32e3-4c8a-b768-b8d1f68ca8ff\") " Dec 09 08:49:13 crc kubenswrapper[4786]: I1209 08:49:13.320815 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e71f746c-32e3-4c8a-b768-b8d1f68ca8ff-client-ca\") pod \"e71f746c-32e3-4c8a-b768-b8d1f68ca8ff\" (UID: \"e71f746c-32e3-4c8a-b768-b8d1f68ca8ff\") " Dec 09 08:49:13 crc kubenswrapper[4786]: I1209 08:49:13.321323 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e71f746c-32e3-4c8a-b768-b8d1f68ca8ff-config" (OuterVolumeSpecName: "config") pod "e71f746c-32e3-4c8a-b768-b8d1f68ca8ff" (UID: 
"e71f746c-32e3-4c8a-b768-b8d1f68ca8ff"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 08:49:13 crc kubenswrapper[4786]: I1209 08:49:13.321451 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e71f746c-32e3-4c8a-b768-b8d1f68ca8ff-client-ca" (OuterVolumeSpecName: "client-ca") pod "e71f746c-32e3-4c8a-b768-b8d1f68ca8ff" (UID: "e71f746c-32e3-4c8a-b768-b8d1f68ca8ff"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 08:49:13 crc kubenswrapper[4786]: I1209 08:49:13.332516 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e71f746c-32e3-4c8a-b768-b8d1f68ca8ff-kube-api-access-kj899" (OuterVolumeSpecName: "kube-api-access-kj899") pod "e71f746c-32e3-4c8a-b768-b8d1f68ca8ff" (UID: "e71f746c-32e3-4c8a-b768-b8d1f68ca8ff"). InnerVolumeSpecName "kube-api-access-kj899". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 08:49:13 crc kubenswrapper[4786]: I1209 08:49:13.332525 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e71f746c-32e3-4c8a-b768-b8d1f68ca8ff-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e71f746c-32e3-4c8a-b768-b8d1f68ca8ff" (UID: "e71f746c-32e3-4c8a-b768-b8d1f68ca8ff"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 08:49:13 crc kubenswrapper[4786]: I1209 08:49:13.423081 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2cb9b960-1ba7-4b46-a6d2-0196186fbac2-client-ca\") pod \"route-controller-manager-657648d9fc-wsngc\" (UID: \"2cb9b960-1ba7-4b46-a6d2-0196186fbac2\") " pod="openshift-route-controller-manager/route-controller-manager-657648d9fc-wsngc" Dec 09 08:49:13 crc kubenswrapper[4786]: I1209 08:49:13.423155 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5t9j\" (UniqueName: \"kubernetes.io/projected/2cb9b960-1ba7-4b46-a6d2-0196186fbac2-kube-api-access-m5t9j\") pod \"route-controller-manager-657648d9fc-wsngc\" (UID: \"2cb9b960-1ba7-4b46-a6d2-0196186fbac2\") " pod="openshift-route-controller-manager/route-controller-manager-657648d9fc-wsngc" Dec 09 08:49:13 crc kubenswrapper[4786]: I1209 08:49:13.423315 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cb9b960-1ba7-4b46-a6d2-0196186fbac2-serving-cert\") pod \"route-controller-manager-657648d9fc-wsngc\" (UID: \"2cb9b960-1ba7-4b46-a6d2-0196186fbac2\") " pod="openshift-route-controller-manager/route-controller-manager-657648d9fc-wsngc" Dec 09 08:49:13 crc kubenswrapper[4786]: I1209 08:49:13.423368 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cb9b960-1ba7-4b46-a6d2-0196186fbac2-config\") pod \"route-controller-manager-657648d9fc-wsngc\" (UID: \"2cb9b960-1ba7-4b46-a6d2-0196186fbac2\") " pod="openshift-route-controller-manager/route-controller-manager-657648d9fc-wsngc" Dec 09 08:49:13 crc kubenswrapper[4786]: I1209 08:49:13.423408 4786 reconciler_common.go:293] "Volume detached for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/e71f746c-32e3-4c8a-b768-b8d1f68ca8ff-config\") on node \"crc\" DevicePath \"\"" Dec 09 08:49:13 crc kubenswrapper[4786]: I1209 08:49:13.423441 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kj899\" (UniqueName: \"kubernetes.io/projected/e71f746c-32e3-4c8a-b768-b8d1f68ca8ff-kube-api-access-kj899\") on node \"crc\" DevicePath \"\"" Dec 09 08:49:13 crc kubenswrapper[4786]: I1209 08:49:13.423455 4786 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e71f746c-32e3-4c8a-b768-b8d1f68ca8ff-client-ca\") on node \"crc\" DevicePath \"\"" Dec 09 08:49:13 crc kubenswrapper[4786]: I1209 08:49:13.423465 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e71f746c-32e3-4c8a-b768-b8d1f68ca8ff-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 08:49:13 crc kubenswrapper[4786]: I1209 08:49:13.525134 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cb9b960-1ba7-4b46-a6d2-0196186fbac2-serving-cert\") pod \"route-controller-manager-657648d9fc-wsngc\" (UID: \"2cb9b960-1ba7-4b46-a6d2-0196186fbac2\") " pod="openshift-route-controller-manager/route-controller-manager-657648d9fc-wsngc" Dec 09 08:49:13 crc kubenswrapper[4786]: I1209 08:49:13.525611 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cb9b960-1ba7-4b46-a6d2-0196186fbac2-config\") pod \"route-controller-manager-657648d9fc-wsngc\" (UID: \"2cb9b960-1ba7-4b46-a6d2-0196186fbac2\") " pod="openshift-route-controller-manager/route-controller-manager-657648d9fc-wsngc" Dec 09 08:49:13 crc kubenswrapper[4786]: I1209 08:49:13.525669 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/2cb9b960-1ba7-4b46-a6d2-0196186fbac2-client-ca\") pod \"route-controller-manager-657648d9fc-wsngc\" (UID: \"2cb9b960-1ba7-4b46-a6d2-0196186fbac2\") " pod="openshift-route-controller-manager/route-controller-manager-657648d9fc-wsngc" Dec 09 08:49:13 crc kubenswrapper[4786]: I1209 08:49:13.525700 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5t9j\" (UniqueName: \"kubernetes.io/projected/2cb9b960-1ba7-4b46-a6d2-0196186fbac2-kube-api-access-m5t9j\") pod \"route-controller-manager-657648d9fc-wsngc\" (UID: \"2cb9b960-1ba7-4b46-a6d2-0196186fbac2\") " pod="openshift-route-controller-manager/route-controller-manager-657648d9fc-wsngc" Dec 09 08:49:13 crc kubenswrapper[4786]: I1209 08:49:13.527329 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2cb9b960-1ba7-4b46-a6d2-0196186fbac2-client-ca\") pod \"route-controller-manager-657648d9fc-wsngc\" (UID: \"2cb9b960-1ba7-4b46-a6d2-0196186fbac2\") " pod="openshift-route-controller-manager/route-controller-manager-657648d9fc-wsngc" Dec 09 08:49:13 crc kubenswrapper[4786]: I1209 08:49:13.527816 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cb9b960-1ba7-4b46-a6d2-0196186fbac2-config\") pod \"route-controller-manager-657648d9fc-wsngc\" (UID: \"2cb9b960-1ba7-4b46-a6d2-0196186fbac2\") " pod="openshift-route-controller-manager/route-controller-manager-657648d9fc-wsngc" Dec 09 08:49:13 crc kubenswrapper[4786]: I1209 08:49:13.532674 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cb9b960-1ba7-4b46-a6d2-0196186fbac2-serving-cert\") pod \"route-controller-manager-657648d9fc-wsngc\" (UID: \"2cb9b960-1ba7-4b46-a6d2-0196186fbac2\") " pod="openshift-route-controller-manager/route-controller-manager-657648d9fc-wsngc" Dec 09 08:49:13 crc 
kubenswrapper[4786]: I1209 08:49:13.545726 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5t9j\" (UniqueName: \"kubernetes.io/projected/2cb9b960-1ba7-4b46-a6d2-0196186fbac2-kube-api-access-m5t9j\") pod \"route-controller-manager-657648d9fc-wsngc\" (UID: \"2cb9b960-1ba7-4b46-a6d2-0196186fbac2\") " pod="openshift-route-controller-manager/route-controller-manager-657648d9fc-wsngc" Dec 09 08:49:13 crc kubenswrapper[4786]: I1209 08:49:13.615591 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-657648d9fc-wsngc" Dec 09 08:49:13 crc kubenswrapper[4786]: I1209 08:49:13.825671 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-657648d9fc-wsngc"] Dec 09 08:49:13 crc kubenswrapper[4786]: I1209 08:49:13.866353 4786 generic.go:334] "Generic (PLEG): container finished" podID="e71f746c-32e3-4c8a-b768-b8d1f68ca8ff" containerID="5c1da3aaea2d686058273307731a7b9f258c8382fd892f7383b50879e5e5cd9d" exitCode=0 Dec 09 08:49:13 crc kubenswrapper[4786]: I1209 08:49:13.866441 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c59b74f88-92xxs" Dec 09 08:49:13 crc kubenswrapper[4786]: I1209 08:49:13.866525 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c59b74f88-92xxs" event={"ID":"e71f746c-32e3-4c8a-b768-b8d1f68ca8ff","Type":"ContainerDied","Data":"5c1da3aaea2d686058273307731a7b9f258c8382fd892f7383b50879e5e5cd9d"} Dec 09 08:49:13 crc kubenswrapper[4786]: I1209 08:49:13.866576 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c59b74f88-92xxs" event={"ID":"e71f746c-32e3-4c8a-b768-b8d1f68ca8ff","Type":"ContainerDied","Data":"2334aacff4052e1adda482c8e828453c782061a8cf64067ffa215181d34ef8b0"} Dec 09 08:49:13 crc kubenswrapper[4786]: I1209 08:49:13.866599 4786 scope.go:117] "RemoveContainer" containerID="5c1da3aaea2d686058273307731a7b9f258c8382fd892f7383b50879e5e5cd9d" Dec 09 08:49:13 crc kubenswrapper[4786]: I1209 08:49:13.871121 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-657648d9fc-wsngc" event={"ID":"2cb9b960-1ba7-4b46-a6d2-0196186fbac2","Type":"ContainerStarted","Data":"3dc6e27d3af35cba5207fc37be1758aaea15b4aca669af352444d0f86831d2ae"} Dec 09 08:49:13 crc kubenswrapper[4786]: I1209 08:49:13.871339 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-64f4b76bd8-j4cgp" podUID="96824228-c200-421d-a9b2-ae879f3f5674" containerName="controller-manager" containerID="cri-o://927e85f42d3bfba8c4770f1b79ea8f4301b7bb83db5632825ea9a88c4bf0ccc2" gracePeriod=30 Dec 09 08:49:13 crc kubenswrapper[4786]: I1209 08:49:13.902464 4786 scope.go:117] "RemoveContainer" containerID="5c1da3aaea2d686058273307731a7b9f258c8382fd892f7383b50879e5e5cd9d" Dec 09 08:49:13 crc kubenswrapper[4786]: E1209 08:49:13.903087 4786 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c1da3aaea2d686058273307731a7b9f258c8382fd892f7383b50879e5e5cd9d\": container with ID starting with 5c1da3aaea2d686058273307731a7b9f258c8382fd892f7383b50879e5e5cd9d not found: ID does not exist" containerID="5c1da3aaea2d686058273307731a7b9f258c8382fd892f7383b50879e5e5cd9d" Dec 09 08:49:13 crc kubenswrapper[4786]: I1209 08:49:13.903145 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c1da3aaea2d686058273307731a7b9f258c8382fd892f7383b50879e5e5cd9d"} err="failed to get container status \"5c1da3aaea2d686058273307731a7b9f258c8382fd892f7383b50879e5e5cd9d\": rpc error: code = NotFound desc = could not find container \"5c1da3aaea2d686058273307731a7b9f258c8382fd892f7383b50879e5e5cd9d\": container with ID starting with 5c1da3aaea2d686058273307731a7b9f258c8382fd892f7383b50879e5e5cd9d not found: ID does not exist" Dec 09 08:49:13 crc kubenswrapper[4786]: I1209 08:49:13.931693 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c59b74f88-92xxs"] Dec 09 08:49:13 crc kubenswrapper[4786]: I1209 08:49:13.934773 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c59b74f88-92xxs"] Dec 09 08:49:14 crc kubenswrapper[4786]: I1209 08:49:14.882946 4786 generic.go:334] "Generic (PLEG): container finished" podID="96824228-c200-421d-a9b2-ae879f3f5674" containerID="927e85f42d3bfba8c4770f1b79ea8f4301b7bb83db5632825ea9a88c4bf0ccc2" exitCode=0 Dec 09 08:49:14 crc kubenswrapper[4786]: I1209 08:49:14.883096 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64f4b76bd8-j4cgp" event={"ID":"96824228-c200-421d-a9b2-ae879f3f5674","Type":"ContainerDied","Data":"927e85f42d3bfba8c4770f1b79ea8f4301b7bb83db5632825ea9a88c4bf0ccc2"} Dec 09 08:49:15 crc 
kubenswrapper[4786]: I1209 08:49:15.242016 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e71f746c-32e3-4c8a-b768-b8d1f68ca8ff" path="/var/lib/kubelet/pods/e71f746c-32e3-4c8a-b768-b8d1f68ca8ff/volumes" Dec 09 08:49:15 crc kubenswrapper[4786]: I1209 08:49:15.898368 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-657648d9fc-wsngc" event={"ID":"2cb9b960-1ba7-4b46-a6d2-0196186fbac2","Type":"ContainerStarted","Data":"6963b84ebb449489281dbe02d91844bd37df8591b837acfbabda47d1900ab992"} Dec 09 08:49:15 crc kubenswrapper[4786]: I1209 08:49:15.900683 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-657648d9fc-wsngc" Dec 09 08:49:15 crc kubenswrapper[4786]: I1209 08:49:15.913701 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-657648d9fc-wsngc" Dec 09 08:49:15 crc kubenswrapper[4786]: I1209 08:49:15.920531 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-657648d9fc-wsngc" podStartSLOduration=3.92049938 podStartE2EDuration="3.92049938s" podCreationTimestamp="2025-12-09 08:49:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:49:15.919567027 +0000 UTC m=+321.803188303" watchObservedRunningTime="2025-12-09 08:49:15.92049938 +0000 UTC m=+321.804120626" Dec 09 08:49:16 crc kubenswrapper[4786]: I1209 08:49:16.015378 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-64f4b76bd8-j4cgp" Dec 09 08:49:16 crc kubenswrapper[4786]: I1209 08:49:16.050726 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5f784d6689-qxpvh"] Dec 09 08:49:16 crc kubenswrapper[4786]: E1209 08:49:16.051031 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96824228-c200-421d-a9b2-ae879f3f5674" containerName="controller-manager" Dec 09 08:49:16 crc kubenswrapper[4786]: I1209 08:49:16.051050 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="96824228-c200-421d-a9b2-ae879f3f5674" containerName="controller-manager" Dec 09 08:49:16 crc kubenswrapper[4786]: I1209 08:49:16.051172 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="96824228-c200-421d-a9b2-ae879f3f5674" containerName="controller-manager" Dec 09 08:49:16 crc kubenswrapper[4786]: I1209 08:49:16.051690 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5f784d6689-qxpvh" Dec 09 08:49:16 crc kubenswrapper[4786]: I1209 08:49:16.063306 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/96824228-c200-421d-a9b2-ae879f3f5674-proxy-ca-bundles\") pod \"96824228-c200-421d-a9b2-ae879f3f5674\" (UID: \"96824228-c200-421d-a9b2-ae879f3f5674\") " Dec 09 08:49:16 crc kubenswrapper[4786]: I1209 08:49:16.063365 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96824228-c200-421d-a9b2-ae879f3f5674-config\") pod \"96824228-c200-421d-a9b2-ae879f3f5674\" (UID: \"96824228-c200-421d-a9b2-ae879f3f5674\") " Dec 09 08:49:16 crc kubenswrapper[4786]: I1209 08:49:16.063409 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4hpg\" (UniqueName: \"kubernetes.io/projected/96824228-c200-421d-a9b2-ae879f3f5674-kube-api-access-r4hpg\") pod \"96824228-c200-421d-a9b2-ae879f3f5674\" (UID: \"96824228-c200-421d-a9b2-ae879f3f5674\") " Dec 09 08:49:16 crc kubenswrapper[4786]: I1209 08:49:16.063484 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96824228-c200-421d-a9b2-ae879f3f5674-serving-cert\") pod \"96824228-c200-421d-a9b2-ae879f3f5674\" (UID: \"96824228-c200-421d-a9b2-ae879f3f5674\") " Dec 09 08:49:16 crc kubenswrapper[4786]: I1209 08:49:16.063789 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/96824228-c200-421d-a9b2-ae879f3f5674-client-ca\") pod \"96824228-c200-421d-a9b2-ae879f3f5674\" (UID: \"96824228-c200-421d-a9b2-ae879f3f5674\") " Dec 09 08:49:16 crc kubenswrapper[4786]: I1209 08:49:16.066395 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/96824228-c200-421d-a9b2-ae879f3f5674-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "96824228-c200-421d-a9b2-ae879f3f5674" (UID: "96824228-c200-421d-a9b2-ae879f3f5674"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 08:49:16 crc kubenswrapper[4786]: I1209 08:49:16.072347 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5f784d6689-qxpvh"] Dec 09 08:49:16 crc kubenswrapper[4786]: I1209 08:49:16.073195 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96824228-c200-421d-a9b2-ae879f3f5674-config" (OuterVolumeSpecName: "config") pod "96824228-c200-421d-a9b2-ae879f3f5674" (UID: "96824228-c200-421d-a9b2-ae879f3f5674"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 08:49:16 crc kubenswrapper[4786]: I1209 08:49:16.077916 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96824228-c200-421d-a9b2-ae879f3f5674-client-ca" (OuterVolumeSpecName: "client-ca") pod "96824228-c200-421d-a9b2-ae879f3f5674" (UID: "96824228-c200-421d-a9b2-ae879f3f5674"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 08:49:16 crc kubenswrapper[4786]: I1209 08:49:16.078177 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96824228-c200-421d-a9b2-ae879f3f5674-kube-api-access-r4hpg" (OuterVolumeSpecName: "kube-api-access-r4hpg") pod "96824228-c200-421d-a9b2-ae879f3f5674" (UID: "96824228-c200-421d-a9b2-ae879f3f5674"). InnerVolumeSpecName "kube-api-access-r4hpg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 08:49:16 crc kubenswrapper[4786]: I1209 08:49:16.078398 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96824228-c200-421d-a9b2-ae879f3f5674-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "96824228-c200-421d-a9b2-ae879f3f5674" (UID: "96824228-c200-421d-a9b2-ae879f3f5674"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 08:49:16 crc kubenswrapper[4786]: I1209 08:49:16.165483 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a288121-94b5-4438-a0d0-c416914c161c-client-ca\") pod \"controller-manager-5f784d6689-qxpvh\" (UID: \"7a288121-94b5-4438-a0d0-c416914c161c\") " pod="openshift-controller-manager/controller-manager-5f784d6689-qxpvh" Dec 09 08:49:16 crc kubenswrapper[4786]: I1209 08:49:16.165587 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a288121-94b5-4438-a0d0-c416914c161c-serving-cert\") pod \"controller-manager-5f784d6689-qxpvh\" (UID: \"7a288121-94b5-4438-a0d0-c416914c161c\") " pod="openshift-controller-manager/controller-manager-5f784d6689-qxpvh" Dec 09 08:49:16 crc kubenswrapper[4786]: I1209 08:49:16.165622 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a288121-94b5-4438-a0d0-c416914c161c-config\") pod \"controller-manager-5f784d6689-qxpvh\" (UID: \"7a288121-94b5-4438-a0d0-c416914c161c\") " pod="openshift-controller-manager/controller-manager-5f784d6689-qxpvh" Dec 09 08:49:16 crc kubenswrapper[4786]: I1209 08:49:16.165651 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/7a288121-94b5-4438-a0d0-c416914c161c-proxy-ca-bundles\") pod \"controller-manager-5f784d6689-qxpvh\" (UID: \"7a288121-94b5-4438-a0d0-c416914c161c\") " pod="openshift-controller-manager/controller-manager-5f784d6689-qxpvh" Dec 09 08:49:16 crc kubenswrapper[4786]: I1209 08:49:16.165735 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqrn4\" (UniqueName: \"kubernetes.io/projected/7a288121-94b5-4438-a0d0-c416914c161c-kube-api-access-dqrn4\") pod \"controller-manager-5f784d6689-qxpvh\" (UID: \"7a288121-94b5-4438-a0d0-c416914c161c\") " pod="openshift-controller-manager/controller-manager-5f784d6689-qxpvh" Dec 09 08:49:16 crc kubenswrapper[4786]: I1209 08:49:16.165841 4786 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/96824228-c200-421d-a9b2-ae879f3f5674-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 09 08:49:16 crc kubenswrapper[4786]: I1209 08:49:16.165969 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96824228-c200-421d-a9b2-ae879f3f5674-config\") on node \"crc\" DevicePath \"\"" Dec 09 08:49:16 crc kubenswrapper[4786]: I1209 08:49:16.165983 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4hpg\" (UniqueName: \"kubernetes.io/projected/96824228-c200-421d-a9b2-ae879f3f5674-kube-api-access-r4hpg\") on node \"crc\" DevicePath \"\"" Dec 09 08:49:16 crc kubenswrapper[4786]: I1209 08:49:16.165996 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96824228-c200-421d-a9b2-ae879f3f5674-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 08:49:16 crc kubenswrapper[4786]: I1209 08:49:16.166032 4786 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/96824228-c200-421d-a9b2-ae879f3f5674-client-ca\") on 
node \"crc\" DevicePath \"\"" Dec 09 08:49:16 crc kubenswrapper[4786]: I1209 08:49:16.268293 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a288121-94b5-4438-a0d0-c416914c161c-config\") pod \"controller-manager-5f784d6689-qxpvh\" (UID: \"7a288121-94b5-4438-a0d0-c416914c161c\") " pod="openshift-controller-manager/controller-manager-5f784d6689-qxpvh" Dec 09 08:49:16 crc kubenswrapper[4786]: I1209 08:49:16.268361 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a288121-94b5-4438-a0d0-c416914c161c-proxy-ca-bundles\") pod \"controller-manager-5f784d6689-qxpvh\" (UID: \"7a288121-94b5-4438-a0d0-c416914c161c\") " pod="openshift-controller-manager/controller-manager-5f784d6689-qxpvh" Dec 09 08:49:16 crc kubenswrapper[4786]: I1209 08:49:16.268899 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqrn4\" (UniqueName: \"kubernetes.io/projected/7a288121-94b5-4438-a0d0-c416914c161c-kube-api-access-dqrn4\") pod \"controller-manager-5f784d6689-qxpvh\" (UID: \"7a288121-94b5-4438-a0d0-c416914c161c\") " pod="openshift-controller-manager/controller-manager-5f784d6689-qxpvh" Dec 09 08:49:16 crc kubenswrapper[4786]: I1209 08:49:16.269009 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a288121-94b5-4438-a0d0-c416914c161c-client-ca\") pod \"controller-manager-5f784d6689-qxpvh\" (UID: \"7a288121-94b5-4438-a0d0-c416914c161c\") " pod="openshift-controller-manager/controller-manager-5f784d6689-qxpvh" Dec 09 08:49:16 crc kubenswrapper[4786]: I1209 08:49:16.269036 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a288121-94b5-4438-a0d0-c416914c161c-serving-cert\") pod 
\"controller-manager-5f784d6689-qxpvh\" (UID: \"7a288121-94b5-4438-a0d0-c416914c161c\") " pod="openshift-controller-manager/controller-manager-5f784d6689-qxpvh" Dec 09 08:49:16 crc kubenswrapper[4786]: I1209 08:49:16.271085 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a288121-94b5-4438-a0d0-c416914c161c-config\") pod \"controller-manager-5f784d6689-qxpvh\" (UID: \"7a288121-94b5-4438-a0d0-c416914c161c\") " pod="openshift-controller-manager/controller-manager-5f784d6689-qxpvh" Dec 09 08:49:16 crc kubenswrapper[4786]: I1209 08:49:16.271728 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a288121-94b5-4438-a0d0-c416914c161c-proxy-ca-bundles\") pod \"controller-manager-5f784d6689-qxpvh\" (UID: \"7a288121-94b5-4438-a0d0-c416914c161c\") " pod="openshift-controller-manager/controller-manager-5f784d6689-qxpvh" Dec 09 08:49:16 crc kubenswrapper[4786]: I1209 08:49:16.273059 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a288121-94b5-4438-a0d0-c416914c161c-client-ca\") pod \"controller-manager-5f784d6689-qxpvh\" (UID: \"7a288121-94b5-4438-a0d0-c416914c161c\") " pod="openshift-controller-manager/controller-manager-5f784d6689-qxpvh" Dec 09 08:49:16 crc kubenswrapper[4786]: I1209 08:49:16.282445 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a288121-94b5-4438-a0d0-c416914c161c-serving-cert\") pod \"controller-manager-5f784d6689-qxpvh\" (UID: \"7a288121-94b5-4438-a0d0-c416914c161c\") " pod="openshift-controller-manager/controller-manager-5f784d6689-qxpvh" Dec 09 08:49:16 crc kubenswrapper[4786]: I1209 08:49:16.287143 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqrn4\" (UniqueName: 
\"kubernetes.io/projected/7a288121-94b5-4438-a0d0-c416914c161c-kube-api-access-dqrn4\") pod \"controller-manager-5f784d6689-qxpvh\" (UID: \"7a288121-94b5-4438-a0d0-c416914c161c\") " pod="openshift-controller-manager/controller-manager-5f784d6689-qxpvh" Dec 09 08:49:16 crc kubenswrapper[4786]: I1209 08:49:16.371305 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5f784d6689-qxpvh" Dec 09 08:49:16 crc kubenswrapper[4786]: I1209 08:49:16.549048 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5f784d6689-qxpvh"] Dec 09 08:49:16 crc kubenswrapper[4786]: I1209 08:49:16.906729 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-64f4b76bd8-j4cgp" event={"ID":"96824228-c200-421d-a9b2-ae879f3f5674","Type":"ContainerDied","Data":"4fb2bf7d44fda1949541ffd16cf435037dd4ba81385e6520073725a10a7f7cf7"} Dec 09 08:49:16 crc kubenswrapper[4786]: I1209 08:49:16.906752 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-64f4b76bd8-j4cgp" Dec 09 08:49:16 crc kubenswrapper[4786]: I1209 08:49:16.907177 4786 scope.go:117] "RemoveContainer" containerID="927e85f42d3bfba8c4770f1b79ea8f4301b7bb83db5632825ea9a88c4bf0ccc2" Dec 09 08:49:16 crc kubenswrapper[4786]: I1209 08:49:16.908289 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f784d6689-qxpvh" event={"ID":"7a288121-94b5-4438-a0d0-c416914c161c","Type":"ContainerStarted","Data":"98519755d0b4bd613f475bd8979b255db1f6a0689843253e17ecaf482c367b74"} Dec 09 08:49:16 crc kubenswrapper[4786]: I1209 08:49:16.908341 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f784d6689-qxpvh" event={"ID":"7a288121-94b5-4438-a0d0-c416914c161c","Type":"ContainerStarted","Data":"e568cc0d109a6bcc750dd081bcd0a6e5b0389d6c59b6c364c12f48d89afbe494"} Dec 09 08:49:16 crc kubenswrapper[4786]: I1209 08:49:16.909026 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5f784d6689-qxpvh" Dec 09 08:49:16 crc kubenswrapper[4786]: I1209 08:49:16.917813 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5f784d6689-qxpvh" Dec 09 08:49:16 crc kubenswrapper[4786]: I1209 08:49:16.929189 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5f784d6689-qxpvh" podStartSLOduration=4.929164376 podStartE2EDuration="4.929164376s" podCreationTimestamp="2025-12-09 08:49:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:49:16.927449454 +0000 UTC m=+322.811070680" watchObservedRunningTime="2025-12-09 08:49:16.929164376 +0000 UTC m=+322.812785612" Dec 09 08:49:16 crc 
kubenswrapper[4786]: I1209 08:49:16.941480 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-64f4b76bd8-j4cgp"] Dec 09 08:49:16 crc kubenswrapper[4786]: I1209 08:49:16.945185 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-64f4b76bd8-j4cgp"] Dec 09 08:49:17 crc kubenswrapper[4786]: I1209 08:49:17.194533 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96824228-c200-421d-a9b2-ae879f3f5674" path="/var/lib/kubelet/pods/96824228-c200-421d-a9b2-ae879f3f5674/volumes" Dec 09 08:49:21 crc kubenswrapper[4786]: I1209 08:49:21.734092 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-hhhsd"] Dec 09 08:49:21 crc kubenswrapper[4786]: I1209 08:49:21.736574 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-hhhsd" Dec 09 08:49:21 crc kubenswrapper[4786]: I1209 08:49:21.746183 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-hhhsd"] Dec 09 08:49:21 crc kubenswrapper[4786]: I1209 08:49:21.848649 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/28615161-74b3-41e8-b457-b23588b3af6d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-hhhsd\" (UID: \"28615161-74b3-41e8-b457-b23588b3af6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-hhhsd" Dec 09 08:49:21 crc kubenswrapper[4786]: I1209 08:49:21.848707 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/28615161-74b3-41e8-b457-b23588b3af6d-bound-sa-token\") pod \"image-registry-66df7c8f76-hhhsd\" (UID: \"28615161-74b3-41e8-b457-b23588b3af6d\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-hhhsd" Dec 09 08:49:21 crc kubenswrapper[4786]: I1209 08:49:21.848864 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/28615161-74b3-41e8-b457-b23588b3af6d-registry-tls\") pod \"image-registry-66df7c8f76-hhhsd\" (UID: \"28615161-74b3-41e8-b457-b23588b3af6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-hhhsd" Dec 09 08:49:21 crc kubenswrapper[4786]: I1209 08:49:21.848897 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/28615161-74b3-41e8-b457-b23588b3af6d-registry-certificates\") pod \"image-registry-66df7c8f76-hhhsd\" (UID: \"28615161-74b3-41e8-b457-b23588b3af6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-hhhsd" Dec 09 08:49:21 crc kubenswrapper[4786]: I1209 08:49:21.848962 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/28615161-74b3-41e8-b457-b23588b3af6d-trusted-ca\") pod \"image-registry-66df7c8f76-hhhsd\" (UID: \"28615161-74b3-41e8-b457-b23588b3af6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-hhhsd" Dec 09 08:49:21 crc kubenswrapper[4786]: I1209 08:49:21.849095 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-hhhsd\" (UID: \"28615161-74b3-41e8-b457-b23588b3af6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-hhhsd" Dec 09 08:49:21 crc kubenswrapper[4786]: I1209 08:49:21.849125 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" 
(UniqueName: \"kubernetes.io/empty-dir/28615161-74b3-41e8-b457-b23588b3af6d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-hhhsd\" (UID: \"28615161-74b3-41e8-b457-b23588b3af6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-hhhsd" Dec 09 08:49:21 crc kubenswrapper[4786]: I1209 08:49:21.849165 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpl7q\" (UniqueName: \"kubernetes.io/projected/28615161-74b3-41e8-b457-b23588b3af6d-kube-api-access-gpl7q\") pod \"image-registry-66df7c8f76-hhhsd\" (UID: \"28615161-74b3-41e8-b457-b23588b3af6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-hhhsd" Dec 09 08:49:21 crc kubenswrapper[4786]: I1209 08:49:21.875380 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-hhhsd\" (UID: \"28615161-74b3-41e8-b457-b23588b3af6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-hhhsd" Dec 09 08:49:21 crc kubenswrapper[4786]: I1209 08:49:21.950535 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/28615161-74b3-41e8-b457-b23588b3af6d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-hhhsd\" (UID: \"28615161-74b3-41e8-b457-b23588b3af6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-hhhsd" Dec 09 08:49:21 crc kubenswrapper[4786]: I1209 08:49:21.950790 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpl7q\" (UniqueName: \"kubernetes.io/projected/28615161-74b3-41e8-b457-b23588b3af6d-kube-api-access-gpl7q\") pod \"image-registry-66df7c8f76-hhhsd\" (UID: \"28615161-74b3-41e8-b457-b23588b3af6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-hhhsd" Dec 09 08:49:21 crc 
kubenswrapper[4786]: I1209 08:49:21.951029 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/28615161-74b3-41e8-b457-b23588b3af6d-ca-trust-extracted\") pod \"image-registry-66df7c8f76-hhhsd\" (UID: \"28615161-74b3-41e8-b457-b23588b3af6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-hhhsd" Dec 09 08:49:21 crc kubenswrapper[4786]: I1209 08:49:21.951595 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/28615161-74b3-41e8-b457-b23588b3af6d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-hhhsd\" (UID: \"28615161-74b3-41e8-b457-b23588b3af6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-hhhsd" Dec 09 08:49:21 crc kubenswrapper[4786]: I1209 08:49:21.951835 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/28615161-74b3-41e8-b457-b23588b3af6d-bound-sa-token\") pod \"image-registry-66df7c8f76-hhhsd\" (UID: \"28615161-74b3-41e8-b457-b23588b3af6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-hhhsd" Dec 09 08:49:21 crc kubenswrapper[4786]: I1209 08:49:21.952022 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/28615161-74b3-41e8-b457-b23588b3af6d-registry-tls\") pod \"image-registry-66df7c8f76-hhhsd\" (UID: \"28615161-74b3-41e8-b457-b23588b3af6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-hhhsd" Dec 09 08:49:21 crc kubenswrapper[4786]: I1209 08:49:21.952181 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/28615161-74b3-41e8-b457-b23588b3af6d-registry-certificates\") pod \"image-registry-66df7c8f76-hhhsd\" (UID: \"28615161-74b3-41e8-b457-b23588b3af6d\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-hhhsd" Dec 09 08:49:21 crc kubenswrapper[4786]: I1209 08:49:21.952290 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/28615161-74b3-41e8-b457-b23588b3af6d-trusted-ca\") pod \"image-registry-66df7c8f76-hhhsd\" (UID: \"28615161-74b3-41e8-b457-b23588b3af6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-hhhsd" Dec 09 08:49:21 crc kubenswrapper[4786]: I1209 08:49:21.953245 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/28615161-74b3-41e8-b457-b23588b3af6d-registry-certificates\") pod \"image-registry-66df7c8f76-hhhsd\" (UID: \"28615161-74b3-41e8-b457-b23588b3af6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-hhhsd" Dec 09 08:49:21 crc kubenswrapper[4786]: I1209 08:49:21.954061 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/28615161-74b3-41e8-b457-b23588b3af6d-trusted-ca\") pod \"image-registry-66df7c8f76-hhhsd\" (UID: \"28615161-74b3-41e8-b457-b23588b3af6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-hhhsd" Dec 09 08:49:21 crc kubenswrapper[4786]: I1209 08:49:21.956669 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/28615161-74b3-41e8-b457-b23588b3af6d-installation-pull-secrets\") pod \"image-registry-66df7c8f76-hhhsd\" (UID: \"28615161-74b3-41e8-b457-b23588b3af6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-hhhsd" Dec 09 08:49:21 crc kubenswrapper[4786]: I1209 08:49:21.956787 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/28615161-74b3-41e8-b457-b23588b3af6d-registry-tls\") pod \"image-registry-66df7c8f76-hhhsd\" (UID: 
\"28615161-74b3-41e8-b457-b23588b3af6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-hhhsd" Dec 09 08:49:21 crc kubenswrapper[4786]: I1209 08:49:21.968903 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/28615161-74b3-41e8-b457-b23588b3af6d-bound-sa-token\") pod \"image-registry-66df7c8f76-hhhsd\" (UID: \"28615161-74b3-41e8-b457-b23588b3af6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-hhhsd" Dec 09 08:49:21 crc kubenswrapper[4786]: I1209 08:49:21.969165 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpl7q\" (UniqueName: \"kubernetes.io/projected/28615161-74b3-41e8-b457-b23588b3af6d-kube-api-access-gpl7q\") pod \"image-registry-66df7c8f76-hhhsd\" (UID: \"28615161-74b3-41e8-b457-b23588b3af6d\") " pod="openshift-image-registry/image-registry-66df7c8f76-hhhsd" Dec 09 08:49:22 crc kubenswrapper[4786]: I1209 08:49:22.066811 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-hhhsd" Dec 09 08:49:22 crc kubenswrapper[4786]: I1209 08:49:22.534947 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-hhhsd"] Dec 09 08:49:22 crc kubenswrapper[4786]: I1209 08:49:22.957574 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-hhhsd" event={"ID":"28615161-74b3-41e8-b457-b23588b3af6d","Type":"ContainerStarted","Data":"f0cb81946cebcf8f0f172ee8c8738778b79bd7daa2dd85b3207f0afb6450de3b"} Dec 09 08:49:22 crc kubenswrapper[4786]: I1209 08:49:22.957954 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-hhhsd" Dec 09 08:49:22 crc kubenswrapper[4786]: I1209 08:49:22.957972 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-hhhsd" event={"ID":"28615161-74b3-41e8-b457-b23588b3af6d","Type":"ContainerStarted","Data":"66eac1cd55a26b5b7542a3d3a086dc38db8fca6592b4944edee4a1a791e54bd7"} Dec 09 08:49:22 crc kubenswrapper[4786]: I1209 08:49:22.978209 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-hhhsd" podStartSLOduration=1.978187753 podStartE2EDuration="1.978187753s" podCreationTimestamp="2025-12-09 08:49:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:49:22.977822914 +0000 UTC m=+328.861444140" watchObservedRunningTime="2025-12-09 08:49:22.978187753 +0000 UTC m=+328.861808989" Dec 09 08:49:42 crc kubenswrapper[4786]: I1209 08:49:42.075072 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-hhhsd" Dec 09 08:49:42 crc kubenswrapper[4786]: I1209 08:49:42.134330 4786 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-q5r5p"] Dec 09 08:49:49 crc kubenswrapper[4786]: I1209 08:49:49.305356 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-657648d9fc-wsngc"] Dec 09 08:49:49 crc kubenswrapper[4786]: I1209 08:49:49.306481 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-657648d9fc-wsngc" podUID="2cb9b960-1ba7-4b46-a6d2-0196186fbac2" containerName="route-controller-manager" containerID="cri-o://6963b84ebb449489281dbe02d91844bd37df8591b837acfbabda47d1900ab992" gracePeriod=30 Dec 09 08:49:49 crc kubenswrapper[4786]: I1209 08:49:49.772304 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-657648d9fc-wsngc" Dec 09 08:49:49 crc kubenswrapper[4786]: I1209 08:49:49.901869 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cb9b960-1ba7-4b46-a6d2-0196186fbac2-serving-cert\") pod \"2cb9b960-1ba7-4b46-a6d2-0196186fbac2\" (UID: \"2cb9b960-1ba7-4b46-a6d2-0196186fbac2\") " Dec 09 08:49:49 crc kubenswrapper[4786]: I1209 08:49:49.902053 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2cb9b960-1ba7-4b46-a6d2-0196186fbac2-client-ca\") pod \"2cb9b960-1ba7-4b46-a6d2-0196186fbac2\" (UID: \"2cb9b960-1ba7-4b46-a6d2-0196186fbac2\") " Dec 09 08:49:49 crc kubenswrapper[4786]: I1209 08:49:49.902092 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5t9j\" (UniqueName: \"kubernetes.io/projected/2cb9b960-1ba7-4b46-a6d2-0196186fbac2-kube-api-access-m5t9j\") pod \"2cb9b960-1ba7-4b46-a6d2-0196186fbac2\" (UID: 
\"2cb9b960-1ba7-4b46-a6d2-0196186fbac2\") " Dec 09 08:49:49 crc kubenswrapper[4786]: I1209 08:49:49.902122 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cb9b960-1ba7-4b46-a6d2-0196186fbac2-config\") pod \"2cb9b960-1ba7-4b46-a6d2-0196186fbac2\" (UID: \"2cb9b960-1ba7-4b46-a6d2-0196186fbac2\") " Dec 09 08:49:49 crc kubenswrapper[4786]: I1209 08:49:49.902979 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cb9b960-1ba7-4b46-a6d2-0196186fbac2-client-ca" (OuterVolumeSpecName: "client-ca") pod "2cb9b960-1ba7-4b46-a6d2-0196186fbac2" (UID: "2cb9b960-1ba7-4b46-a6d2-0196186fbac2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 08:49:49 crc kubenswrapper[4786]: I1209 08:49:49.903683 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2cb9b960-1ba7-4b46-a6d2-0196186fbac2-config" (OuterVolumeSpecName: "config") pod "2cb9b960-1ba7-4b46-a6d2-0196186fbac2" (UID: "2cb9b960-1ba7-4b46-a6d2-0196186fbac2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 08:49:49 crc kubenswrapper[4786]: I1209 08:49:49.908488 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cb9b960-1ba7-4b46-a6d2-0196186fbac2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2cb9b960-1ba7-4b46-a6d2-0196186fbac2" (UID: "2cb9b960-1ba7-4b46-a6d2-0196186fbac2"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 08:49:49 crc kubenswrapper[4786]: I1209 08:49:49.908698 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cb9b960-1ba7-4b46-a6d2-0196186fbac2-kube-api-access-m5t9j" (OuterVolumeSpecName: "kube-api-access-m5t9j") pod "2cb9b960-1ba7-4b46-a6d2-0196186fbac2" (UID: "2cb9b960-1ba7-4b46-a6d2-0196186fbac2"). InnerVolumeSpecName "kube-api-access-m5t9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 08:49:50 crc kubenswrapper[4786]: I1209 08:49:50.002998 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2cb9b960-1ba7-4b46-a6d2-0196186fbac2-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 08:49:50 crc kubenswrapper[4786]: I1209 08:49:50.003331 4786 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2cb9b960-1ba7-4b46-a6d2-0196186fbac2-client-ca\") on node \"crc\" DevicePath \"\"" Dec 09 08:49:50 crc kubenswrapper[4786]: I1209 08:49:50.003344 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5t9j\" (UniqueName: \"kubernetes.io/projected/2cb9b960-1ba7-4b46-a6d2-0196186fbac2-kube-api-access-m5t9j\") on node \"crc\" DevicePath \"\"" Dec 09 08:49:50 crc kubenswrapper[4786]: I1209 08:49:50.003359 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2cb9b960-1ba7-4b46-a6d2-0196186fbac2-config\") on node \"crc\" DevicePath \"\"" Dec 09 08:49:50 crc kubenswrapper[4786]: I1209 08:49:50.124657 4786 generic.go:334] "Generic (PLEG): container finished" podID="2cb9b960-1ba7-4b46-a6d2-0196186fbac2" containerID="6963b84ebb449489281dbe02d91844bd37df8591b837acfbabda47d1900ab992" exitCode=0 Dec 09 08:49:50 crc kubenswrapper[4786]: I1209 08:49:50.124712 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-657648d9fc-wsngc" event={"ID":"2cb9b960-1ba7-4b46-a6d2-0196186fbac2","Type":"ContainerDied","Data":"6963b84ebb449489281dbe02d91844bd37df8591b837acfbabda47d1900ab992"} Dec 09 08:49:50 crc kubenswrapper[4786]: I1209 08:49:50.124784 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-657648d9fc-wsngc" event={"ID":"2cb9b960-1ba7-4b46-a6d2-0196186fbac2","Type":"ContainerDied","Data":"3dc6e27d3af35cba5207fc37be1758aaea15b4aca669af352444d0f86831d2ae"} Dec 09 08:49:50 crc kubenswrapper[4786]: I1209 08:49:50.124811 4786 scope.go:117] "RemoveContainer" containerID="6963b84ebb449489281dbe02d91844bd37df8591b837acfbabda47d1900ab992" Dec 09 08:49:50 crc kubenswrapper[4786]: I1209 08:49:50.124865 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-657648d9fc-wsngc" Dec 09 08:49:50 crc kubenswrapper[4786]: I1209 08:49:50.150943 4786 scope.go:117] "RemoveContainer" containerID="6963b84ebb449489281dbe02d91844bd37df8591b837acfbabda47d1900ab992" Dec 09 08:49:50 crc kubenswrapper[4786]: E1209 08:49:50.151479 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6963b84ebb449489281dbe02d91844bd37df8591b837acfbabda47d1900ab992\": container with ID starting with 6963b84ebb449489281dbe02d91844bd37df8591b837acfbabda47d1900ab992 not found: ID does not exist" containerID="6963b84ebb449489281dbe02d91844bd37df8591b837acfbabda47d1900ab992" Dec 09 08:49:50 crc kubenswrapper[4786]: I1209 08:49:50.151541 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6963b84ebb449489281dbe02d91844bd37df8591b837acfbabda47d1900ab992"} err="failed to get container status \"6963b84ebb449489281dbe02d91844bd37df8591b837acfbabda47d1900ab992\": rpc error: code = NotFound 
desc = could not find container \"6963b84ebb449489281dbe02d91844bd37df8591b837acfbabda47d1900ab992\": container with ID starting with 6963b84ebb449489281dbe02d91844bd37df8591b837acfbabda47d1900ab992 not found: ID does not exist" Dec 09 08:49:50 crc kubenswrapper[4786]: I1209 08:49:50.165149 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-657648d9fc-wsngc"] Dec 09 08:49:50 crc kubenswrapper[4786]: I1209 08:49:50.174090 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-657648d9fc-wsngc"] Dec 09 08:49:50 crc kubenswrapper[4786]: I1209 08:49:50.473879 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b66c684b5-xkfbd"] Dec 09 08:49:50 crc kubenswrapper[4786]: E1209 08:49:50.474269 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cb9b960-1ba7-4b46-a6d2-0196186fbac2" containerName="route-controller-manager" Dec 09 08:49:50 crc kubenswrapper[4786]: I1209 08:49:50.474309 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cb9b960-1ba7-4b46-a6d2-0196186fbac2" containerName="route-controller-manager" Dec 09 08:49:50 crc kubenswrapper[4786]: I1209 08:49:50.474564 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cb9b960-1ba7-4b46-a6d2-0196186fbac2" containerName="route-controller-manager" Dec 09 08:49:50 crc kubenswrapper[4786]: I1209 08:49:50.475048 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b66c684b5-xkfbd" Dec 09 08:49:50 crc kubenswrapper[4786]: I1209 08:49:50.478596 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 09 08:49:50 crc kubenswrapper[4786]: I1209 08:49:50.478702 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 09 08:49:50 crc kubenswrapper[4786]: I1209 08:49:50.478613 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 09 08:49:50 crc kubenswrapper[4786]: I1209 08:49:50.480672 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 09 08:49:50 crc kubenswrapper[4786]: I1209 08:49:50.480761 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 09 08:49:50 crc kubenswrapper[4786]: I1209 08:49:50.480928 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 09 08:49:50 crc kubenswrapper[4786]: I1209 08:49:50.487222 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b66c684b5-xkfbd"] Dec 09 08:49:50 crc kubenswrapper[4786]: I1209 08:49:50.610727 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fvwb\" (UniqueName: \"kubernetes.io/projected/000025ec-3051-4d79-97ff-ec0554b26a88-kube-api-access-6fvwb\") pod \"route-controller-manager-7b66c684b5-xkfbd\" (UID: \"000025ec-3051-4d79-97ff-ec0554b26a88\") " pod="openshift-route-controller-manager/route-controller-manager-7b66c684b5-xkfbd" Dec 09 08:49:50 crc kubenswrapper[4786]: I1209 08:49:50.611910 4786 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/000025ec-3051-4d79-97ff-ec0554b26a88-client-ca\") pod \"route-controller-manager-7b66c684b5-xkfbd\" (UID: \"000025ec-3051-4d79-97ff-ec0554b26a88\") " pod="openshift-route-controller-manager/route-controller-manager-7b66c684b5-xkfbd" Dec 09 08:49:50 crc kubenswrapper[4786]: I1209 08:49:50.612230 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/000025ec-3051-4d79-97ff-ec0554b26a88-config\") pod \"route-controller-manager-7b66c684b5-xkfbd\" (UID: \"000025ec-3051-4d79-97ff-ec0554b26a88\") " pod="openshift-route-controller-manager/route-controller-manager-7b66c684b5-xkfbd" Dec 09 08:49:50 crc kubenswrapper[4786]: I1209 08:49:50.612406 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/000025ec-3051-4d79-97ff-ec0554b26a88-serving-cert\") pod \"route-controller-manager-7b66c684b5-xkfbd\" (UID: \"000025ec-3051-4d79-97ff-ec0554b26a88\") " pod="openshift-route-controller-manager/route-controller-manager-7b66c684b5-xkfbd" Dec 09 08:49:50 crc kubenswrapper[4786]: I1209 08:49:50.726147 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/000025ec-3051-4d79-97ff-ec0554b26a88-config\") pod \"route-controller-manager-7b66c684b5-xkfbd\" (UID: \"000025ec-3051-4d79-97ff-ec0554b26a88\") " pod="openshift-route-controller-manager/route-controller-manager-7b66c684b5-xkfbd" Dec 09 08:49:50 crc kubenswrapper[4786]: I1209 08:49:50.726208 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/000025ec-3051-4d79-97ff-ec0554b26a88-serving-cert\") pod \"route-controller-manager-7b66c684b5-xkfbd\" 
(UID: \"000025ec-3051-4d79-97ff-ec0554b26a88\") " pod="openshift-route-controller-manager/route-controller-manager-7b66c684b5-xkfbd" Dec 09 08:49:50 crc kubenswrapper[4786]: I1209 08:49:50.726244 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fvwb\" (UniqueName: \"kubernetes.io/projected/000025ec-3051-4d79-97ff-ec0554b26a88-kube-api-access-6fvwb\") pod \"route-controller-manager-7b66c684b5-xkfbd\" (UID: \"000025ec-3051-4d79-97ff-ec0554b26a88\") " pod="openshift-route-controller-manager/route-controller-manager-7b66c684b5-xkfbd" Dec 09 08:49:50 crc kubenswrapper[4786]: I1209 08:49:50.726281 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/000025ec-3051-4d79-97ff-ec0554b26a88-client-ca\") pod \"route-controller-manager-7b66c684b5-xkfbd\" (UID: \"000025ec-3051-4d79-97ff-ec0554b26a88\") " pod="openshift-route-controller-manager/route-controller-manager-7b66c684b5-xkfbd" Dec 09 08:49:50 crc kubenswrapper[4786]: I1209 08:49:50.727189 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/000025ec-3051-4d79-97ff-ec0554b26a88-client-ca\") pod \"route-controller-manager-7b66c684b5-xkfbd\" (UID: \"000025ec-3051-4d79-97ff-ec0554b26a88\") " pod="openshift-route-controller-manager/route-controller-manager-7b66c684b5-xkfbd" Dec 09 08:49:50 crc kubenswrapper[4786]: I1209 08:49:50.728176 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/000025ec-3051-4d79-97ff-ec0554b26a88-config\") pod \"route-controller-manager-7b66c684b5-xkfbd\" (UID: \"000025ec-3051-4d79-97ff-ec0554b26a88\") " pod="openshift-route-controller-manager/route-controller-manager-7b66c684b5-xkfbd" Dec 09 08:49:50 crc kubenswrapper[4786]: I1209 08:49:50.745529 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/000025ec-3051-4d79-97ff-ec0554b26a88-serving-cert\") pod \"route-controller-manager-7b66c684b5-xkfbd\" (UID: \"000025ec-3051-4d79-97ff-ec0554b26a88\") " pod="openshift-route-controller-manager/route-controller-manager-7b66c684b5-xkfbd" Dec 09 08:49:50 crc kubenswrapper[4786]: I1209 08:49:50.759668 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fvwb\" (UniqueName: \"kubernetes.io/projected/000025ec-3051-4d79-97ff-ec0554b26a88-kube-api-access-6fvwb\") pod \"route-controller-manager-7b66c684b5-xkfbd\" (UID: \"000025ec-3051-4d79-97ff-ec0554b26a88\") " pod="openshift-route-controller-manager/route-controller-manager-7b66c684b5-xkfbd" Dec 09 08:49:50 crc kubenswrapper[4786]: I1209 08:49:50.815178 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7b66c684b5-xkfbd" Dec 09 08:49:51 crc kubenswrapper[4786]: I1209 08:49:51.201163 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cb9b960-1ba7-4b46-a6d2-0196186fbac2" path="/var/lib/kubelet/pods/2cb9b960-1ba7-4b46-a6d2-0196186fbac2/volumes" Dec 09 08:49:51 crc kubenswrapper[4786]: I1209 08:49:51.260378 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7b66c684b5-xkfbd"] Dec 09 08:49:52 crc kubenswrapper[4786]: I1209 08:49:52.139958 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b66c684b5-xkfbd" event={"ID":"000025ec-3051-4d79-97ff-ec0554b26a88","Type":"ContainerStarted","Data":"68f45ce9300e2f57f383870716bc2dc9da7cb52854fd9260daf227d5cfd591a2"} Dec 09 08:49:52 crc kubenswrapper[4786]: I1209 08:49:52.140313 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7b66c684b5-xkfbd" 
event={"ID":"000025ec-3051-4d79-97ff-ec0554b26a88","Type":"ContainerStarted","Data":"6ffc5a1381b930fd4ac17962f8f9b7d368824e4b667bb2d2020ae410cbd7aab8"} Dec 09 08:49:52 crc kubenswrapper[4786]: I1209 08:49:52.140338 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7b66c684b5-xkfbd" Dec 09 08:49:52 crc kubenswrapper[4786]: I1209 08:49:52.148716 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7b66c684b5-xkfbd" Dec 09 08:49:52 crc kubenswrapper[4786]: I1209 08:49:52.158131 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7b66c684b5-xkfbd" podStartSLOduration=3.15811131 podStartE2EDuration="3.15811131s" podCreationTimestamp="2025-12-09 08:49:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:49:52.155680078 +0000 UTC m=+358.039301384" watchObservedRunningTime="2025-12-09 08:49:52.15811131 +0000 UTC m=+358.041732546" Dec 09 08:49:54 crc kubenswrapper[4786]: I1209 08:49:54.744969 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g4qx2"] Dec 09 08:49:54 crc kubenswrapper[4786]: I1209 08:49:54.745883 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-g4qx2" podUID="7a4abbd7-999e-4b15-bfc1-a93939734b36" containerName="registry-server" containerID="cri-o://da14222d5521c04bcdb066364819347a664312f01a73d8394cbd8fa1e5b9c751" gracePeriod=30 Dec 09 08:49:54 crc kubenswrapper[4786]: I1209 08:49:54.763367 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4rm92"] Dec 09 08:49:54 crc kubenswrapper[4786]: I1209 08:49:54.763702 4786 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4rm92" podUID="7388aae9-507a-42ff-84cb-9860de1f9f84" containerName="registry-server" containerID="cri-o://254a5067697310fa5d98aa71f20fff0a7326c8f926cd2fead0878f7650823fc8" gracePeriod=30 Dec 09 08:49:54 crc kubenswrapper[4786]: I1209 08:49:54.770694 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-thfng"] Dec 09 08:49:54 crc kubenswrapper[4786]: I1209 08:49:54.771057 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-thfng" podUID="b128968a-caa6-46be-be15-79971a310e5c" containerName="marketplace-operator" containerID="cri-o://a0340fc378e6c3a380597a7b45ad1e5bca554dce19e16f1fd545cefc886d235f" gracePeriod=30 Dec 09 08:49:54 crc kubenswrapper[4786]: I1209 08:49:54.781791 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9274k"] Dec 09 08:49:54 crc kubenswrapper[4786]: I1209 08:49:54.782097 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9274k" podUID="621ad974-644c-45fc-a6ba-045ca1f9e033" containerName="registry-server" containerID="cri-o://7c0b69be8677e792650faf38d3fb25a65f450e824cd8cc8afeb22bc8b5de2f8d" gracePeriod=30 Dec 09 08:49:54 crc kubenswrapper[4786]: I1209 08:49:54.799841 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d9b7s"] Dec 09 08:49:54 crc kubenswrapper[4786]: I1209 08:49:54.800177 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-d9b7s" podUID="7d9968ce-71c9-4b6d-912f-5f03be10945d" containerName="registry-server" containerID="cri-o://063afdb10dd4fedda5caea0b4521eb75980833ecfbe40f58e3dcbe4b460ab8d3" gracePeriod=30 Dec 09 08:49:54 crc kubenswrapper[4786]: 
I1209 08:49:54.803361 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-q75q9"] Dec 09 08:49:54 crc kubenswrapper[4786]: I1209 08:49:54.804437 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-q75q9" Dec 09 08:49:54 crc kubenswrapper[4786]: I1209 08:49:54.821762 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-q75q9"] Dec 09 08:49:54 crc kubenswrapper[4786]: I1209 08:49:54.868623 4786 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-thfng container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Dec 09 08:49:54 crc kubenswrapper[4786]: I1209 08:49:54.868714 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-thfng" podUID="b128968a-caa6-46be-be15-79971a310e5c" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" Dec 09 08:49:54 crc kubenswrapper[4786]: I1209 08:49:54.898163 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gg8hr\" (UniqueName: \"kubernetes.io/projected/a62ae8ec-1904-4074-9c5d-76d6bde47df8-kube-api-access-gg8hr\") pod \"marketplace-operator-79b997595-q75q9\" (UID: \"a62ae8ec-1904-4074-9c5d-76d6bde47df8\") " pod="openshift-marketplace/marketplace-operator-79b997595-q75q9" Dec 09 08:49:54 crc kubenswrapper[4786]: I1209 08:49:54.898295 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/a62ae8ec-1904-4074-9c5d-76d6bde47df8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-q75q9\" (UID: \"a62ae8ec-1904-4074-9c5d-76d6bde47df8\") " pod="openshift-marketplace/marketplace-operator-79b997595-q75q9" Dec 09 08:49:54 crc kubenswrapper[4786]: I1209 08:49:54.898653 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a62ae8ec-1904-4074-9c5d-76d6bde47df8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-q75q9\" (UID: \"a62ae8ec-1904-4074-9c5d-76d6bde47df8\") " pod="openshift-marketplace/marketplace-operator-79b997595-q75q9" Dec 09 08:49:54 crc kubenswrapper[4786]: I1209 08:49:54.989089 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 08:49:54 crc kubenswrapper[4786]: I1209 08:49:54.989322 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 08:49:55 crc kubenswrapper[4786]: I1209 08:49:55.000526 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gg8hr\" (UniqueName: \"kubernetes.io/projected/a62ae8ec-1904-4074-9c5d-76d6bde47df8-kube-api-access-gg8hr\") pod \"marketplace-operator-79b997595-q75q9\" (UID: \"a62ae8ec-1904-4074-9c5d-76d6bde47df8\") " pod="openshift-marketplace/marketplace-operator-79b997595-q75q9" Dec 09 08:49:55 crc kubenswrapper[4786]: I1209 08:49:55.000606 4786 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a62ae8ec-1904-4074-9c5d-76d6bde47df8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-q75q9\" (UID: \"a62ae8ec-1904-4074-9c5d-76d6bde47df8\") " pod="openshift-marketplace/marketplace-operator-79b997595-q75q9" Dec 09 08:49:55 crc kubenswrapper[4786]: I1209 08:49:55.000641 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a62ae8ec-1904-4074-9c5d-76d6bde47df8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-q75q9\" (UID: \"a62ae8ec-1904-4074-9c5d-76d6bde47df8\") " pod="openshift-marketplace/marketplace-operator-79b997595-q75q9" Dec 09 08:49:55 crc kubenswrapper[4786]: I1209 08:49:55.002622 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a62ae8ec-1904-4074-9c5d-76d6bde47df8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-q75q9\" (UID: \"a62ae8ec-1904-4074-9c5d-76d6bde47df8\") " pod="openshift-marketplace/marketplace-operator-79b997595-q75q9" Dec 09 08:49:55 crc kubenswrapper[4786]: I1209 08:49:55.008542 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/a62ae8ec-1904-4074-9c5d-76d6bde47df8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-q75q9\" (UID: \"a62ae8ec-1904-4074-9c5d-76d6bde47df8\") " pod="openshift-marketplace/marketplace-operator-79b997595-q75q9" Dec 09 08:49:55 crc kubenswrapper[4786]: I1209 08:49:55.023108 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gg8hr\" (UniqueName: \"kubernetes.io/projected/a62ae8ec-1904-4074-9c5d-76d6bde47df8-kube-api-access-gg8hr\") pod \"marketplace-operator-79b997595-q75q9\" (UID: \"a62ae8ec-1904-4074-9c5d-76d6bde47df8\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-q75q9" Dec 09 08:49:55 crc kubenswrapper[4786]: I1209 08:49:55.148014 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-q75q9" Dec 09 08:49:55 crc kubenswrapper[4786]: E1209 08:49:55.178573 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 254a5067697310fa5d98aa71f20fff0a7326c8f926cd2fead0878f7650823fc8 is running failed: container process not found" containerID="254a5067697310fa5d98aa71f20fff0a7326c8f926cd2fead0878f7650823fc8" cmd=["grpc_health_probe","-addr=:50051"] Dec 09 08:49:55 crc kubenswrapper[4786]: E1209 08:49:55.181445 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 254a5067697310fa5d98aa71f20fff0a7326c8f926cd2fead0878f7650823fc8 is running failed: container process not found" containerID="254a5067697310fa5d98aa71f20fff0a7326c8f926cd2fead0878f7650823fc8" cmd=["grpc_health_probe","-addr=:50051"] Dec 09 08:49:55 crc kubenswrapper[4786]: E1209 08:49:55.181789 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 254a5067697310fa5d98aa71f20fff0a7326c8f926cd2fead0878f7650823fc8 is running failed: container process not found" containerID="254a5067697310fa5d98aa71f20fff0a7326c8f926cd2fead0878f7650823fc8" cmd=["grpc_health_probe","-addr=:50051"] Dec 09 08:49:55 crc kubenswrapper[4786]: E1209 08:49:55.181820 4786 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 254a5067697310fa5d98aa71f20fff0a7326c8f926cd2fead0878f7650823fc8 is running failed: container process not found" probeType="Readiness" 
pod="openshift-marketplace/community-operators-4rm92" podUID="7388aae9-507a-42ff-84cb-9860de1f9f84" containerName="registry-server" Dec 09 08:49:55 crc kubenswrapper[4786]: I1209 08:49:55.215019 4786 generic.go:334] "Generic (PLEG): container finished" podID="7a4abbd7-999e-4b15-bfc1-a93939734b36" containerID="da14222d5521c04bcdb066364819347a664312f01a73d8394cbd8fa1e5b9c751" exitCode=0 Dec 09 08:49:55 crc kubenswrapper[4786]: I1209 08:49:55.215100 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g4qx2" event={"ID":"7a4abbd7-999e-4b15-bfc1-a93939734b36","Type":"ContainerDied","Data":"da14222d5521c04bcdb066364819347a664312f01a73d8394cbd8fa1e5b9c751"} Dec 09 08:49:55 crc kubenswrapper[4786]: I1209 08:49:55.242939 4786 generic.go:334] "Generic (PLEG): container finished" podID="621ad974-644c-45fc-a6ba-045ca1f9e033" containerID="7c0b69be8677e792650faf38d3fb25a65f450e824cd8cc8afeb22bc8b5de2f8d" exitCode=0 Dec 09 08:49:55 crc kubenswrapper[4786]: I1209 08:49:55.246595 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9274k" event={"ID":"621ad974-644c-45fc-a6ba-045ca1f9e033","Type":"ContainerDied","Data":"7c0b69be8677e792650faf38d3fb25a65f450e824cd8cc8afeb22bc8b5de2f8d"} Dec 09 08:49:55 crc kubenswrapper[4786]: I1209 08:49:55.254070 4786 generic.go:334] "Generic (PLEG): container finished" podID="7388aae9-507a-42ff-84cb-9860de1f9f84" containerID="254a5067697310fa5d98aa71f20fff0a7326c8f926cd2fead0878f7650823fc8" exitCode=0 Dec 09 08:49:55 crc kubenswrapper[4786]: I1209 08:49:55.254137 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4rm92" event={"ID":"7388aae9-507a-42ff-84cb-9860de1f9f84","Type":"ContainerDied","Data":"254a5067697310fa5d98aa71f20fff0a7326c8f926cd2fead0878f7650823fc8"} Dec 09 08:49:55 crc kubenswrapper[4786]: I1209 08:49:55.257223 4786 generic.go:334] "Generic (PLEG): container finished" 
podID="b128968a-caa6-46be-be15-79971a310e5c" containerID="a0340fc378e6c3a380597a7b45ad1e5bca554dce19e16f1fd545cefc886d235f" exitCode=0 Dec 09 08:49:55 crc kubenswrapper[4786]: I1209 08:49:55.257278 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-thfng" event={"ID":"b128968a-caa6-46be-be15-79971a310e5c","Type":"ContainerDied","Data":"a0340fc378e6c3a380597a7b45ad1e5bca554dce19e16f1fd545cefc886d235f"} Dec 09 08:49:55 crc kubenswrapper[4786]: I1209 08:49:55.257309 4786 scope.go:117] "RemoveContainer" containerID="8b284375f145834322161af06139a89e6edbd55a682391331b64b5ffd97b6e9a" Dec 09 08:49:55 crc kubenswrapper[4786]: I1209 08:49:55.267229 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g4qx2" Dec 09 08:49:55 crc kubenswrapper[4786]: I1209 08:49:55.368360 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-thfng" Dec 09 08:49:55 crc kubenswrapper[4786]: I1209 08:49:55.369699 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9274k" Dec 09 08:49:55 crc kubenswrapper[4786]: I1209 08:49:55.407652 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a4abbd7-999e-4b15-bfc1-a93939734b36-catalog-content\") pod \"7a4abbd7-999e-4b15-bfc1-a93939734b36\" (UID: \"7a4abbd7-999e-4b15-bfc1-a93939734b36\") " Dec 09 08:49:55 crc kubenswrapper[4786]: I1209 08:49:55.407696 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxbz7\" (UniqueName: \"kubernetes.io/projected/7a4abbd7-999e-4b15-bfc1-a93939734b36-kube-api-access-rxbz7\") pod \"7a4abbd7-999e-4b15-bfc1-a93939734b36\" (UID: \"7a4abbd7-999e-4b15-bfc1-a93939734b36\") " Dec 09 08:49:55 crc kubenswrapper[4786]: I1209 08:49:55.407824 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a4abbd7-999e-4b15-bfc1-a93939734b36-utilities\") pod \"7a4abbd7-999e-4b15-bfc1-a93939734b36\" (UID: \"7a4abbd7-999e-4b15-bfc1-a93939734b36\") " Dec 09 08:49:55 crc kubenswrapper[4786]: I1209 08:49:55.408886 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a4abbd7-999e-4b15-bfc1-a93939734b36-utilities" (OuterVolumeSpecName: "utilities") pod "7a4abbd7-999e-4b15-bfc1-a93939734b36" (UID: "7a4abbd7-999e-4b15-bfc1-a93939734b36"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 08:49:55 crc kubenswrapper[4786]: I1209 08:49:55.415085 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a4abbd7-999e-4b15-bfc1-a93939734b36-kube-api-access-rxbz7" (OuterVolumeSpecName: "kube-api-access-rxbz7") pod "7a4abbd7-999e-4b15-bfc1-a93939734b36" (UID: "7a4abbd7-999e-4b15-bfc1-a93939734b36"). InnerVolumeSpecName "kube-api-access-rxbz7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 08:49:55 crc kubenswrapper[4786]: I1209 08:49:55.468140 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7a4abbd7-999e-4b15-bfc1-a93939734b36-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7a4abbd7-999e-4b15-bfc1-a93939734b36" (UID: "7a4abbd7-999e-4b15-bfc1-a93939734b36"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 08:49:55 crc kubenswrapper[4786]: I1209 08:49:55.473628 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-q75q9"] Dec 09 08:49:55 crc kubenswrapper[4786]: I1209 08:49:55.509549 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/621ad974-644c-45fc-a6ba-045ca1f9e033-utilities\") pod \"621ad974-644c-45fc-a6ba-045ca1f9e033\" (UID: \"621ad974-644c-45fc-a6ba-045ca1f9e033\") " Dec 09 08:49:55 crc kubenswrapper[4786]: I1209 08:49:55.509692 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q69gw\" (UniqueName: \"kubernetes.io/projected/b128968a-caa6-46be-be15-79971a310e5c-kube-api-access-q69gw\") pod \"b128968a-caa6-46be-be15-79971a310e5c\" (UID: \"b128968a-caa6-46be-be15-79971a310e5c\") " Dec 09 08:49:55 crc kubenswrapper[4786]: I1209 08:49:55.509733 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4jdq\" (UniqueName: \"kubernetes.io/projected/621ad974-644c-45fc-a6ba-045ca1f9e033-kube-api-access-l4jdq\") pod \"621ad974-644c-45fc-a6ba-045ca1f9e033\" (UID: \"621ad974-644c-45fc-a6ba-045ca1f9e033\") " Dec 09 08:49:55 crc kubenswrapper[4786]: I1209 08:49:55.509776 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/621ad974-644c-45fc-a6ba-045ca1f9e033-catalog-content\") pod \"621ad974-644c-45fc-a6ba-045ca1f9e033\" (UID: \"621ad974-644c-45fc-a6ba-045ca1f9e033\") " Dec 09 08:49:55 crc kubenswrapper[4786]: I1209 08:49:55.509801 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b128968a-caa6-46be-be15-79971a310e5c-marketplace-trusted-ca\") pod \"b128968a-caa6-46be-be15-79971a310e5c\" (UID: \"b128968a-caa6-46be-be15-79971a310e5c\") " Dec 09 08:49:55 crc kubenswrapper[4786]: I1209 08:49:55.509842 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b128968a-caa6-46be-be15-79971a310e5c-marketplace-operator-metrics\") pod \"b128968a-caa6-46be-be15-79971a310e5c\" (UID: \"b128968a-caa6-46be-be15-79971a310e5c\") " Dec 09 08:49:55 crc kubenswrapper[4786]: I1209 08:49:55.510258 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7a4abbd7-999e-4b15-bfc1-a93939734b36-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 08:49:55 crc kubenswrapper[4786]: I1209 08:49:55.510273 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7a4abbd7-999e-4b15-bfc1-a93939734b36-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 08:49:55 crc kubenswrapper[4786]: I1209 08:49:55.510287 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxbz7\" (UniqueName: \"kubernetes.io/projected/7a4abbd7-999e-4b15-bfc1-a93939734b36-kube-api-access-rxbz7\") on node \"crc\" DevicePath \"\"" Dec 09 08:49:55 crc kubenswrapper[4786]: I1209 08:49:55.513959 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b128968a-caa6-46be-be15-79971a310e5c-marketplace-operator-metrics" (OuterVolumeSpecName: 
"marketplace-operator-metrics") pod "b128968a-caa6-46be-be15-79971a310e5c" (UID: "b128968a-caa6-46be-be15-79971a310e5c"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 08:49:55 crc kubenswrapper[4786]: I1209 08:49:55.514274 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b128968a-caa6-46be-be15-79971a310e5c-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b128968a-caa6-46be-be15-79971a310e5c" (UID: "b128968a-caa6-46be-be15-79971a310e5c"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 08:49:55 crc kubenswrapper[4786]: I1209 08:49:55.515486 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/621ad974-644c-45fc-a6ba-045ca1f9e033-utilities" (OuterVolumeSpecName: "utilities") pod "621ad974-644c-45fc-a6ba-045ca1f9e033" (UID: "621ad974-644c-45fc-a6ba-045ca1f9e033"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 08:49:55 crc kubenswrapper[4786]: I1209 08:49:55.518342 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b128968a-caa6-46be-be15-79971a310e5c-kube-api-access-q69gw" (OuterVolumeSpecName: "kube-api-access-q69gw") pod "b128968a-caa6-46be-be15-79971a310e5c" (UID: "b128968a-caa6-46be-be15-79971a310e5c"). InnerVolumeSpecName "kube-api-access-q69gw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 08:49:55 crc kubenswrapper[4786]: I1209 08:49:55.525932 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/621ad974-644c-45fc-a6ba-045ca1f9e033-kube-api-access-l4jdq" (OuterVolumeSpecName: "kube-api-access-l4jdq") pod "621ad974-644c-45fc-a6ba-045ca1f9e033" (UID: "621ad974-644c-45fc-a6ba-045ca1f9e033"). InnerVolumeSpecName "kube-api-access-l4jdq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 08:49:55 crc kubenswrapper[4786]: I1209 08:49:55.577727 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/621ad974-644c-45fc-a6ba-045ca1f9e033-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "621ad974-644c-45fc-a6ba-045ca1f9e033" (UID: "621ad974-644c-45fc-a6ba-045ca1f9e033"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 08:49:55 crc kubenswrapper[4786]: I1209 08:49:55.615621 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q69gw\" (UniqueName: \"kubernetes.io/projected/b128968a-caa6-46be-be15-79971a310e5c-kube-api-access-q69gw\") on node \"crc\" DevicePath \"\"" Dec 09 08:49:55 crc kubenswrapper[4786]: I1209 08:49:55.615662 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4jdq\" (UniqueName: \"kubernetes.io/projected/621ad974-644c-45fc-a6ba-045ca1f9e033-kube-api-access-l4jdq\") on node \"crc\" DevicePath \"\"" Dec 09 08:49:55 crc kubenswrapper[4786]: I1209 08:49:55.615672 4786 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b128968a-caa6-46be-be15-79971a310e5c-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 08:49:55 crc kubenswrapper[4786]: I1209 08:49:55.615682 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/621ad974-644c-45fc-a6ba-045ca1f9e033-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 08:49:55 crc kubenswrapper[4786]: I1209 08:49:55.615692 4786 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b128968a-caa6-46be-be15-79971a310e5c-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 09 08:49:55 crc kubenswrapper[4786]: I1209 08:49:55.615702 4786 
reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/621ad974-644c-45fc-a6ba-045ca1f9e033-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 08:49:55 crc kubenswrapper[4786]: I1209 08:49:55.669783 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4rm92" Dec 09 08:49:55 crc kubenswrapper[4786]: I1209 08:49:55.817283 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7388aae9-507a-42ff-84cb-9860de1f9f84-catalog-content\") pod \"7388aae9-507a-42ff-84cb-9860de1f9f84\" (UID: \"7388aae9-507a-42ff-84cb-9860de1f9f84\") " Dec 09 08:49:55 crc kubenswrapper[4786]: I1209 08:49:55.817380 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7388aae9-507a-42ff-84cb-9860de1f9f84-utilities\") pod \"7388aae9-507a-42ff-84cb-9860de1f9f84\" (UID: \"7388aae9-507a-42ff-84cb-9860de1f9f84\") " Dec 09 08:49:55 crc kubenswrapper[4786]: I1209 08:49:55.817449 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rdd8\" (UniqueName: \"kubernetes.io/projected/7388aae9-507a-42ff-84cb-9860de1f9f84-kube-api-access-8rdd8\") pod \"7388aae9-507a-42ff-84cb-9860de1f9f84\" (UID: \"7388aae9-507a-42ff-84cb-9860de1f9f84\") " Dec 09 08:49:55 crc kubenswrapper[4786]: I1209 08:49:55.818978 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7388aae9-507a-42ff-84cb-9860de1f9f84-utilities" (OuterVolumeSpecName: "utilities") pod "7388aae9-507a-42ff-84cb-9860de1f9f84" (UID: "7388aae9-507a-42ff-84cb-9860de1f9f84"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 08:49:55 crc kubenswrapper[4786]: I1209 08:49:55.822680 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7388aae9-507a-42ff-84cb-9860de1f9f84-kube-api-access-8rdd8" (OuterVolumeSpecName: "kube-api-access-8rdd8") pod "7388aae9-507a-42ff-84cb-9860de1f9f84" (UID: "7388aae9-507a-42ff-84cb-9860de1f9f84"). InnerVolumeSpecName "kube-api-access-8rdd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 08:49:55 crc kubenswrapper[4786]: I1209 08:49:55.879768 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7388aae9-507a-42ff-84cb-9860de1f9f84-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7388aae9-507a-42ff-84cb-9860de1f9f84" (UID: "7388aae9-507a-42ff-84cb-9860de1f9f84"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 08:49:55 crc kubenswrapper[4786]: I1209 08:49:55.919006 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7388aae9-507a-42ff-84cb-9860de1f9f84-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 08:49:55 crc kubenswrapper[4786]: I1209 08:49:55.919066 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rdd8\" (UniqueName: \"kubernetes.io/projected/7388aae9-507a-42ff-84cb-9860de1f9f84-kube-api-access-8rdd8\") on node \"crc\" DevicePath \"\"" Dec 09 08:49:55 crc kubenswrapper[4786]: I1209 08:49:55.919086 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7388aae9-507a-42ff-84cb-9860de1f9f84-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 08:49:55 crc kubenswrapper[4786]: I1209 08:49:55.957463 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d9b7s" Dec 09 08:49:56 crc kubenswrapper[4786]: I1209 08:49:56.121794 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2vhs\" (UniqueName: \"kubernetes.io/projected/7d9968ce-71c9-4b6d-912f-5f03be10945d-kube-api-access-n2vhs\") pod \"7d9968ce-71c9-4b6d-912f-5f03be10945d\" (UID: \"7d9968ce-71c9-4b6d-912f-5f03be10945d\") " Dec 09 08:49:56 crc kubenswrapper[4786]: I1209 08:49:56.122207 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d9968ce-71c9-4b6d-912f-5f03be10945d-catalog-content\") pod \"7d9968ce-71c9-4b6d-912f-5f03be10945d\" (UID: \"7d9968ce-71c9-4b6d-912f-5f03be10945d\") " Dec 09 08:49:56 crc kubenswrapper[4786]: I1209 08:49:56.122330 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d9968ce-71c9-4b6d-912f-5f03be10945d-utilities\") pod \"7d9968ce-71c9-4b6d-912f-5f03be10945d\" (UID: \"7d9968ce-71c9-4b6d-912f-5f03be10945d\") " Dec 09 08:49:56 crc kubenswrapper[4786]: I1209 08:49:56.123292 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d9968ce-71c9-4b6d-912f-5f03be10945d-utilities" (OuterVolumeSpecName: "utilities") pod "7d9968ce-71c9-4b6d-912f-5f03be10945d" (UID: "7d9968ce-71c9-4b6d-912f-5f03be10945d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 08:49:56 crc kubenswrapper[4786]: I1209 08:49:56.134703 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d9968ce-71c9-4b6d-912f-5f03be10945d-kube-api-access-n2vhs" (OuterVolumeSpecName: "kube-api-access-n2vhs") pod "7d9968ce-71c9-4b6d-912f-5f03be10945d" (UID: "7d9968ce-71c9-4b6d-912f-5f03be10945d"). InnerVolumeSpecName "kube-api-access-n2vhs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 08:49:56 crc kubenswrapper[4786]: I1209 08:49:56.223773 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d9968ce-71c9-4b6d-912f-5f03be10945d-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 08:49:56 crc kubenswrapper[4786]: I1209 08:49:56.223830 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2vhs\" (UniqueName: \"kubernetes.io/projected/7d9968ce-71c9-4b6d-912f-5f03be10945d-kube-api-access-n2vhs\") on node \"crc\" DevicePath \"\"" Dec 09 08:49:56 crc kubenswrapper[4786]: I1209 08:49:56.237591 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d9968ce-71c9-4b6d-912f-5f03be10945d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7d9968ce-71c9-4b6d-912f-5f03be10945d" (UID: "7d9968ce-71c9-4b6d-912f-5f03be10945d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 08:49:56 crc kubenswrapper[4786]: I1209 08:49:56.267172 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4rm92" event={"ID":"7388aae9-507a-42ff-84cb-9860de1f9f84","Type":"ContainerDied","Data":"fa6bf785304ca48ef9506a9500062db48982085efe2776e5713787e41d596d65"} Dec 09 08:49:56 crc kubenswrapper[4786]: I1209 08:49:56.267523 4786 scope.go:117] "RemoveContainer" containerID="254a5067697310fa5d98aa71f20fff0a7326c8f926cd2fead0878f7650823fc8" Dec 09 08:49:56 crc kubenswrapper[4786]: I1209 08:49:56.267232 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4rm92" Dec 09 08:49:56 crc kubenswrapper[4786]: I1209 08:49:56.270779 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-thfng" event={"ID":"b128968a-caa6-46be-be15-79971a310e5c","Type":"ContainerDied","Data":"171cb8d8b7827cbcac3ebae06ce7d5b91499de4ff6d7eeae91e33786722bf012"} Dec 09 08:49:56 crc kubenswrapper[4786]: I1209 08:49:56.271267 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-thfng" Dec 09 08:49:56 crc kubenswrapper[4786]: I1209 08:49:56.275481 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g4qx2" event={"ID":"7a4abbd7-999e-4b15-bfc1-a93939734b36","Type":"ContainerDied","Data":"b98f0118bdc4f114e2c957c0b38c587c154ec50d92539e1303874da6d43c24a3"} Dec 09 08:49:56 crc kubenswrapper[4786]: I1209 08:49:56.275654 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g4qx2" Dec 09 08:49:56 crc kubenswrapper[4786]: I1209 08:49:56.278828 4786 generic.go:334] "Generic (PLEG): container finished" podID="7d9968ce-71c9-4b6d-912f-5f03be10945d" containerID="063afdb10dd4fedda5caea0b4521eb75980833ecfbe40f58e3dcbe4b460ab8d3" exitCode=0 Dec 09 08:49:56 crc kubenswrapper[4786]: I1209 08:49:56.279107 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d9b7s" Dec 09 08:49:56 crc kubenswrapper[4786]: I1209 08:49:56.279222 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d9b7s" event={"ID":"7d9968ce-71c9-4b6d-912f-5f03be10945d","Type":"ContainerDied","Data":"063afdb10dd4fedda5caea0b4521eb75980833ecfbe40f58e3dcbe4b460ab8d3"} Dec 09 08:49:56 crc kubenswrapper[4786]: I1209 08:49:56.279275 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d9b7s" event={"ID":"7d9968ce-71c9-4b6d-912f-5f03be10945d","Type":"ContainerDied","Data":"6e372757b8040c7b27b5608dcf5541380cb7584737b2b30e72cc61e1f417117c"} Dec 09 08:49:56 crc kubenswrapper[4786]: I1209 08:49:56.281437 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9274k" event={"ID":"621ad974-644c-45fc-a6ba-045ca1f9e033","Type":"ContainerDied","Data":"6336b3ac6d131d761fc3859924c758c2d28b88bdce59c4bab062c0f94766796d"} Dec 09 08:49:56 crc kubenswrapper[4786]: I1209 08:49:56.281640 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9274k" Dec 09 08:49:56 crc kubenswrapper[4786]: I1209 08:49:56.289550 4786 scope.go:117] "RemoveContainer" containerID="2eb4676477cca01b31f68ff80bda42d993c14f73a95c0bce76b454872c251609" Dec 09 08:49:56 crc kubenswrapper[4786]: I1209 08:49:56.298757 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-q75q9" event={"ID":"a62ae8ec-1904-4074-9c5d-76d6bde47df8","Type":"ContainerStarted","Data":"412c6b575ff4a10021bccd91cf935279b9d48e60cb07ffd7873e6427289f1fb8"} Dec 09 08:49:56 crc kubenswrapper[4786]: I1209 08:49:56.298831 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-q75q9" event={"ID":"a62ae8ec-1904-4074-9c5d-76d6bde47df8","Type":"ContainerStarted","Data":"843bf50c33e75912fbde243931d6184557ed69a9c752cc71dcbfb3761808c576"} Dec 09 08:49:56 crc kubenswrapper[4786]: I1209 08:49:56.299152 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-q75q9" Dec 09 08:49:56 crc kubenswrapper[4786]: I1209 08:49:56.308828 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-q75q9" Dec 09 08:49:56 crc kubenswrapper[4786]: I1209 08:49:56.315656 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-q75q9" podStartSLOduration=2.315631833 podStartE2EDuration="2.315631833s" podCreationTimestamp="2025-12-09 08:49:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:49:56.315475349 +0000 UTC m=+362.199096615" watchObservedRunningTime="2025-12-09 08:49:56.315631833 +0000 UTC m=+362.199253059" Dec 09 08:49:56 crc kubenswrapper[4786]: I1209 08:49:56.322695 4786 scope.go:117] 
"RemoveContainer" containerID="506843de98d41e3592dacc9fb8150f9bdb939764e2fafb61aa299292cbe51c15" Dec 09 08:49:56 crc kubenswrapper[4786]: I1209 08:49:56.324521 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d9968ce-71c9-4b6d-912f-5f03be10945d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 08:49:56 crc kubenswrapper[4786]: I1209 08:49:56.370440 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-thfng"] Dec 09 08:49:56 crc kubenswrapper[4786]: I1209 08:49:56.371207 4786 scope.go:117] "RemoveContainer" containerID="a0340fc378e6c3a380597a7b45ad1e5bca554dce19e16f1fd545cefc886d235f" Dec 09 08:49:56 crc kubenswrapper[4786]: I1209 08:49:56.376825 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-thfng"] Dec 09 08:49:56 crc kubenswrapper[4786]: I1209 08:49:56.401769 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9274k"] Dec 09 08:49:56 crc kubenswrapper[4786]: I1209 08:49:56.402756 4786 scope.go:117] "RemoveContainer" containerID="da14222d5521c04bcdb066364819347a664312f01a73d8394cbd8fa1e5b9c751" Dec 09 08:49:56 crc kubenswrapper[4786]: I1209 08:49:56.410504 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9274k"] Dec 09 08:49:56 crc kubenswrapper[4786]: I1209 08:49:56.419532 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d9b7s"] Dec 09 08:49:56 crc kubenswrapper[4786]: I1209 08:49:56.424376 4786 scope.go:117] "RemoveContainer" containerID="3811ac7fc898903acb1e71d01fc804a2222455bffb79bba3bcb8a8fe81e2dea9" Dec 09 08:49:56 crc kubenswrapper[4786]: I1209 08:49:56.430587 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-d9b7s"] Dec 09 08:49:56 crc kubenswrapper[4786]: 
I1209 08:49:56.436234 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g4qx2"] Dec 09 08:49:56 crc kubenswrapper[4786]: I1209 08:49:56.439349 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-g4qx2"] Dec 09 08:49:56 crc kubenswrapper[4786]: I1209 08:49:56.441067 4786 scope.go:117] "RemoveContainer" containerID="21ba71531a7e71faa901b89ec16c8c786b28b3d82b013ba7911b713bd96f1dea" Dec 09 08:49:56 crc kubenswrapper[4786]: I1209 08:49:56.441995 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4rm92"] Dec 09 08:49:56 crc kubenswrapper[4786]: I1209 08:49:56.444624 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4rm92"] Dec 09 08:49:56 crc kubenswrapper[4786]: I1209 08:49:56.452885 4786 scope.go:117] "RemoveContainer" containerID="063afdb10dd4fedda5caea0b4521eb75980833ecfbe40f58e3dcbe4b460ab8d3" Dec 09 08:49:56 crc kubenswrapper[4786]: I1209 08:49:56.466232 4786 scope.go:117] "RemoveContainer" containerID="814773d1f4e42abb73e11c9b10690fb508e2d6ae40da3d7fbc350bfd92c7bc7b" Dec 09 08:49:56 crc kubenswrapper[4786]: I1209 08:49:56.483621 4786 scope.go:117] "RemoveContainer" containerID="608377a3e6e6f54b6d8dea9ec043ba75390133bcf1ea70fa86b426a6017ff910" Dec 09 08:49:56 crc kubenswrapper[4786]: I1209 08:49:56.499567 4786 scope.go:117] "RemoveContainer" containerID="063afdb10dd4fedda5caea0b4521eb75980833ecfbe40f58e3dcbe4b460ab8d3" Dec 09 08:49:56 crc kubenswrapper[4786]: E1209 08:49:56.500198 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"063afdb10dd4fedda5caea0b4521eb75980833ecfbe40f58e3dcbe4b460ab8d3\": container with ID starting with 063afdb10dd4fedda5caea0b4521eb75980833ecfbe40f58e3dcbe4b460ab8d3 not found: ID does not exist" 
containerID="063afdb10dd4fedda5caea0b4521eb75980833ecfbe40f58e3dcbe4b460ab8d3" Dec 09 08:49:56 crc kubenswrapper[4786]: I1209 08:49:56.500253 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"063afdb10dd4fedda5caea0b4521eb75980833ecfbe40f58e3dcbe4b460ab8d3"} err="failed to get container status \"063afdb10dd4fedda5caea0b4521eb75980833ecfbe40f58e3dcbe4b460ab8d3\": rpc error: code = NotFound desc = could not find container \"063afdb10dd4fedda5caea0b4521eb75980833ecfbe40f58e3dcbe4b460ab8d3\": container with ID starting with 063afdb10dd4fedda5caea0b4521eb75980833ecfbe40f58e3dcbe4b460ab8d3 not found: ID does not exist" Dec 09 08:49:56 crc kubenswrapper[4786]: I1209 08:49:56.500294 4786 scope.go:117] "RemoveContainer" containerID="814773d1f4e42abb73e11c9b10690fb508e2d6ae40da3d7fbc350bfd92c7bc7b" Dec 09 08:49:56 crc kubenswrapper[4786]: E1209 08:49:56.500797 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"814773d1f4e42abb73e11c9b10690fb508e2d6ae40da3d7fbc350bfd92c7bc7b\": container with ID starting with 814773d1f4e42abb73e11c9b10690fb508e2d6ae40da3d7fbc350bfd92c7bc7b not found: ID does not exist" containerID="814773d1f4e42abb73e11c9b10690fb508e2d6ae40da3d7fbc350bfd92c7bc7b" Dec 09 08:49:56 crc kubenswrapper[4786]: I1209 08:49:56.500853 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"814773d1f4e42abb73e11c9b10690fb508e2d6ae40da3d7fbc350bfd92c7bc7b"} err="failed to get container status \"814773d1f4e42abb73e11c9b10690fb508e2d6ae40da3d7fbc350bfd92c7bc7b\": rpc error: code = NotFound desc = could not find container \"814773d1f4e42abb73e11c9b10690fb508e2d6ae40da3d7fbc350bfd92c7bc7b\": container with ID starting with 814773d1f4e42abb73e11c9b10690fb508e2d6ae40da3d7fbc350bfd92c7bc7b not found: ID does not exist" Dec 09 08:49:56 crc kubenswrapper[4786]: I1209 08:49:56.500942 4786 scope.go:117] 
"RemoveContainer" containerID="608377a3e6e6f54b6d8dea9ec043ba75390133bcf1ea70fa86b426a6017ff910" Dec 09 08:49:56 crc kubenswrapper[4786]: E1209 08:49:56.501484 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"608377a3e6e6f54b6d8dea9ec043ba75390133bcf1ea70fa86b426a6017ff910\": container with ID starting with 608377a3e6e6f54b6d8dea9ec043ba75390133bcf1ea70fa86b426a6017ff910 not found: ID does not exist" containerID="608377a3e6e6f54b6d8dea9ec043ba75390133bcf1ea70fa86b426a6017ff910" Dec 09 08:49:56 crc kubenswrapper[4786]: I1209 08:49:56.501515 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"608377a3e6e6f54b6d8dea9ec043ba75390133bcf1ea70fa86b426a6017ff910"} err="failed to get container status \"608377a3e6e6f54b6d8dea9ec043ba75390133bcf1ea70fa86b426a6017ff910\": rpc error: code = NotFound desc = could not find container \"608377a3e6e6f54b6d8dea9ec043ba75390133bcf1ea70fa86b426a6017ff910\": container with ID starting with 608377a3e6e6f54b6d8dea9ec043ba75390133bcf1ea70fa86b426a6017ff910 not found: ID does not exist" Dec 09 08:49:56 crc kubenswrapper[4786]: I1209 08:49:56.501530 4786 scope.go:117] "RemoveContainer" containerID="7c0b69be8677e792650faf38d3fb25a65f450e824cd8cc8afeb22bc8b5de2f8d" Dec 09 08:49:56 crc kubenswrapper[4786]: I1209 08:49:56.515565 4786 scope.go:117] "RemoveContainer" containerID="e31046b9edfb0c0229ddbfabc27b997e03514ae2ae117811270863b86b123bb5" Dec 09 08:49:56 crc kubenswrapper[4786]: I1209 08:49:56.535476 4786 scope.go:117] "RemoveContainer" containerID="10034f72990723562893bae3d391eb9d5ac3678241d593e5ef0f392cea129b7b" Dec 09 08:49:57 crc kubenswrapper[4786]: I1209 08:49:57.199843 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="621ad974-644c-45fc-a6ba-045ca1f9e033" path="/var/lib/kubelet/pods/621ad974-644c-45fc-a6ba-045ca1f9e033/volumes" Dec 09 08:49:57 crc kubenswrapper[4786]: I1209 
08:49:57.201335 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7388aae9-507a-42ff-84cb-9860de1f9f84" path="/var/lib/kubelet/pods/7388aae9-507a-42ff-84cb-9860de1f9f84/volumes" Dec 09 08:49:57 crc kubenswrapper[4786]: I1209 08:49:57.202256 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a4abbd7-999e-4b15-bfc1-a93939734b36" path="/var/lib/kubelet/pods/7a4abbd7-999e-4b15-bfc1-a93939734b36/volumes" Dec 09 08:49:57 crc kubenswrapper[4786]: I1209 08:49:57.203712 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d9968ce-71c9-4b6d-912f-5f03be10945d" path="/var/lib/kubelet/pods/7d9968ce-71c9-4b6d-912f-5f03be10945d/volumes" Dec 09 08:49:57 crc kubenswrapper[4786]: I1209 08:49:57.204469 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b128968a-caa6-46be-be15-79971a310e5c" path="/var/lib/kubelet/pods/b128968a-caa6-46be-be15-79971a310e5c/volumes" Dec 09 08:49:57 crc kubenswrapper[4786]: I1209 08:49:57.347028 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ftk6v"] Dec 09 08:49:57 crc kubenswrapper[4786]: E1209 08:49:57.347358 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="621ad974-644c-45fc-a6ba-045ca1f9e033" containerName="registry-server" Dec 09 08:49:57 crc kubenswrapper[4786]: I1209 08:49:57.347383 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="621ad974-644c-45fc-a6ba-045ca1f9e033" containerName="registry-server" Dec 09 08:49:57 crc kubenswrapper[4786]: E1209 08:49:57.347395 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7388aae9-507a-42ff-84cb-9860de1f9f84" containerName="registry-server" Dec 09 08:49:57 crc kubenswrapper[4786]: I1209 08:49:57.347402 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="7388aae9-507a-42ff-84cb-9860de1f9f84" containerName="registry-server" Dec 09 08:49:57 crc kubenswrapper[4786]: E1209 08:49:57.347412 4786 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a4abbd7-999e-4b15-bfc1-a93939734b36" containerName="extract-content" Dec 09 08:49:57 crc kubenswrapper[4786]: I1209 08:49:57.347420 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a4abbd7-999e-4b15-bfc1-a93939734b36" containerName="extract-content" Dec 09 08:49:57 crc kubenswrapper[4786]: E1209 08:49:57.347453 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7388aae9-507a-42ff-84cb-9860de1f9f84" containerName="extract-utilities" Dec 09 08:49:57 crc kubenswrapper[4786]: I1209 08:49:57.347460 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="7388aae9-507a-42ff-84cb-9860de1f9f84" containerName="extract-utilities" Dec 09 08:49:57 crc kubenswrapper[4786]: E1209 08:49:57.347471 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b128968a-caa6-46be-be15-79971a310e5c" containerName="marketplace-operator" Dec 09 08:49:57 crc kubenswrapper[4786]: I1209 08:49:57.347477 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="b128968a-caa6-46be-be15-79971a310e5c" containerName="marketplace-operator" Dec 09 08:49:57 crc kubenswrapper[4786]: E1209 08:49:57.347486 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="621ad974-644c-45fc-a6ba-045ca1f9e033" containerName="extract-content" Dec 09 08:49:57 crc kubenswrapper[4786]: I1209 08:49:57.347493 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="621ad974-644c-45fc-a6ba-045ca1f9e033" containerName="extract-content" Dec 09 08:49:57 crc kubenswrapper[4786]: E1209 08:49:57.347507 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a4abbd7-999e-4b15-bfc1-a93939734b36" containerName="registry-server" Dec 09 08:49:57 crc kubenswrapper[4786]: I1209 08:49:57.347513 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a4abbd7-999e-4b15-bfc1-a93939734b36" containerName="registry-server" Dec 09 08:49:57 crc kubenswrapper[4786]: E1209 08:49:57.347526 4786 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d9968ce-71c9-4b6d-912f-5f03be10945d" containerName="registry-server" Dec 09 08:49:57 crc kubenswrapper[4786]: I1209 08:49:57.347532 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d9968ce-71c9-4b6d-912f-5f03be10945d" containerName="registry-server" Dec 09 08:49:57 crc kubenswrapper[4786]: E1209 08:49:57.347540 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b128968a-caa6-46be-be15-79971a310e5c" containerName="marketplace-operator" Dec 09 08:49:57 crc kubenswrapper[4786]: I1209 08:49:57.347547 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="b128968a-caa6-46be-be15-79971a310e5c" containerName="marketplace-operator" Dec 09 08:49:57 crc kubenswrapper[4786]: E1209 08:49:57.347557 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="621ad974-644c-45fc-a6ba-045ca1f9e033" containerName="extract-utilities" Dec 09 08:49:57 crc kubenswrapper[4786]: I1209 08:49:57.347563 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="621ad974-644c-45fc-a6ba-045ca1f9e033" containerName="extract-utilities" Dec 09 08:49:57 crc kubenswrapper[4786]: E1209 08:49:57.347896 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a4abbd7-999e-4b15-bfc1-a93939734b36" containerName="extract-utilities" Dec 09 08:49:57 crc kubenswrapper[4786]: I1209 08:49:57.348500 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a4abbd7-999e-4b15-bfc1-a93939734b36" containerName="extract-utilities" Dec 09 08:49:57 crc kubenswrapper[4786]: E1209 08:49:57.348532 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d9968ce-71c9-4b6d-912f-5f03be10945d" containerName="extract-utilities" Dec 09 08:49:57 crc kubenswrapper[4786]: I1209 08:49:57.348539 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d9968ce-71c9-4b6d-912f-5f03be10945d" containerName="extract-utilities" Dec 09 08:49:57 crc kubenswrapper[4786]: E1209 
08:49:57.348554 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7388aae9-507a-42ff-84cb-9860de1f9f84" containerName="extract-content" Dec 09 08:49:57 crc kubenswrapper[4786]: I1209 08:49:57.348562 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="7388aae9-507a-42ff-84cb-9860de1f9f84" containerName="extract-content" Dec 09 08:49:57 crc kubenswrapper[4786]: E1209 08:49:57.348577 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d9968ce-71c9-4b6d-912f-5f03be10945d" containerName="extract-content" Dec 09 08:49:57 crc kubenswrapper[4786]: I1209 08:49:57.348585 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d9968ce-71c9-4b6d-912f-5f03be10945d" containerName="extract-content" Dec 09 08:49:57 crc kubenswrapper[4786]: I1209 08:49:57.349335 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="7388aae9-507a-42ff-84cb-9860de1f9f84" containerName="registry-server" Dec 09 08:49:57 crc kubenswrapper[4786]: I1209 08:49:57.349364 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a4abbd7-999e-4b15-bfc1-a93939734b36" containerName="registry-server" Dec 09 08:49:57 crc kubenswrapper[4786]: I1209 08:49:57.349372 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="b128968a-caa6-46be-be15-79971a310e5c" containerName="marketplace-operator" Dec 09 08:49:57 crc kubenswrapper[4786]: I1209 08:49:57.349386 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d9968ce-71c9-4b6d-912f-5f03be10945d" containerName="registry-server" Dec 09 08:49:57 crc kubenswrapper[4786]: I1209 08:49:57.349394 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="621ad974-644c-45fc-a6ba-045ca1f9e033" containerName="registry-server" Dec 09 08:49:57 crc kubenswrapper[4786]: I1209 08:49:57.349400 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="b128968a-caa6-46be-be15-79971a310e5c" containerName="marketplace-operator" Dec 09 08:49:57 crc 
kubenswrapper[4786]: I1209 08:49:57.355683 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ftk6v" Dec 09 08:49:57 crc kubenswrapper[4786]: I1209 08:49:57.359604 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 09 08:49:57 crc kubenswrapper[4786]: I1209 08:49:57.366699 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ftk6v"] Dec 09 08:49:57 crc kubenswrapper[4786]: I1209 08:49:57.444657 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cfft\" (UniqueName: \"kubernetes.io/projected/40c9dcfe-3d88-4560-a91e-639f6e0b661e-kube-api-access-4cfft\") pod \"community-operators-ftk6v\" (UID: \"40c9dcfe-3d88-4560-a91e-639f6e0b661e\") " pod="openshift-marketplace/community-operators-ftk6v" Dec 09 08:49:57 crc kubenswrapper[4786]: I1209 08:49:57.444988 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40c9dcfe-3d88-4560-a91e-639f6e0b661e-utilities\") pod \"community-operators-ftk6v\" (UID: \"40c9dcfe-3d88-4560-a91e-639f6e0b661e\") " pod="openshift-marketplace/community-operators-ftk6v" Dec 09 08:49:57 crc kubenswrapper[4786]: I1209 08:49:57.445243 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40c9dcfe-3d88-4560-a91e-639f6e0b661e-catalog-content\") pod \"community-operators-ftk6v\" (UID: \"40c9dcfe-3d88-4560-a91e-639f6e0b661e\") " pod="openshift-marketplace/community-operators-ftk6v" Dec 09 08:49:57 crc kubenswrapper[4786]: I1209 08:49:57.546380 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/40c9dcfe-3d88-4560-a91e-639f6e0b661e-catalog-content\") pod \"community-operators-ftk6v\" (UID: \"40c9dcfe-3d88-4560-a91e-639f6e0b661e\") " pod="openshift-marketplace/community-operators-ftk6v" Dec 09 08:49:57 crc kubenswrapper[4786]: I1209 08:49:57.546493 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cfft\" (UniqueName: \"kubernetes.io/projected/40c9dcfe-3d88-4560-a91e-639f6e0b661e-kube-api-access-4cfft\") pod \"community-operators-ftk6v\" (UID: \"40c9dcfe-3d88-4560-a91e-639f6e0b661e\") " pod="openshift-marketplace/community-operators-ftk6v" Dec 09 08:49:57 crc kubenswrapper[4786]: I1209 08:49:57.546541 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40c9dcfe-3d88-4560-a91e-639f6e0b661e-utilities\") pod \"community-operators-ftk6v\" (UID: \"40c9dcfe-3d88-4560-a91e-639f6e0b661e\") " pod="openshift-marketplace/community-operators-ftk6v" Dec 09 08:49:57 crc kubenswrapper[4786]: I1209 08:49:57.546961 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40c9dcfe-3d88-4560-a91e-639f6e0b661e-catalog-content\") pod \"community-operators-ftk6v\" (UID: \"40c9dcfe-3d88-4560-a91e-639f6e0b661e\") " pod="openshift-marketplace/community-operators-ftk6v" Dec 09 08:49:57 crc kubenswrapper[4786]: I1209 08:49:57.547006 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40c9dcfe-3d88-4560-a91e-639f6e0b661e-utilities\") pod \"community-operators-ftk6v\" (UID: \"40c9dcfe-3d88-4560-a91e-639f6e0b661e\") " pod="openshift-marketplace/community-operators-ftk6v" Dec 09 08:49:57 crc kubenswrapper[4786]: I1209 08:49:57.569805 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cfft\" (UniqueName: 
\"kubernetes.io/projected/40c9dcfe-3d88-4560-a91e-639f6e0b661e-kube-api-access-4cfft\") pod \"community-operators-ftk6v\" (UID: \"40c9dcfe-3d88-4560-a91e-639f6e0b661e\") " pod="openshift-marketplace/community-operators-ftk6v" Dec 09 08:49:57 crc kubenswrapper[4786]: I1209 08:49:57.683690 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ftk6v" Dec 09 08:49:57 crc kubenswrapper[4786]: I1209 08:49:57.899098 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ftk6v"] Dec 09 08:49:58 crc kubenswrapper[4786]: I1209 08:49:58.333479 4786 generic.go:334] "Generic (PLEG): container finished" podID="40c9dcfe-3d88-4560-a91e-639f6e0b661e" containerID="1cb9b47d638c5a16b5f32823174061b26d726c6a115a9c9e13d60deca6b8bdf3" exitCode=0 Dec 09 08:49:58 crc kubenswrapper[4786]: I1209 08:49:58.333591 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftk6v" event={"ID":"40c9dcfe-3d88-4560-a91e-639f6e0b661e","Type":"ContainerDied","Data":"1cb9b47d638c5a16b5f32823174061b26d726c6a115a9c9e13d60deca6b8bdf3"} Dec 09 08:49:58 crc kubenswrapper[4786]: I1209 08:49:58.333912 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftk6v" event={"ID":"40c9dcfe-3d88-4560-a91e-639f6e0b661e","Type":"ContainerStarted","Data":"bac33852918e7efe2822ce72e3bc931372bc2d889d96371874ae9d3af921b8e4"} Dec 09 08:49:58 crc kubenswrapper[4786]: I1209 08:49:58.746117 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5lt4q"] Dec 09 08:49:58 crc kubenswrapper[4786]: I1209 08:49:58.747328 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5lt4q" Dec 09 08:49:58 crc kubenswrapper[4786]: I1209 08:49:58.749620 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 09 08:49:58 crc kubenswrapper[4786]: I1209 08:49:58.757827 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5lt4q"] Dec 09 08:49:58 crc kubenswrapper[4786]: I1209 08:49:58.865218 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0a09b57-1b41-45ff-9d41-aa0c5d1b4beb-utilities\") pod \"redhat-marketplace-5lt4q\" (UID: \"d0a09b57-1b41-45ff-9d41-aa0c5d1b4beb\") " pod="openshift-marketplace/redhat-marketplace-5lt4q" Dec 09 08:49:58 crc kubenswrapper[4786]: I1209 08:49:58.865291 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0a09b57-1b41-45ff-9d41-aa0c5d1b4beb-catalog-content\") pod \"redhat-marketplace-5lt4q\" (UID: \"d0a09b57-1b41-45ff-9d41-aa0c5d1b4beb\") " pod="openshift-marketplace/redhat-marketplace-5lt4q" Dec 09 08:49:58 crc kubenswrapper[4786]: I1209 08:49:58.865337 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg584\" (UniqueName: \"kubernetes.io/projected/d0a09b57-1b41-45ff-9d41-aa0c5d1b4beb-kube-api-access-tg584\") pod \"redhat-marketplace-5lt4q\" (UID: \"d0a09b57-1b41-45ff-9d41-aa0c5d1b4beb\") " pod="openshift-marketplace/redhat-marketplace-5lt4q" Dec 09 08:49:58 crc kubenswrapper[4786]: I1209 08:49:58.967104 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0a09b57-1b41-45ff-9d41-aa0c5d1b4beb-utilities\") pod \"redhat-marketplace-5lt4q\" (UID: 
\"d0a09b57-1b41-45ff-9d41-aa0c5d1b4beb\") " pod="openshift-marketplace/redhat-marketplace-5lt4q" Dec 09 08:49:58 crc kubenswrapper[4786]: I1209 08:49:58.967165 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0a09b57-1b41-45ff-9d41-aa0c5d1b4beb-catalog-content\") pod \"redhat-marketplace-5lt4q\" (UID: \"d0a09b57-1b41-45ff-9d41-aa0c5d1b4beb\") " pod="openshift-marketplace/redhat-marketplace-5lt4q" Dec 09 08:49:58 crc kubenswrapper[4786]: I1209 08:49:58.967206 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg584\" (UniqueName: \"kubernetes.io/projected/d0a09b57-1b41-45ff-9d41-aa0c5d1b4beb-kube-api-access-tg584\") pod \"redhat-marketplace-5lt4q\" (UID: \"d0a09b57-1b41-45ff-9d41-aa0c5d1b4beb\") " pod="openshift-marketplace/redhat-marketplace-5lt4q" Dec 09 08:49:58 crc kubenswrapper[4786]: I1209 08:49:58.967896 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d0a09b57-1b41-45ff-9d41-aa0c5d1b4beb-utilities\") pod \"redhat-marketplace-5lt4q\" (UID: \"d0a09b57-1b41-45ff-9d41-aa0c5d1b4beb\") " pod="openshift-marketplace/redhat-marketplace-5lt4q" Dec 09 08:49:58 crc kubenswrapper[4786]: I1209 08:49:58.967952 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d0a09b57-1b41-45ff-9d41-aa0c5d1b4beb-catalog-content\") pod \"redhat-marketplace-5lt4q\" (UID: \"d0a09b57-1b41-45ff-9d41-aa0c5d1b4beb\") " pod="openshift-marketplace/redhat-marketplace-5lt4q" Dec 09 08:49:58 crc kubenswrapper[4786]: I1209 08:49:58.998417 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg584\" (UniqueName: \"kubernetes.io/projected/d0a09b57-1b41-45ff-9d41-aa0c5d1b4beb-kube-api-access-tg584\") pod \"redhat-marketplace-5lt4q\" (UID: 
\"d0a09b57-1b41-45ff-9d41-aa0c5d1b4beb\") " pod="openshift-marketplace/redhat-marketplace-5lt4q" Dec 09 08:49:59 crc kubenswrapper[4786]: I1209 08:49:59.064592 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5lt4q" Dec 09 08:49:59 crc kubenswrapper[4786]: I1209 08:49:59.300983 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5lt4q"] Dec 09 08:49:59 crc kubenswrapper[4786]: W1209 08:49:59.308820 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0a09b57_1b41_45ff_9d41_aa0c5d1b4beb.slice/crio-72dbe0b0bce2662c04558268ae99e99196d78becf48879c8ca04f3ef13890d1d WatchSource:0}: Error finding container 72dbe0b0bce2662c04558268ae99e99196d78becf48879c8ca04f3ef13890d1d: Status 404 returned error can't find the container with id 72dbe0b0bce2662c04558268ae99e99196d78becf48879c8ca04f3ef13890d1d Dec 09 08:49:59 crc kubenswrapper[4786]: I1209 08:49:59.358075 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5lt4q" event={"ID":"d0a09b57-1b41-45ff-9d41-aa0c5d1b4beb","Type":"ContainerStarted","Data":"72dbe0b0bce2662c04558268ae99e99196d78becf48879c8ca04f3ef13890d1d"} Dec 09 08:49:59 crc kubenswrapper[4786]: I1209 08:49:59.752824 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s4hdj"] Dec 09 08:49:59 crc kubenswrapper[4786]: I1209 08:49:59.754893 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s4hdj" Dec 09 08:49:59 crc kubenswrapper[4786]: I1209 08:49:59.756026 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s4hdj"] Dec 09 08:49:59 crc kubenswrapper[4786]: I1209 08:49:59.757639 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 09 08:49:59 crc kubenswrapper[4786]: I1209 08:49:59.890497 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcdf7\" (UniqueName: \"kubernetes.io/projected/011c88f1-1d58-4ad3-8835-29020b9f4e8d-kube-api-access-kcdf7\") pod \"redhat-operators-s4hdj\" (UID: \"011c88f1-1d58-4ad3-8835-29020b9f4e8d\") " pod="openshift-marketplace/redhat-operators-s4hdj" Dec 09 08:49:59 crc kubenswrapper[4786]: I1209 08:49:59.890558 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/011c88f1-1d58-4ad3-8835-29020b9f4e8d-catalog-content\") pod \"redhat-operators-s4hdj\" (UID: \"011c88f1-1d58-4ad3-8835-29020b9f4e8d\") " pod="openshift-marketplace/redhat-operators-s4hdj" Dec 09 08:49:59 crc kubenswrapper[4786]: I1209 08:49:59.890614 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/011c88f1-1d58-4ad3-8835-29020b9f4e8d-utilities\") pod \"redhat-operators-s4hdj\" (UID: \"011c88f1-1d58-4ad3-8835-29020b9f4e8d\") " pod="openshift-marketplace/redhat-operators-s4hdj" Dec 09 08:49:59 crc kubenswrapper[4786]: I1209 08:49:59.992682 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcdf7\" (UniqueName: \"kubernetes.io/projected/011c88f1-1d58-4ad3-8835-29020b9f4e8d-kube-api-access-kcdf7\") pod \"redhat-operators-s4hdj\" (UID: 
\"011c88f1-1d58-4ad3-8835-29020b9f4e8d\") " pod="openshift-marketplace/redhat-operators-s4hdj" Dec 09 08:49:59 crc kubenswrapper[4786]: I1209 08:49:59.992777 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/011c88f1-1d58-4ad3-8835-29020b9f4e8d-catalog-content\") pod \"redhat-operators-s4hdj\" (UID: \"011c88f1-1d58-4ad3-8835-29020b9f4e8d\") " pod="openshift-marketplace/redhat-operators-s4hdj" Dec 09 08:49:59 crc kubenswrapper[4786]: I1209 08:49:59.992858 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/011c88f1-1d58-4ad3-8835-29020b9f4e8d-utilities\") pod \"redhat-operators-s4hdj\" (UID: \"011c88f1-1d58-4ad3-8835-29020b9f4e8d\") " pod="openshift-marketplace/redhat-operators-s4hdj" Dec 09 08:49:59 crc kubenswrapper[4786]: I1209 08:49:59.993446 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/011c88f1-1d58-4ad3-8835-29020b9f4e8d-catalog-content\") pod \"redhat-operators-s4hdj\" (UID: \"011c88f1-1d58-4ad3-8835-29020b9f4e8d\") " pod="openshift-marketplace/redhat-operators-s4hdj" Dec 09 08:49:59 crc kubenswrapper[4786]: I1209 08:49:59.993586 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/011c88f1-1d58-4ad3-8835-29020b9f4e8d-utilities\") pod \"redhat-operators-s4hdj\" (UID: \"011c88f1-1d58-4ad3-8835-29020b9f4e8d\") " pod="openshift-marketplace/redhat-operators-s4hdj" Dec 09 08:50:00 crc kubenswrapper[4786]: I1209 08:50:00.022341 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcdf7\" (UniqueName: \"kubernetes.io/projected/011c88f1-1d58-4ad3-8835-29020b9f4e8d-kube-api-access-kcdf7\") pod \"redhat-operators-s4hdj\" (UID: \"011c88f1-1d58-4ad3-8835-29020b9f4e8d\") " 
pod="openshift-marketplace/redhat-operators-s4hdj" Dec 09 08:50:00 crc kubenswrapper[4786]: I1209 08:50:00.080351 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s4hdj" Dec 09 08:50:00 crc kubenswrapper[4786]: I1209 08:50:00.367827 4786 generic.go:334] "Generic (PLEG): container finished" podID="d0a09b57-1b41-45ff-9d41-aa0c5d1b4beb" containerID="96774167341e7d99b544050850fc2d38dc5215b840bb780d151c84b9ebf9c2ed" exitCode=0 Dec 09 08:50:00 crc kubenswrapper[4786]: I1209 08:50:00.369193 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5lt4q" event={"ID":"d0a09b57-1b41-45ff-9d41-aa0c5d1b4beb","Type":"ContainerDied","Data":"96774167341e7d99b544050850fc2d38dc5215b840bb780d151c84b9ebf9c2ed"} Dec 09 08:50:00 crc kubenswrapper[4786]: I1209 08:50:00.596728 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s4hdj"] Dec 09 08:50:00 crc kubenswrapper[4786]: W1209 08:50:00.612577 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod011c88f1_1d58_4ad3_8835_29020b9f4e8d.slice/crio-1f6b73cdc3687ab248768e16e6db341ff343f451fffb291f012cdaec956c4a58 WatchSource:0}: Error finding container 1f6b73cdc3687ab248768e16e6db341ff343f451fffb291f012cdaec956c4a58: Status 404 returned error can't find the container with id 1f6b73cdc3687ab248768e16e6db341ff343f451fffb291f012cdaec956c4a58 Dec 09 08:50:01 crc kubenswrapper[4786]: I1209 08:50:01.148105 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-49cfg"] Dec 09 08:50:01 crc kubenswrapper[4786]: I1209 08:50:01.149909 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-49cfg" Dec 09 08:50:01 crc kubenswrapper[4786]: I1209 08:50:01.154301 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 09 08:50:01 crc kubenswrapper[4786]: I1209 08:50:01.159367 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-49cfg"] Dec 09 08:50:01 crc kubenswrapper[4786]: I1209 08:50:01.312712 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zr2jp\" (UniqueName: \"kubernetes.io/projected/eebcd979-a1cb-4994-ba4a-cadfaeb57401-kube-api-access-zr2jp\") pod \"certified-operators-49cfg\" (UID: \"eebcd979-a1cb-4994-ba4a-cadfaeb57401\") " pod="openshift-marketplace/certified-operators-49cfg" Dec 09 08:50:01 crc kubenswrapper[4786]: I1209 08:50:01.313130 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eebcd979-a1cb-4994-ba4a-cadfaeb57401-catalog-content\") pod \"certified-operators-49cfg\" (UID: \"eebcd979-a1cb-4994-ba4a-cadfaeb57401\") " pod="openshift-marketplace/certified-operators-49cfg" Dec 09 08:50:01 crc kubenswrapper[4786]: I1209 08:50:01.313213 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eebcd979-a1cb-4994-ba4a-cadfaeb57401-utilities\") pod \"certified-operators-49cfg\" (UID: \"eebcd979-a1cb-4994-ba4a-cadfaeb57401\") " pod="openshift-marketplace/certified-operators-49cfg" Dec 09 08:50:01 crc kubenswrapper[4786]: I1209 08:50:01.376057 4786 generic.go:334] "Generic (PLEG): container finished" podID="011c88f1-1d58-4ad3-8835-29020b9f4e8d" containerID="2c6a4442748d1b70d51315a6aa2fa10526aa0a37655ff33704f38cba419161b4" exitCode=0 Dec 09 08:50:01 crc kubenswrapper[4786]: I1209 
08:50:01.376137 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s4hdj" event={"ID":"011c88f1-1d58-4ad3-8835-29020b9f4e8d","Type":"ContainerDied","Data":"2c6a4442748d1b70d51315a6aa2fa10526aa0a37655ff33704f38cba419161b4"} Dec 09 08:50:01 crc kubenswrapper[4786]: I1209 08:50:01.376807 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s4hdj" event={"ID":"011c88f1-1d58-4ad3-8835-29020b9f4e8d","Type":"ContainerStarted","Data":"1f6b73cdc3687ab248768e16e6db341ff343f451fffb291f012cdaec956c4a58"} Dec 09 08:50:01 crc kubenswrapper[4786]: I1209 08:50:01.414511 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zr2jp\" (UniqueName: \"kubernetes.io/projected/eebcd979-a1cb-4994-ba4a-cadfaeb57401-kube-api-access-zr2jp\") pod \"certified-operators-49cfg\" (UID: \"eebcd979-a1cb-4994-ba4a-cadfaeb57401\") " pod="openshift-marketplace/certified-operators-49cfg" Dec 09 08:50:01 crc kubenswrapper[4786]: I1209 08:50:01.414577 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eebcd979-a1cb-4994-ba4a-cadfaeb57401-catalog-content\") pod \"certified-operators-49cfg\" (UID: \"eebcd979-a1cb-4994-ba4a-cadfaeb57401\") " pod="openshift-marketplace/certified-operators-49cfg" Dec 09 08:50:01 crc kubenswrapper[4786]: I1209 08:50:01.414623 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eebcd979-a1cb-4994-ba4a-cadfaeb57401-utilities\") pod \"certified-operators-49cfg\" (UID: \"eebcd979-a1cb-4994-ba4a-cadfaeb57401\") " pod="openshift-marketplace/certified-operators-49cfg" Dec 09 08:50:01 crc kubenswrapper[4786]: I1209 08:50:01.415064 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/eebcd979-a1cb-4994-ba4a-cadfaeb57401-catalog-content\") pod \"certified-operators-49cfg\" (UID: \"eebcd979-a1cb-4994-ba4a-cadfaeb57401\") " pod="openshift-marketplace/certified-operators-49cfg" Dec 09 08:50:01 crc kubenswrapper[4786]: I1209 08:50:01.416181 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eebcd979-a1cb-4994-ba4a-cadfaeb57401-utilities\") pod \"certified-operators-49cfg\" (UID: \"eebcd979-a1cb-4994-ba4a-cadfaeb57401\") " pod="openshift-marketplace/certified-operators-49cfg" Dec 09 08:50:01 crc kubenswrapper[4786]: I1209 08:50:01.433100 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zr2jp\" (UniqueName: \"kubernetes.io/projected/eebcd979-a1cb-4994-ba4a-cadfaeb57401-kube-api-access-zr2jp\") pod \"certified-operators-49cfg\" (UID: \"eebcd979-a1cb-4994-ba4a-cadfaeb57401\") " pod="openshift-marketplace/certified-operators-49cfg" Dec 09 08:50:01 crc kubenswrapper[4786]: I1209 08:50:01.479698 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-49cfg" Dec 09 08:50:02 crc kubenswrapper[4786]: I1209 08:50:02.868685 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-49cfg"] Dec 09 08:50:02 crc kubenswrapper[4786]: W1209 08:50:02.928245 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeebcd979_a1cb_4994_ba4a_cadfaeb57401.slice/crio-39b20e26caa4123c60be4e7eb773f107e86febe124f08497fecbb892e85cd586 WatchSource:0}: Error finding container 39b20e26caa4123c60be4e7eb773f107e86febe124f08497fecbb892e85cd586: Status 404 returned error can't find the container with id 39b20e26caa4123c60be4e7eb773f107e86febe124f08497fecbb892e85cd586 Dec 09 08:50:03 crc kubenswrapper[4786]: I1209 08:50:03.392780 4786 generic.go:334] "Generic (PLEG): container finished" podID="40c9dcfe-3d88-4560-a91e-639f6e0b661e" containerID="49f7d371ce9c7448bfde1ba88cce63231e9617b48ab9770404880b7882eaf1c5" exitCode=0 Dec 09 08:50:03 crc kubenswrapper[4786]: I1209 08:50:03.392882 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftk6v" event={"ID":"40c9dcfe-3d88-4560-a91e-639f6e0b661e","Type":"ContainerDied","Data":"49f7d371ce9c7448bfde1ba88cce63231e9617b48ab9770404880b7882eaf1c5"} Dec 09 08:50:03 crc kubenswrapper[4786]: I1209 08:50:03.398328 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s4hdj" event={"ID":"011c88f1-1d58-4ad3-8835-29020b9f4e8d","Type":"ContainerStarted","Data":"fafc9a3a83b641fbf752236a891a26b79694e8eeca0ab8897ee8cf2209419de6"} Dec 09 08:50:03 crc kubenswrapper[4786]: I1209 08:50:03.406918 4786 generic.go:334] "Generic (PLEG): container finished" podID="d0a09b57-1b41-45ff-9d41-aa0c5d1b4beb" containerID="eea10aeec8f4e57f07154f10f0407e4955670e188eea267e9cb2d4df161dd962" exitCode=0 Dec 09 08:50:03 crc kubenswrapper[4786]: I1209 
08:50:03.407012 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5lt4q" event={"ID":"d0a09b57-1b41-45ff-9d41-aa0c5d1b4beb","Type":"ContainerDied","Data":"eea10aeec8f4e57f07154f10f0407e4955670e188eea267e9cb2d4df161dd962"} Dec 09 08:50:03 crc kubenswrapper[4786]: I1209 08:50:03.408936 4786 generic.go:334] "Generic (PLEG): container finished" podID="eebcd979-a1cb-4994-ba4a-cadfaeb57401" containerID="fa2c3562de9d00b87979851edabbc2048a275dd0eba06a489c31caf99583dc50" exitCode=0 Dec 09 08:50:03 crc kubenswrapper[4786]: I1209 08:50:03.409013 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-49cfg" event={"ID":"eebcd979-a1cb-4994-ba4a-cadfaeb57401","Type":"ContainerDied","Data":"fa2c3562de9d00b87979851edabbc2048a275dd0eba06a489c31caf99583dc50"} Dec 09 08:50:03 crc kubenswrapper[4786]: I1209 08:50:03.409053 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-49cfg" event={"ID":"eebcd979-a1cb-4994-ba4a-cadfaeb57401","Type":"ContainerStarted","Data":"39b20e26caa4123c60be4e7eb773f107e86febe124f08497fecbb892e85cd586"} Dec 09 08:50:04 crc kubenswrapper[4786]: I1209 08:50:04.417164 4786 generic.go:334] "Generic (PLEG): container finished" podID="011c88f1-1d58-4ad3-8835-29020b9f4e8d" containerID="fafc9a3a83b641fbf752236a891a26b79694e8eeca0ab8897ee8cf2209419de6" exitCode=0 Dec 09 08:50:04 crc kubenswrapper[4786]: I1209 08:50:04.417887 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s4hdj" event={"ID":"011c88f1-1d58-4ad3-8835-29020b9f4e8d","Type":"ContainerDied","Data":"fafc9a3a83b641fbf752236a891a26b79694e8eeca0ab8897ee8cf2209419de6"} Dec 09 08:50:05 crc kubenswrapper[4786]: I1209 08:50:05.425552 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftk6v" 
event={"ID":"40c9dcfe-3d88-4560-a91e-639f6e0b661e","Type":"ContainerStarted","Data":"5a6b30ecb5fde5a7d5aca04b3965899680a669ccafb10a242a44bfb8dddf07c1"} Dec 09 08:50:05 crc kubenswrapper[4786]: I1209 08:50:05.428171 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s4hdj" event={"ID":"011c88f1-1d58-4ad3-8835-29020b9f4e8d","Type":"ContainerStarted","Data":"f2419b450114a05237acf3c818103905f0aa0633ff7b121013efcc3a4936c4d8"} Dec 09 08:50:05 crc kubenswrapper[4786]: I1209 08:50:05.434016 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5lt4q" event={"ID":"d0a09b57-1b41-45ff-9d41-aa0c5d1b4beb","Type":"ContainerStarted","Data":"9154af6720408b4252f18f5cb6e80e3af32c6fff4683dab827ffd43ede22b091"} Dec 09 08:50:05 crc kubenswrapper[4786]: I1209 08:50:05.438898 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-49cfg" event={"ID":"eebcd979-a1cb-4994-ba4a-cadfaeb57401","Type":"ContainerStarted","Data":"4410bea5de4713f98cbf9482b6c865a412aca3d61149208ff1b9407660485a62"} Dec 09 08:50:05 crc kubenswrapper[4786]: I1209 08:50:05.469732 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ftk6v" podStartSLOduration=2.976551254 podStartE2EDuration="8.469708671s" podCreationTimestamp="2025-12-09 08:49:57 +0000 UTC" firstStartedPulling="2025-12-09 08:49:58.335995785 +0000 UTC m=+364.219617001" lastFinishedPulling="2025-12-09 08:50:03.829153202 +0000 UTC m=+369.712774418" observedRunningTime="2025-12-09 08:50:05.448249734 +0000 UTC m=+371.331870960" watchObservedRunningTime="2025-12-09 08:50:05.469708671 +0000 UTC m=+371.353329887" Dec 09 08:50:05 crc kubenswrapper[4786]: I1209 08:50:05.515771 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s4hdj" podStartSLOduration=4.166961579 
podStartE2EDuration="6.515754315s" podCreationTimestamp="2025-12-09 08:49:59 +0000 UTC" firstStartedPulling="2025-12-09 08:50:02.477791677 +0000 UTC m=+368.361412903" lastFinishedPulling="2025-12-09 08:50:04.826584413 +0000 UTC m=+370.710205639" observedRunningTime="2025-12-09 08:50:05.494702158 +0000 UTC m=+371.378323384" watchObservedRunningTime="2025-12-09 08:50:05.515754315 +0000 UTC m=+371.399375541" Dec 09 08:50:05 crc kubenswrapper[4786]: I1209 08:50:05.517294 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5lt4q" podStartSLOduration=4.051208051 podStartE2EDuration="7.517287854s" podCreationTimestamp="2025-12-09 08:49:58 +0000 UTC" firstStartedPulling="2025-12-09 08:50:00.370641252 +0000 UTC m=+366.254262488" lastFinishedPulling="2025-12-09 08:50:03.836721065 +0000 UTC m=+369.720342291" observedRunningTime="2025-12-09 08:50:05.51401299 +0000 UTC m=+371.397634226" watchObservedRunningTime="2025-12-09 08:50:05.517287854 +0000 UTC m=+371.400909080" Dec 09 08:50:06 crc kubenswrapper[4786]: I1209 08:50:06.446945 4786 generic.go:334] "Generic (PLEG): container finished" podID="eebcd979-a1cb-4994-ba4a-cadfaeb57401" containerID="4410bea5de4713f98cbf9482b6c865a412aca3d61149208ff1b9407660485a62" exitCode=0 Dec 09 08:50:06 crc kubenswrapper[4786]: I1209 08:50:06.447166 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-49cfg" event={"ID":"eebcd979-a1cb-4994-ba4a-cadfaeb57401","Type":"ContainerDied","Data":"4410bea5de4713f98cbf9482b6c865a412aca3d61149208ff1b9407660485a62"} Dec 09 08:50:06 crc kubenswrapper[4786]: I1209 08:50:06.447519 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-49cfg" event={"ID":"eebcd979-a1cb-4994-ba4a-cadfaeb57401","Type":"ContainerStarted","Data":"e2464b101249e0bda5082a5f54070fd44dab1d5d5f05cc7fc396f82340c80894"} Dec 09 08:50:06 crc kubenswrapper[4786]: I1209 
08:50:06.484323 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-49cfg" podStartSLOduration=3.086125303 podStartE2EDuration="5.484303679s" podCreationTimestamp="2025-12-09 08:50:01 +0000 UTC" firstStartedPulling="2025-12-09 08:50:03.414332425 +0000 UTC m=+369.297953641" lastFinishedPulling="2025-12-09 08:50:05.812510781 +0000 UTC m=+371.696132017" observedRunningTime="2025-12-09 08:50:06.48003673 +0000 UTC m=+372.363657956" watchObservedRunningTime="2025-12-09 08:50:06.484303679 +0000 UTC m=+372.367924905" Dec 09 08:50:07 crc kubenswrapper[4786]: I1209 08:50:07.174138 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" podUID="742a103a-06e2-4d52-8c04-54681052838d" containerName="registry" containerID="cri-o://7e38de0ddca53efda8c8dbbd23b604735d7dccbeb6f944fc836a30c3845a231a" gracePeriod=30 Dec 09 08:50:07 crc kubenswrapper[4786]: I1209 08:50:07.683958 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ftk6v" Dec 09 08:50:07 crc kubenswrapper[4786]: I1209 08:50:07.685199 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ftk6v" Dec 09 08:50:07 crc kubenswrapper[4786]: I1209 08:50:07.724849 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ftk6v" Dec 09 08:50:08 crc kubenswrapper[4786]: I1209 08:50:08.080138 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:50:08 crc kubenswrapper[4786]: I1209 08:50:08.228659 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/742a103a-06e2-4d52-8c04-54681052838d-installation-pull-secrets\") pod \"742a103a-06e2-4d52-8c04-54681052838d\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " Dec 09 08:50:08 crc kubenswrapper[4786]: I1209 08:50:08.228853 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"742a103a-06e2-4d52-8c04-54681052838d\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " Dec 09 08:50:08 crc kubenswrapper[4786]: I1209 08:50:08.228929 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/742a103a-06e2-4d52-8c04-54681052838d-trusted-ca\") pod \"742a103a-06e2-4d52-8c04-54681052838d\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " Dec 09 08:50:08 crc kubenswrapper[4786]: I1209 08:50:08.228959 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/742a103a-06e2-4d52-8c04-54681052838d-ca-trust-extracted\") pod \"742a103a-06e2-4d52-8c04-54681052838d\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " Dec 09 08:50:08 crc kubenswrapper[4786]: I1209 08:50:08.229000 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8775t\" (UniqueName: \"kubernetes.io/projected/742a103a-06e2-4d52-8c04-54681052838d-kube-api-access-8775t\") pod \"742a103a-06e2-4d52-8c04-54681052838d\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " Dec 09 08:50:08 crc kubenswrapper[4786]: I1209 08:50:08.229015 4786 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/742a103a-06e2-4d52-8c04-54681052838d-registry-tls\") pod \"742a103a-06e2-4d52-8c04-54681052838d\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " Dec 09 08:50:08 crc kubenswrapper[4786]: I1209 08:50:08.229049 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/742a103a-06e2-4d52-8c04-54681052838d-bound-sa-token\") pod \"742a103a-06e2-4d52-8c04-54681052838d\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " Dec 09 08:50:08 crc kubenswrapper[4786]: I1209 08:50:08.229069 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/742a103a-06e2-4d52-8c04-54681052838d-registry-certificates\") pod \"742a103a-06e2-4d52-8c04-54681052838d\" (UID: \"742a103a-06e2-4d52-8c04-54681052838d\") " Dec 09 08:50:08 crc kubenswrapper[4786]: I1209 08:50:08.230943 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/742a103a-06e2-4d52-8c04-54681052838d-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "742a103a-06e2-4d52-8c04-54681052838d" (UID: "742a103a-06e2-4d52-8c04-54681052838d"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 08:50:08 crc kubenswrapper[4786]: I1209 08:50:08.235437 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/742a103a-06e2-4d52-8c04-54681052838d-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "742a103a-06e2-4d52-8c04-54681052838d" (UID: "742a103a-06e2-4d52-8c04-54681052838d"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 08:50:08 crc kubenswrapper[4786]: I1209 08:50:08.238922 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/742a103a-06e2-4d52-8c04-54681052838d-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "742a103a-06e2-4d52-8c04-54681052838d" (UID: "742a103a-06e2-4d52-8c04-54681052838d"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 08:50:08 crc kubenswrapper[4786]: I1209 08:50:08.239130 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/742a103a-06e2-4d52-8c04-54681052838d-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "742a103a-06e2-4d52-8c04-54681052838d" (UID: "742a103a-06e2-4d52-8c04-54681052838d"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 08:50:08 crc kubenswrapper[4786]: I1209 08:50:08.239720 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/742a103a-06e2-4d52-8c04-54681052838d-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "742a103a-06e2-4d52-8c04-54681052838d" (UID: "742a103a-06e2-4d52-8c04-54681052838d"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 08:50:08 crc kubenswrapper[4786]: I1209 08:50:08.247061 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/742a103a-06e2-4d52-8c04-54681052838d-kube-api-access-8775t" (OuterVolumeSpecName: "kube-api-access-8775t") pod "742a103a-06e2-4d52-8c04-54681052838d" (UID: "742a103a-06e2-4d52-8c04-54681052838d"). InnerVolumeSpecName "kube-api-access-8775t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 08:50:08 crc kubenswrapper[4786]: I1209 08:50:08.251276 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/742a103a-06e2-4d52-8c04-54681052838d-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "742a103a-06e2-4d52-8c04-54681052838d" (UID: "742a103a-06e2-4d52-8c04-54681052838d"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 08:50:08 crc kubenswrapper[4786]: I1209 08:50:08.257561 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "742a103a-06e2-4d52-8c04-54681052838d" (UID: "742a103a-06e2-4d52-8c04-54681052838d"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 09 08:50:08 crc kubenswrapper[4786]: I1209 08:50:08.330566 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8775t\" (UniqueName: \"kubernetes.io/projected/742a103a-06e2-4d52-8c04-54681052838d-kube-api-access-8775t\") on node \"crc\" DevicePath \"\"" Dec 09 08:50:08 crc kubenswrapper[4786]: I1209 08:50:08.330992 4786 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/742a103a-06e2-4d52-8c04-54681052838d-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 09 08:50:08 crc kubenswrapper[4786]: I1209 08:50:08.331013 4786 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/742a103a-06e2-4d52-8c04-54681052838d-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 09 08:50:08 crc kubenswrapper[4786]: I1209 08:50:08.331036 4786 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/742a103a-06e2-4d52-8c04-54681052838d-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 09 08:50:08 crc kubenswrapper[4786]: I1209 08:50:08.331048 4786 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/742a103a-06e2-4d52-8c04-54681052838d-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 09 08:50:08 crc kubenswrapper[4786]: I1209 08:50:08.331060 4786 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/742a103a-06e2-4d52-8c04-54681052838d-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 09 08:50:08 crc kubenswrapper[4786]: I1209 08:50:08.331068 4786 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/742a103a-06e2-4d52-8c04-54681052838d-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 09 08:50:08 crc kubenswrapper[4786]: I1209 08:50:08.459041 4786 generic.go:334] "Generic (PLEG): container finished" podID="742a103a-06e2-4d52-8c04-54681052838d" containerID="7e38de0ddca53efda8c8dbbd23b604735d7dccbeb6f944fc836a30c3845a231a" exitCode=0 Dec 09 08:50:08 crc kubenswrapper[4786]: I1209 08:50:08.459572 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" Dec 09 08:50:08 crc kubenswrapper[4786]: I1209 08:50:08.459571 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" event={"ID":"742a103a-06e2-4d52-8c04-54681052838d","Type":"ContainerDied","Data":"7e38de0ddca53efda8c8dbbd23b604735d7dccbeb6f944fc836a30c3845a231a"} Dec 09 08:50:08 crc kubenswrapper[4786]: I1209 08:50:08.459714 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-q5r5p" event={"ID":"742a103a-06e2-4d52-8c04-54681052838d","Type":"ContainerDied","Data":"768ed804e8f5ea0d7af8a7baad140ef9eb217bc69c2a5d8535c981e76a99ba13"} Dec 09 08:50:08 crc kubenswrapper[4786]: I1209 08:50:08.459758 4786 scope.go:117] "RemoveContainer" containerID="7e38de0ddca53efda8c8dbbd23b604735d7dccbeb6f944fc836a30c3845a231a" Dec 09 08:50:08 crc kubenswrapper[4786]: I1209 08:50:08.486620 4786 scope.go:117] "RemoveContainer" containerID="7e38de0ddca53efda8c8dbbd23b604735d7dccbeb6f944fc836a30c3845a231a" Dec 09 08:50:08 crc kubenswrapper[4786]: E1209 08:50:08.487393 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e38de0ddca53efda8c8dbbd23b604735d7dccbeb6f944fc836a30c3845a231a\": container with ID starting with 7e38de0ddca53efda8c8dbbd23b604735d7dccbeb6f944fc836a30c3845a231a not found: ID does not exist" containerID="7e38de0ddca53efda8c8dbbd23b604735d7dccbeb6f944fc836a30c3845a231a" Dec 09 08:50:08 crc kubenswrapper[4786]: I1209 08:50:08.487451 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e38de0ddca53efda8c8dbbd23b604735d7dccbeb6f944fc836a30c3845a231a"} err="failed to get container status \"7e38de0ddca53efda8c8dbbd23b604735d7dccbeb6f944fc836a30c3845a231a\": rpc error: code = NotFound desc = could not find container 
\"7e38de0ddca53efda8c8dbbd23b604735d7dccbeb6f944fc836a30c3845a231a\": container with ID starting with 7e38de0ddca53efda8c8dbbd23b604735d7dccbeb6f944fc836a30c3845a231a not found: ID does not exist"
Dec 09 08:50:08 crc kubenswrapper[4786]: I1209 08:50:08.495702 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-q5r5p"]
Dec 09 08:50:08 crc kubenswrapper[4786]: I1209 08:50:08.502925 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-q5r5p"]
Dec 09 08:50:09 crc kubenswrapper[4786]: I1209 08:50:09.067685 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5lt4q"
Dec 09 08:50:09 crc kubenswrapper[4786]: I1209 08:50:09.067766 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5lt4q"
Dec 09 08:50:09 crc kubenswrapper[4786]: I1209 08:50:09.111591 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5lt4q"
Dec 09 08:50:09 crc kubenswrapper[4786]: I1209 08:50:09.193693 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="742a103a-06e2-4d52-8c04-54681052838d" path="/var/lib/kubelet/pods/742a103a-06e2-4d52-8c04-54681052838d/volumes"
Dec 09 08:50:09 crc kubenswrapper[4786]: I1209 08:50:09.280264 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5f784d6689-qxpvh"]
Dec 09 08:50:09 crc kubenswrapper[4786]: I1209 08:50:09.280558 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5f784d6689-qxpvh" podUID="7a288121-94b5-4438-a0d0-c416914c161c" containerName="controller-manager" containerID="cri-o://98519755d0b4bd613f475bd8979b255db1f6a0689843253e17ecaf482c367b74" gracePeriod=30
Dec 09 08:50:10 crc kubenswrapper[4786]: I1209 08:50:10.081444 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-s4hdj"
Dec 09 08:50:10 crc kubenswrapper[4786]: I1209 08:50:10.081762 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-s4hdj"
Dec 09 08:50:11 crc kubenswrapper[4786]: I1209 08:50:11.132174 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-s4hdj" podUID="011c88f1-1d58-4ad3-8835-29020b9f4e8d" containerName="registry-server" probeResult="failure" output=<
Dec 09 08:50:11 crc kubenswrapper[4786]: timeout: failed to connect service ":50051" within 1s
Dec 09 08:50:11 crc kubenswrapper[4786]: >
Dec 09 08:50:11 crc kubenswrapper[4786]: I1209 08:50:11.479816 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-49cfg"
Dec 09 08:50:11 crc kubenswrapper[4786]: I1209 08:50:11.479861 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-49cfg"
Dec 09 08:50:11 crc kubenswrapper[4786]: I1209 08:50:11.481720 4786 generic.go:334] "Generic (PLEG): container finished" podID="7a288121-94b5-4438-a0d0-c416914c161c" containerID="98519755d0b4bd613f475bd8979b255db1f6a0689843253e17ecaf482c367b74" exitCode=0
Dec 09 08:50:11 crc kubenswrapper[4786]: I1209 08:50:11.481794 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f784d6689-qxpvh" event={"ID":"7a288121-94b5-4438-a0d0-c416914c161c","Type":"ContainerDied","Data":"98519755d0b4bd613f475bd8979b255db1f6a0689843253e17ecaf482c367b74"}
Dec 09 08:50:11 crc kubenswrapper[4786]: I1209 08:50:11.531642 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-49cfg"
Dec 09 08:50:11 crc kubenswrapper[4786]: I1209 08:50:11.595991 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5f784d6689-qxpvh"
Dec 09 08:50:11 crc kubenswrapper[4786]: I1209 08:50:11.631958 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-75ff58fc4c-j62nb"]
Dec 09 08:50:11 crc kubenswrapper[4786]: E1209 08:50:11.632447 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a288121-94b5-4438-a0d0-c416914c161c" containerName="controller-manager"
Dec 09 08:50:11 crc kubenswrapper[4786]: I1209 08:50:11.632477 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a288121-94b5-4438-a0d0-c416914c161c" containerName="controller-manager"
Dec 09 08:50:11 crc kubenswrapper[4786]: E1209 08:50:11.632499 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="742a103a-06e2-4d52-8c04-54681052838d" containerName="registry"
Dec 09 08:50:11 crc kubenswrapper[4786]: I1209 08:50:11.632513 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="742a103a-06e2-4d52-8c04-54681052838d" containerName="registry"
Dec 09 08:50:11 crc kubenswrapper[4786]: I1209 08:50:11.632665 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="742a103a-06e2-4d52-8c04-54681052838d" containerName="registry"
Dec 09 08:50:11 crc kubenswrapper[4786]: I1209 08:50:11.632680 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a288121-94b5-4438-a0d0-c416914c161c" containerName="controller-manager"
Dec 09 08:50:11 crc kubenswrapper[4786]: I1209 08:50:11.633297 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-75ff58fc4c-j62nb"
Dec 09 08:50:11 crc kubenswrapper[4786]: I1209 08:50:11.642102 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-75ff58fc4c-j62nb"]
Dec 09 08:50:11 crc kubenswrapper[4786]: I1209 08:50:11.778919 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a288121-94b5-4438-a0d0-c416914c161c-serving-cert\") pod \"7a288121-94b5-4438-a0d0-c416914c161c\" (UID: \"7a288121-94b5-4438-a0d0-c416914c161c\") "
Dec 09 08:50:11 crc kubenswrapper[4786]: I1209 08:50:11.779077 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a288121-94b5-4438-a0d0-c416914c161c-config\") pod \"7a288121-94b5-4438-a0d0-c416914c161c\" (UID: \"7a288121-94b5-4438-a0d0-c416914c161c\") "
Dec 09 08:50:11 crc kubenswrapper[4786]: I1209 08:50:11.779162 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqrn4\" (UniqueName: \"kubernetes.io/projected/7a288121-94b5-4438-a0d0-c416914c161c-kube-api-access-dqrn4\") pod \"7a288121-94b5-4438-a0d0-c416914c161c\" (UID: \"7a288121-94b5-4438-a0d0-c416914c161c\") "
Dec 09 08:50:11 crc kubenswrapper[4786]: I1209 08:50:11.779241 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a288121-94b5-4438-a0d0-c416914c161c-client-ca\") pod \"7a288121-94b5-4438-a0d0-c416914c161c\" (UID: \"7a288121-94b5-4438-a0d0-c416914c161c\") "
Dec 09 08:50:11 crc kubenswrapper[4786]: I1209 08:50:11.779295 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a288121-94b5-4438-a0d0-c416914c161c-proxy-ca-bundles\") pod \"7a288121-94b5-4438-a0d0-c416914c161c\" (UID: \"7a288121-94b5-4438-a0d0-c416914c161c\") "
Dec 09 08:50:11 crc kubenswrapper[4786]: I1209 08:50:11.779533 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2896c6a-b0cb-4e9e-bd9b-fbc7c28b5905-config\") pod \"controller-manager-75ff58fc4c-j62nb\" (UID: \"d2896c6a-b0cb-4e9e-bd9b-fbc7c28b5905\") " pod="openshift-controller-manager/controller-manager-75ff58fc4c-j62nb"
Dec 09 08:50:11 crc kubenswrapper[4786]: I1209 08:50:11.779615 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d2896c6a-b0cb-4e9e-bd9b-fbc7c28b5905-client-ca\") pod \"controller-manager-75ff58fc4c-j62nb\" (UID: \"d2896c6a-b0cb-4e9e-bd9b-fbc7c28b5905\") " pod="openshift-controller-manager/controller-manager-75ff58fc4c-j62nb"
Dec 09 08:50:11 crc kubenswrapper[4786]: I1209 08:50:11.779710 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d2896c6a-b0cb-4e9e-bd9b-fbc7c28b5905-proxy-ca-bundles\") pod \"controller-manager-75ff58fc4c-j62nb\" (UID: \"d2896c6a-b0cb-4e9e-bd9b-fbc7c28b5905\") " pod="openshift-controller-manager/controller-manager-75ff58fc4c-j62nb"
Dec 09 08:50:11 crc kubenswrapper[4786]: I1209 08:50:11.779745 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9ddw\" (UniqueName: \"kubernetes.io/projected/d2896c6a-b0cb-4e9e-bd9b-fbc7c28b5905-kube-api-access-f9ddw\") pod \"controller-manager-75ff58fc4c-j62nb\" (UID: \"d2896c6a-b0cb-4e9e-bd9b-fbc7c28b5905\") " pod="openshift-controller-manager/controller-manager-75ff58fc4c-j62nb"
Dec 09 08:50:11 crc kubenswrapper[4786]: I1209 08:50:11.779784 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2896c6a-b0cb-4e9e-bd9b-fbc7c28b5905-serving-cert\") pod \"controller-manager-75ff58fc4c-j62nb\" (UID: \"d2896c6a-b0cb-4e9e-bd9b-fbc7c28b5905\") " pod="openshift-controller-manager/controller-manager-75ff58fc4c-j62nb"
Dec 09 08:50:11 crc kubenswrapper[4786]: I1209 08:50:11.780315 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a288121-94b5-4438-a0d0-c416914c161c-client-ca" (OuterVolumeSpecName: "client-ca") pod "7a288121-94b5-4438-a0d0-c416914c161c" (UID: "7a288121-94b5-4438-a0d0-c416914c161c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 08:50:11 crc kubenswrapper[4786]: I1209 08:50:11.780387 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a288121-94b5-4438-a0d0-c416914c161c-config" (OuterVolumeSpecName: "config") pod "7a288121-94b5-4438-a0d0-c416914c161c" (UID: "7a288121-94b5-4438-a0d0-c416914c161c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 08:50:11 crc kubenswrapper[4786]: I1209 08:50:11.780476 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a288121-94b5-4438-a0d0-c416914c161c-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7a288121-94b5-4438-a0d0-c416914c161c" (UID: "7a288121-94b5-4438-a0d0-c416914c161c"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 08:50:11 crc kubenswrapper[4786]: I1209 08:50:11.785609 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a288121-94b5-4438-a0d0-c416914c161c-kube-api-access-dqrn4" (OuterVolumeSpecName: "kube-api-access-dqrn4") pod "7a288121-94b5-4438-a0d0-c416914c161c" (UID: "7a288121-94b5-4438-a0d0-c416914c161c"). InnerVolumeSpecName "kube-api-access-dqrn4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 08:50:11 crc kubenswrapper[4786]: I1209 08:50:11.787535 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a288121-94b5-4438-a0d0-c416914c161c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7a288121-94b5-4438-a0d0-c416914c161c" (UID: "7a288121-94b5-4438-a0d0-c416914c161c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 08:50:11 crc kubenswrapper[4786]: I1209 08:50:11.881697 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2896c6a-b0cb-4e9e-bd9b-fbc7c28b5905-config\") pod \"controller-manager-75ff58fc4c-j62nb\" (UID: \"d2896c6a-b0cb-4e9e-bd9b-fbc7c28b5905\") " pod="openshift-controller-manager/controller-manager-75ff58fc4c-j62nb"
Dec 09 08:50:11 crc kubenswrapper[4786]: I1209 08:50:11.881791 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d2896c6a-b0cb-4e9e-bd9b-fbc7c28b5905-client-ca\") pod \"controller-manager-75ff58fc4c-j62nb\" (UID: \"d2896c6a-b0cb-4e9e-bd9b-fbc7c28b5905\") " pod="openshift-controller-manager/controller-manager-75ff58fc4c-j62nb"
Dec 09 08:50:11 crc kubenswrapper[4786]: I1209 08:50:11.881853 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d2896c6a-b0cb-4e9e-bd9b-fbc7c28b5905-proxy-ca-bundles\") pod \"controller-manager-75ff58fc4c-j62nb\" (UID: \"d2896c6a-b0cb-4e9e-bd9b-fbc7c28b5905\") " pod="openshift-controller-manager/controller-manager-75ff58fc4c-j62nb"
Dec 09 08:50:11 crc kubenswrapper[4786]: I1209 08:50:11.881892 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9ddw\" (UniqueName: \"kubernetes.io/projected/d2896c6a-b0cb-4e9e-bd9b-fbc7c28b5905-kube-api-access-f9ddw\") pod \"controller-manager-75ff58fc4c-j62nb\" (UID: \"d2896c6a-b0cb-4e9e-bd9b-fbc7c28b5905\") " pod="openshift-controller-manager/controller-manager-75ff58fc4c-j62nb"
Dec 09 08:50:11 crc kubenswrapper[4786]: I1209 08:50:11.881931 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2896c6a-b0cb-4e9e-bd9b-fbc7c28b5905-serving-cert\") pod \"controller-manager-75ff58fc4c-j62nb\" (UID: \"d2896c6a-b0cb-4e9e-bd9b-fbc7c28b5905\") " pod="openshift-controller-manager/controller-manager-75ff58fc4c-j62nb"
Dec 09 08:50:11 crc kubenswrapper[4786]: I1209 08:50:11.881982 4786 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7a288121-94b5-4438-a0d0-c416914c161c-client-ca\") on node \"crc\" DevicePath \"\""
Dec 09 08:50:11 crc kubenswrapper[4786]: I1209 08:50:11.882003 4786 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7a288121-94b5-4438-a0d0-c416914c161c-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 09 08:50:11 crc kubenswrapper[4786]: I1209 08:50:11.882016 4786 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7a288121-94b5-4438-a0d0-c416914c161c-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 09 08:50:11 crc kubenswrapper[4786]: I1209 08:50:11.882026 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a288121-94b5-4438-a0d0-c416914c161c-config\") on node \"crc\" DevicePath \"\""
Dec 09 08:50:11 crc kubenswrapper[4786]: I1209 08:50:11.882036 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqrn4\" (UniqueName: \"kubernetes.io/projected/7a288121-94b5-4438-a0d0-c416914c161c-kube-api-access-dqrn4\") on node \"crc\" DevicePath \"\""
Dec 09 08:50:11 crc kubenswrapper[4786]: I1209 08:50:11.884306 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d2896c6a-b0cb-4e9e-bd9b-fbc7c28b5905-proxy-ca-bundles\") pod \"controller-manager-75ff58fc4c-j62nb\" (UID: \"d2896c6a-b0cb-4e9e-bd9b-fbc7c28b5905\") " pod="openshift-controller-manager/controller-manager-75ff58fc4c-j62nb"
Dec 09 08:50:11 crc kubenswrapper[4786]: I1209 08:50:11.884366 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d2896c6a-b0cb-4e9e-bd9b-fbc7c28b5905-client-ca\") pod \"controller-manager-75ff58fc4c-j62nb\" (UID: \"d2896c6a-b0cb-4e9e-bd9b-fbc7c28b5905\") " pod="openshift-controller-manager/controller-manager-75ff58fc4c-j62nb"
Dec 09 08:50:11 crc kubenswrapper[4786]: I1209 08:50:11.884957 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2896c6a-b0cb-4e9e-bd9b-fbc7c28b5905-config\") pod \"controller-manager-75ff58fc4c-j62nb\" (UID: \"d2896c6a-b0cb-4e9e-bd9b-fbc7c28b5905\") " pod="openshift-controller-manager/controller-manager-75ff58fc4c-j62nb"
Dec 09 08:50:11 crc kubenswrapper[4786]: I1209 08:50:11.886015 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d2896c6a-b0cb-4e9e-bd9b-fbc7c28b5905-serving-cert\") pod \"controller-manager-75ff58fc4c-j62nb\" (UID: \"d2896c6a-b0cb-4e9e-bd9b-fbc7c28b5905\") " pod="openshift-controller-manager/controller-manager-75ff58fc4c-j62nb"
Dec 09 08:50:11 crc kubenswrapper[4786]: I1209 08:50:11.901955 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9ddw\" (UniqueName: \"kubernetes.io/projected/d2896c6a-b0cb-4e9e-bd9b-fbc7c28b5905-kube-api-access-f9ddw\") pod \"controller-manager-75ff58fc4c-j62nb\" (UID: \"d2896c6a-b0cb-4e9e-bd9b-fbc7c28b5905\") " pod="openshift-controller-manager/controller-manager-75ff58fc4c-j62nb"
Dec 09 08:50:11 crc kubenswrapper[4786]: I1209 08:50:11.951093 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-75ff58fc4c-j62nb"
Dec 09 08:50:12 crc kubenswrapper[4786]: I1209 08:50:12.389335 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-75ff58fc4c-j62nb"]
Dec 09 08:50:12 crc kubenswrapper[4786]: W1209 08:50:12.398785 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2896c6a_b0cb_4e9e_bd9b_fbc7c28b5905.slice/crio-911e7464021394e605f1732607a1c7ddd97d78c2d4e6a032f16520ccb19cf469 WatchSource:0}: Error finding container 911e7464021394e605f1732607a1c7ddd97d78c2d4e6a032f16520ccb19cf469: Status 404 returned error can't find the container with id 911e7464021394e605f1732607a1c7ddd97d78c2d4e6a032f16520ccb19cf469
Dec 09 08:50:12 crc kubenswrapper[4786]: I1209 08:50:12.497110 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5f784d6689-qxpvh" event={"ID":"7a288121-94b5-4438-a0d0-c416914c161c","Type":"ContainerDied","Data":"e568cc0d109a6bcc750dd081bcd0a6e5b0389d6c59b6c364c12f48d89afbe494"}
Dec 09 08:50:12 crc kubenswrapper[4786]: I1209 08:50:12.497164 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5f784d6689-qxpvh"
Dec 09 08:50:12 crc kubenswrapper[4786]: I1209 08:50:12.497175 4786 scope.go:117] "RemoveContainer" containerID="98519755d0b4bd613f475bd8979b255db1f6a0689843253e17ecaf482c367b74"
Dec 09 08:50:12 crc kubenswrapper[4786]: I1209 08:50:12.502581 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75ff58fc4c-j62nb" event={"ID":"d2896c6a-b0cb-4e9e-bd9b-fbc7c28b5905","Type":"ContainerStarted","Data":"911e7464021394e605f1732607a1c7ddd97d78c2d4e6a032f16520ccb19cf469"}
Dec 09 08:50:12 crc kubenswrapper[4786]: I1209 08:50:12.531574 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5f784d6689-qxpvh"]
Dec 09 08:50:12 crc kubenswrapper[4786]: I1209 08:50:12.536206 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5f784d6689-qxpvh"]
Dec 09 08:50:12 crc kubenswrapper[4786]: I1209 08:50:12.550124 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-49cfg"
Dec 09 08:50:13 crc kubenswrapper[4786]: I1209 08:50:13.198169 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a288121-94b5-4438-a0d0-c416914c161c" path="/var/lib/kubelet/pods/7a288121-94b5-4438-a0d0-c416914c161c/volumes"
Dec 09 08:50:13 crc kubenswrapper[4786]: I1209 08:50:13.511966 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-75ff58fc4c-j62nb" event={"ID":"d2896c6a-b0cb-4e9e-bd9b-fbc7c28b5905","Type":"ContainerStarted","Data":"d68e78293f31b8fdb607e4952556bbb1b0a58bec6d26b3622d7f5668d13ec652"}
Dec 09 08:50:13 crc kubenswrapper[4786]: I1209 08:50:13.512523 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-75ff58fc4c-j62nb"
Dec 09 08:50:13 crc kubenswrapper[4786]: I1209 08:50:13.521032 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-75ff58fc4c-j62nb"
Dec 09 08:50:13 crc kubenswrapper[4786]: I1209 08:50:13.545088 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-75ff58fc4c-j62nb" podStartSLOduration=4.545066064 podStartE2EDuration="4.545066064s" podCreationTimestamp="2025-12-09 08:50:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:50:13.538731933 +0000 UTC m=+379.422353169" watchObservedRunningTime="2025-12-09 08:50:13.545066064 +0000 UTC m=+379.428687290"
Dec 09 08:50:17 crc kubenswrapper[4786]: I1209 08:50:17.754341 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ftk6v"
Dec 09 08:50:19 crc kubenswrapper[4786]: I1209 08:50:19.126395 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5lt4q"
Dec 09 08:50:20 crc kubenswrapper[4786]: I1209 08:50:20.125839 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-s4hdj"
Dec 09 08:50:20 crc kubenswrapper[4786]: I1209 08:50:20.194104 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-s4hdj"
Dec 09 08:50:24 crc kubenswrapper[4786]: I1209 08:50:24.989186 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 08:50:24 crc kubenswrapper[4786]: I1209 08:50:24.989869 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 08:50:54 crc kubenswrapper[4786]: I1209 08:50:54.988822 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 08:50:54 crc kubenswrapper[4786]: I1209 08:50:54.989387 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 08:50:54 crc kubenswrapper[4786]: I1209 08:50:54.989495 4786 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-86k5n"
Dec 09 08:50:54 crc kubenswrapper[4786]: I1209 08:50:54.990208 4786 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"29646cad7eaa0024c8d0fa3ab2acef12d4e3de7d8dcb6aee6b00df4225f53d60"} pod="openshift-machine-config-operator/machine-config-daemon-86k5n" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 09 08:50:54 crc kubenswrapper[4786]: I1209 08:50:54.990320 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" containerID="cri-o://29646cad7eaa0024c8d0fa3ab2acef12d4e3de7d8dcb6aee6b00df4225f53d60" gracePeriod=600
Dec 09 08:50:55 crc kubenswrapper[4786]: I1209 08:50:55.801608 4786 generic.go:334] "Generic (PLEG): container finished" podID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerID="29646cad7eaa0024c8d0fa3ab2acef12d4e3de7d8dcb6aee6b00df4225f53d60" exitCode=0
Dec 09 08:50:55 crc kubenswrapper[4786]: I1209 08:50:55.801717 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" event={"ID":"60c502d4-7f9e-4d39-a197-fa70dc4a56d1","Type":"ContainerDied","Data":"29646cad7eaa0024c8d0fa3ab2acef12d4e3de7d8dcb6aee6b00df4225f53d60"}
Dec 09 08:50:55 crc kubenswrapper[4786]: I1209 08:50:55.802265 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" event={"ID":"60c502d4-7f9e-4d39-a197-fa70dc4a56d1","Type":"ContainerStarted","Data":"691b25d832c85f5d738ce211194caa8f083f24ddcca29e3361b0baf6122f9bf9"}
Dec 09 08:50:55 crc kubenswrapper[4786]: I1209 08:50:55.802309 4786 scope.go:117] "RemoveContainer" containerID="de20e796c231cbc6d525978434351e46d932f7d99236a97c078b8147ce5dabd5"
Dec 09 08:53:24 crc kubenswrapper[4786]: I1209 08:53:24.989370 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 08:53:24 crc kubenswrapper[4786]: I1209 08:53:24.990381 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 08:53:54 crc kubenswrapper[4786]: I1209 08:53:54.989119 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 08:53:54 crc kubenswrapper[4786]: I1209 08:53:54.989736 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 08:54:24 crc kubenswrapper[4786]: I1209 08:54:24.989328 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 08:54:24 crc kubenswrapper[4786]: I1209 08:54:24.990375 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 08:54:24 crc kubenswrapper[4786]: I1209 08:54:24.990547 4786 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-86k5n"
Dec 09 08:54:24 crc kubenswrapper[4786]: I1209 08:54:24.991482 4786 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"691b25d832c85f5d738ce211194caa8f083f24ddcca29e3361b0baf6122f9bf9"} pod="openshift-machine-config-operator/machine-config-daemon-86k5n" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 09 08:54:24 crc kubenswrapper[4786]: I1209 08:54:24.991588 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" containerID="cri-o://691b25d832c85f5d738ce211194caa8f083f24ddcca29e3361b0baf6122f9bf9" gracePeriod=600
Dec 09 08:54:25 crc kubenswrapper[4786]: I1209 08:54:25.272637 4786 generic.go:334] "Generic (PLEG): container finished" podID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerID="691b25d832c85f5d738ce211194caa8f083f24ddcca29e3361b0baf6122f9bf9" exitCode=0
Dec 09 08:54:25 crc kubenswrapper[4786]: I1209 08:54:25.272712 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" event={"ID":"60c502d4-7f9e-4d39-a197-fa70dc4a56d1","Type":"ContainerDied","Data":"691b25d832c85f5d738ce211194caa8f083f24ddcca29e3361b0baf6122f9bf9"}
Dec 09 08:54:25 crc kubenswrapper[4786]: I1209 08:54:25.273203 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" event={"ID":"60c502d4-7f9e-4d39-a197-fa70dc4a56d1","Type":"ContainerStarted","Data":"c1a37e915d2cb26d1e193f254aff1e4395db9084078cc5d3a08303be12c3b59b"}
Dec 09 08:54:25 crc kubenswrapper[4786]: I1209 08:54:25.273257 4786 scope.go:117] "RemoveContainer" containerID="29646cad7eaa0024c8d0fa3ab2acef12d4e3de7d8dcb6aee6b00df4225f53d60"
Dec 09 08:55:32 crc kubenswrapper[4786]: I1209 08:55:32.343988 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-g22j7"]
Dec 09 08:55:32 crc kubenswrapper[4786]: I1209 08:55:32.345428 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-g22j7"
Dec 09 08:55:32 crc kubenswrapper[4786]: I1209 08:55:32.348013 4786 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-n2t2q"
Dec 09 08:55:32 crc kubenswrapper[4786]: I1209 08:55:32.349780 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Dec 09 08:55:32 crc kubenswrapper[4786]: I1209 08:55:32.350642 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Dec 09 08:55:32 crc kubenswrapper[4786]: I1209 08:55:32.352960 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-mhk9l"]
Dec 09 08:55:32 crc kubenswrapper[4786]: I1209 08:55:32.353900 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-mhk9l"
Dec 09 08:55:32 crc kubenswrapper[4786]: I1209 08:55:32.355651 4786 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-xtzcf"
Dec 09 08:55:32 crc kubenswrapper[4786]: I1209 08:55:32.365693 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-g22j7"]
Dec 09 08:55:32 crc kubenswrapper[4786]: I1209 08:55:32.371160 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-57g7f"]
Dec 09 08:55:32 crc kubenswrapper[4786]: I1209 08:55:32.371972 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-57g7f"
Dec 09 08:55:32 crc kubenswrapper[4786]: I1209 08:55:32.375725 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-mhk9l"]
Dec 09 08:55:32 crc kubenswrapper[4786]: I1209 08:55:32.377225 4786 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-wt6tn"
Dec 09 08:55:32 crc kubenswrapper[4786]: I1209 08:55:32.386583 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-57g7f"]
Dec 09 08:55:32 crc kubenswrapper[4786]: I1209 08:55:32.519350 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9grpm\" (UniqueName: \"kubernetes.io/projected/e590bb55-a521-4368-b048-ebc34e6dc46c-kube-api-access-9grpm\") pod \"cert-manager-5b446d88c5-mhk9l\" (UID: \"e590bb55-a521-4368-b048-ebc34e6dc46c\") " pod="cert-manager/cert-manager-5b446d88c5-mhk9l"
Dec 09 08:55:32 crc kubenswrapper[4786]: I1209 08:55:32.519739 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6zvq\" (UniqueName: \"kubernetes.io/projected/5b2baefe-3aa8-48ec-b66a-173a0eb33c22-kube-api-access-s6zvq\") pod \"cert-manager-webhook-5655c58dd6-57g7f\" (UID: \"5b2baefe-3aa8-48ec-b66a-173a0eb33c22\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-57g7f"
Dec 09 08:55:32 crc kubenswrapper[4786]: I1209 08:55:32.519867 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgt7x\" (UniqueName: \"kubernetes.io/projected/4a583e5a-0f3b-496b-89d5-fe79f697b730-kube-api-access-fgt7x\") pod \"cert-manager-cainjector-7f985d654d-g22j7\" (UID: \"4a583e5a-0f3b-496b-89d5-fe79f697b730\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-g22j7"
Dec 09 08:55:32 crc kubenswrapper[4786]: I1209 08:55:32.621290 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fgt7x\" (UniqueName: \"kubernetes.io/projected/4a583e5a-0f3b-496b-89d5-fe79f697b730-kube-api-access-fgt7x\") pod \"cert-manager-cainjector-7f985d654d-g22j7\" (UID: \"4a583e5a-0f3b-496b-89d5-fe79f697b730\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-g22j7"
Dec 09 08:55:32 crc kubenswrapper[4786]: I1209 08:55:32.621395 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9grpm\" (UniqueName: \"kubernetes.io/projected/e590bb55-a521-4368-b048-ebc34e6dc46c-kube-api-access-9grpm\") pod \"cert-manager-5b446d88c5-mhk9l\" (UID: \"e590bb55-a521-4368-b048-ebc34e6dc46c\") " pod="cert-manager/cert-manager-5b446d88c5-mhk9l"
Dec 09 08:55:32 crc kubenswrapper[4786]: I1209 08:55:32.621471 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6zvq\" (UniqueName: \"kubernetes.io/projected/5b2baefe-3aa8-48ec-b66a-173a0eb33c22-kube-api-access-s6zvq\") pod \"cert-manager-webhook-5655c58dd6-57g7f\" (UID: \"5b2baefe-3aa8-48ec-b66a-173a0eb33c22\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-57g7f"
Dec 09 08:55:32 crc kubenswrapper[4786]: I1209 08:55:32.639580 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6zvq\" (UniqueName: \"kubernetes.io/projected/5b2baefe-3aa8-48ec-b66a-173a0eb33c22-kube-api-access-s6zvq\") pod \"cert-manager-webhook-5655c58dd6-57g7f\" (UID: \"5b2baefe-3aa8-48ec-b66a-173a0eb33c22\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-57g7f"
Dec 09 08:55:32 crc kubenswrapper[4786]: I1209 08:55:32.640211 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9grpm\" (UniqueName: \"kubernetes.io/projected/e590bb55-a521-4368-b048-ebc34e6dc46c-kube-api-access-9grpm\") pod \"cert-manager-5b446d88c5-mhk9l\" (UID: \"e590bb55-a521-4368-b048-ebc34e6dc46c\") " pod="cert-manager/cert-manager-5b446d88c5-mhk9l"
Dec 09 08:55:32 crc kubenswrapper[4786]: I1209 08:55:32.640864 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fgt7x\" (UniqueName: \"kubernetes.io/projected/4a583e5a-0f3b-496b-89d5-fe79f697b730-kube-api-access-fgt7x\") pod \"cert-manager-cainjector-7f985d654d-g22j7\" (UID: \"4a583e5a-0f3b-496b-89d5-fe79f697b730\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-g22j7"
Dec 09 08:55:32 crc kubenswrapper[4786]: I1209 08:55:32.670166 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-g22j7"
Dec 09 08:55:32 crc kubenswrapper[4786]: I1209 08:55:32.684196 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-mhk9l"
Dec 09 08:55:32 crc kubenswrapper[4786]: I1209 08:55:32.693103 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-57g7f"
Dec 09 08:55:32 crc kubenswrapper[4786]: I1209 08:55:32.961780 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-mhk9l"]
Dec 09 08:55:32 crc kubenswrapper[4786]: I1209 08:55:32.974670 4786 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 09 08:55:32 crc kubenswrapper[4786]: I1209 08:55:32.995422 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-57g7f"]
Dec 09 08:55:33 crc kubenswrapper[4786]: W1209 08:55:33.000008 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b2baefe_3aa8_48ec_b66a_173a0eb33c22.slice/crio-fe1f970634854e5b9a92c72b8df383a9130d9a12832a7e96ad906fc83a1e4707 WatchSource:0}: Error finding container fe1f970634854e5b9a92c72b8df383a9130d9a12832a7e96ad906fc83a1e4707: Status 404 returned 
error can't find the container with id fe1f970634854e5b9a92c72b8df383a9130d9a12832a7e96ad906fc83a1e4707 Dec 09 08:55:33 crc kubenswrapper[4786]: I1209 08:55:33.097488 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-g22j7"] Dec 09 08:55:33 crc kubenswrapper[4786]: W1209 08:55:33.100115 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a583e5a_0f3b_496b_89d5_fe79f697b730.slice/crio-534a18673170233ba123a2458d432954a0a5f86e873763b4adcdb41de0e41d99 WatchSource:0}: Error finding container 534a18673170233ba123a2458d432954a0a5f86e873763b4adcdb41de0e41d99: Status 404 returned error can't find the container with id 534a18673170233ba123a2458d432954a0a5f86e873763b4adcdb41de0e41d99 Dec 09 08:55:33 crc kubenswrapper[4786]: I1209 08:55:33.699459 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-mhk9l" event={"ID":"e590bb55-a521-4368-b048-ebc34e6dc46c","Type":"ContainerStarted","Data":"d7e0dc703900b02d4cea226abd95c409e369eddc5f22ba66415f4206871195b5"} Dec 09 08:55:33 crc kubenswrapper[4786]: I1209 08:55:33.701308 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-g22j7" event={"ID":"4a583e5a-0f3b-496b-89d5-fe79f697b730","Type":"ContainerStarted","Data":"534a18673170233ba123a2458d432954a0a5f86e873763b4adcdb41de0e41d99"} Dec 09 08:55:33 crc kubenswrapper[4786]: I1209 08:55:33.701990 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-57g7f" event={"ID":"5b2baefe-3aa8-48ec-b66a-173a0eb33c22","Type":"ContainerStarted","Data":"fe1f970634854e5b9a92c72b8df383a9130d9a12832a7e96ad906fc83a1e4707"} Dec 09 08:55:37 crc kubenswrapper[4786]: I1209 08:55:37.727566 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-mhk9l" 
event={"ID":"e590bb55-a521-4368-b048-ebc34e6dc46c","Type":"ContainerStarted","Data":"47ae806419ae3a0f57787a36665952ff8c15b0bb37bb5f7ef3b2dd5a43a414e9"} Dec 09 08:55:37 crc kubenswrapper[4786]: I1209 08:55:37.729095 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-g22j7" event={"ID":"4a583e5a-0f3b-496b-89d5-fe79f697b730","Type":"ContainerStarted","Data":"a3a8c256f1c5c0acd28f6cfdd001edd4dfa2154be0d70c7200394afcbaec40be"} Dec 09 08:55:37 crc kubenswrapper[4786]: I1209 08:55:37.730984 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-57g7f" event={"ID":"5b2baefe-3aa8-48ec-b66a-173a0eb33c22","Type":"ContainerStarted","Data":"8827cfaa2756b8216524c53e8dc808ab2de9b8c4986728ad19181c9ea17901b3"} Dec 09 08:55:37 crc kubenswrapper[4786]: I1209 08:55:37.731271 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-57g7f" Dec 09 08:55:37 crc kubenswrapper[4786]: I1209 08:55:37.747696 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-mhk9l" podStartSLOduration=1.911629963 podStartE2EDuration="5.747677109s" podCreationTimestamp="2025-12-09 08:55:32 +0000 UTC" firstStartedPulling="2025-12-09 08:55:32.974343372 +0000 UTC m=+698.857964598" lastFinishedPulling="2025-12-09 08:55:36.810390518 +0000 UTC m=+702.694011744" observedRunningTime="2025-12-09 08:55:37.740344117 +0000 UTC m=+703.623965333" watchObservedRunningTime="2025-12-09 08:55:37.747677109 +0000 UTC m=+703.631298335" Dec 09 08:55:37 crc kubenswrapper[4786]: I1209 08:55:37.756406 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-g22j7" podStartSLOduration=2.04595641 podStartE2EDuration="5.756384395s" podCreationTimestamp="2025-12-09 08:55:32 +0000 UTC" firstStartedPulling="2025-12-09 08:55:33.102688701 +0000 UTC 
m=+698.986309927" lastFinishedPulling="2025-12-09 08:55:36.813116656 +0000 UTC m=+702.696737912" observedRunningTime="2025-12-09 08:55:37.753196497 +0000 UTC m=+703.636817733" watchObservedRunningTime="2025-12-09 08:55:37.756384395 +0000 UTC m=+703.640005621" Dec 09 08:55:37 crc kubenswrapper[4786]: I1209 08:55:37.772275 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-57g7f" podStartSLOduration=1.9546275309999999 podStartE2EDuration="5.77224935s" podCreationTimestamp="2025-12-09 08:55:32 +0000 UTC" firstStartedPulling="2025-12-09 08:55:33.002750707 +0000 UTC m=+698.886371933" lastFinishedPulling="2025-12-09 08:55:36.820372496 +0000 UTC m=+702.703993752" observedRunningTime="2025-12-09 08:55:37.771256855 +0000 UTC m=+703.654878101" watchObservedRunningTime="2025-12-09 08:55:37.77224935 +0000 UTC m=+703.655870576" Dec 09 08:55:42 crc kubenswrapper[4786]: I1209 08:55:42.696328 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-57g7f" Dec 09 08:55:42 crc kubenswrapper[4786]: I1209 08:55:42.956168 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-7sr4q"] Dec 09 08:55:42 crc kubenswrapper[4786]: I1209 08:55:42.956742 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" podUID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" containerName="ovn-controller" containerID="cri-o://3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e" gracePeriod=30 Dec 09 08:55:42 crc kubenswrapper[4786]: I1209 08:55:42.956804 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" podUID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" containerName="nbdb" containerID="cri-o://093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659" gracePeriod=30 Dec 09 
08:55:42 crc kubenswrapper[4786]: I1209 08:55:42.956835 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" podUID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf" gracePeriod=30 Dec 09 08:55:42 crc kubenswrapper[4786]: I1209 08:55:42.956906 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" podUID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" containerName="kube-rbac-proxy-node" containerID="cri-o://254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2" gracePeriod=30 Dec 09 08:55:42 crc kubenswrapper[4786]: I1209 08:55:42.956959 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" podUID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" containerName="sbdb" containerID="cri-o://9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c" gracePeriod=30 Dec 09 08:55:42 crc kubenswrapper[4786]: I1209 08:55:42.957014 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" podUID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" containerName="northd" containerID="cri-o://49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30" gracePeriod=30 Dec 09 08:55:42 crc kubenswrapper[4786]: I1209 08:55:42.957092 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" podUID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" containerName="ovn-acl-logging" containerID="cri-o://e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230" gracePeriod=30 Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.005653 4786 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" podUID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" containerName="ovnkube-controller" containerID="cri-o://0a11ae322f07d05d6ba984a2a381442d220da025e318455ee37079149bdf2795" gracePeriod=30 Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.511917 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7sr4q_c8ebe4be-af09-4f22-9dee-af5f7d34bccf/ovnkube-controller/3.log" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.515473 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7sr4q_c8ebe4be-af09-4f22-9dee-af5f7d34bccf/ovn-acl-logging/0.log" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.516365 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7sr4q_c8ebe4be-af09-4f22-9dee-af5f7d34bccf/ovn-controller/0.log" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.516877 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.577605 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-w2wgb"] Dec 09 08:55:43 crc kubenswrapper[4786]: E1209 08:55:43.577895 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" containerName="ovnkube-controller" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.577925 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" containerName="ovnkube-controller" Dec 09 08:55:43 crc kubenswrapper[4786]: E1209 08:55:43.577941 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" containerName="ovnkube-controller" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.577950 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" containerName="ovnkube-controller" Dec 09 08:55:43 crc kubenswrapper[4786]: E1209 08:55:43.577960 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" containerName="northd" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.577969 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" containerName="northd" Dec 09 08:55:43 crc kubenswrapper[4786]: E1209 08:55:43.577980 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" containerName="kubecfg-setup" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.577988 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" containerName="kubecfg-setup" Dec 09 08:55:43 crc kubenswrapper[4786]: E1209 08:55:43.578002 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" 
containerName="nbdb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.578011 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" containerName="nbdb" Dec 09 08:55:43 crc kubenswrapper[4786]: E1209 08:55:43.578020 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" containerName="kube-rbac-proxy-node" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.578028 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" containerName="kube-rbac-proxy-node" Dec 09 08:55:43 crc kubenswrapper[4786]: E1209 08:55:43.578039 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" containerName="sbdb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.578047 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" containerName="sbdb" Dec 09 08:55:43 crc kubenswrapper[4786]: E1209 08:55:43.578058 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" containerName="ovn-controller" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.578067 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" containerName="ovn-controller" Dec 09 08:55:43 crc kubenswrapper[4786]: E1209 08:55:43.578077 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" containerName="ovnkube-controller" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.578085 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" containerName="ovnkube-controller" Dec 09 08:55:43 crc kubenswrapper[4786]: E1209 08:55:43.578095 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" containerName="ovn-acl-logging" Dec 09 
08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.578103 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" containerName="ovn-acl-logging" Dec 09 08:55:43 crc kubenswrapper[4786]: E1209 08:55:43.578116 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" containerName="kube-rbac-proxy-ovn-metrics" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.578124 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" containerName="kube-rbac-proxy-ovn-metrics" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.578296 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" containerName="ovnkube-controller" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.578317 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" containerName="ovnkube-controller" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.578330 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" containerName="nbdb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.578347 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" containerName="ovn-controller" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.578362 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" containerName="sbdb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.578374 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" containerName="northd" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.578384 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" 
containerName="kube-rbac-proxy-ovn-metrics" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.578393 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" containerName="ovn-acl-logging" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.578404 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" containerName="kube-rbac-proxy-node" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.578416 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" containerName="ovnkube-controller" Dec 09 08:55:43 crc kubenswrapper[4786]: E1209 08:55:43.578588 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" containerName="ovnkube-controller" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.578599 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" containerName="ovnkube-controller" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.578719 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" containerName="ovnkube-controller" Dec 09 08:55:43 crc kubenswrapper[4786]: E1209 08:55:43.578851 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" containerName="ovnkube-controller" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.578861 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" containerName="ovnkube-controller" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.578981 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" containerName="ovnkube-controller" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.581021 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.589606 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-host-cni-bin\") pod \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.589739 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "c8ebe4be-af09-4f22-9dee-af5f7d34bccf" (UID: "c8ebe4be-af09-4f22-9dee-af5f7d34bccf"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.589851 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-run-systemd\") pod \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.589943 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-host-cni-netd\") pod \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.589969 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-node-log\") pod \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.590013 4786 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-ovn-node-metrics-cert\") pod \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.590045 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-log-socket\") pod \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.590074 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-host-slash\") pod \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.590354 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-host-kubelet\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.590396 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-var-lib-openvswitch\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.590414 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-host-cni-bin\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.590455 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-run-ovn\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.590521 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-run-systemd\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.590573 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-run-openvswitch\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.590821 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-systemd-units\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.590867 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-host-run-netns\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.590894 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-log-socket\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.590922 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-host-slash\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.590950 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-etc-openvswitch\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.590977 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-ovn-node-metrics-cert\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.591001 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" 
(UniqueName: \"kubernetes.io/host-path/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-node-log\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.591024 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ll4cv\" (UniqueName: \"kubernetes.io/projected/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-kube-api-access-ll4cv\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.591053 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-ovnkube-script-lib\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.591087 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-host-run-ovn-kubernetes\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.591119 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-host-cni-netd\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.591146 4786 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-ovnkube-config\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.591171 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.590646 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-node-log" (OuterVolumeSpecName: "node-log") pod "c8ebe4be-af09-4f22-9dee-af5f7d34bccf" (UID: "c8ebe4be-af09-4f22-9dee-af5f7d34bccf"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.590656 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-host-slash" (OuterVolumeSpecName: "host-slash") pod "c8ebe4be-af09-4f22-9dee-af5f7d34bccf" (UID: "c8ebe4be-af09-4f22-9dee-af5f7d34bccf"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.590682 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "c8ebe4be-af09-4f22-9dee-af5f7d34bccf" (UID: "c8ebe4be-af09-4f22-9dee-af5f7d34bccf"). 
InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.591364 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-env-overrides\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.590721 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-log-socket" (OuterVolumeSpecName: "log-socket") pod "c8ebe4be-af09-4f22-9dee-af5f7d34bccf" (UID: "c8ebe4be-af09-4f22-9dee-af5f7d34bccf"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.591567 4786 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-host-slash\") on node \"crc\" DevicePath \"\"" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.591601 4786 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.591614 4786 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.591623 4786 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-node-log\") on node \"crc\" DevicePath \"\"" Dec 09 08:55:43 crc 
kubenswrapper[4786]: I1209 08:55:43.600204 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "c8ebe4be-af09-4f22-9dee-af5f7d34bccf" (UID: "c8ebe4be-af09-4f22-9dee-af5f7d34bccf"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.607548 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "c8ebe4be-af09-4f22-9dee-af5f7d34bccf" (UID: "c8ebe4be-af09-4f22-9dee-af5f7d34bccf"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.692836 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-host-run-ovn-kubernetes\") pod \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.692918 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-host-var-lib-cni-networks-ovn-kubernetes\") pod \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.692945 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-etc-openvswitch\") pod \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\" (UID: 
\"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.692961 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-run-openvswitch\") pod \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.692992 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-ovnkube-script-lib\") pod \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.693010 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-ovnkube-config\") pod \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.692985 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "c8ebe4be-af09-4f22-9dee-af5f7d34bccf" (UID: "c8ebe4be-af09-4f22-9dee-af5f7d34bccf"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.693017 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "c8ebe4be-af09-4f22-9dee-af5f7d34bccf" (UID: "c8ebe4be-af09-4f22-9dee-af5f7d34bccf"). 
InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.693254 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "c8ebe4be-af09-4f22-9dee-af5f7d34bccf" (UID: "c8ebe4be-af09-4f22-9dee-af5f7d34bccf"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.693265 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "c8ebe4be-af09-4f22-9dee-af5f7d34bccf" (UID: "c8ebe4be-af09-4f22-9dee-af5f7d34bccf"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.693328 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-systemd-units\") pod \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.693386 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-host-run-netns\") pod \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.693406 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksqff\" (UniqueName: \"kubernetes.io/projected/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-kube-api-access-ksqff\") pod 
\"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.693445 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-env-overrides\") pod \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.693464 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-host-kubelet\") pod \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.693485 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-var-lib-openvswitch\") pod \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.693505 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-run-ovn\") pod \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\" (UID: \"c8ebe4be-af09-4f22-9dee-af5f7d34bccf\") " Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.693627 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-run-systemd\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.693665 4786 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-run-openvswitch\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.693705 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-systemd-units\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.693731 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-host-run-netns\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.693754 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-log-socket\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.693781 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-host-slash\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.693356 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-systemd-units" 
(OuterVolumeSpecName: "systemd-units") pod "c8ebe4be-af09-4f22-9dee-af5f7d34bccf" (UID: "c8ebe4be-af09-4f22-9dee-af5f7d34bccf"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.693772 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "c8ebe4be-af09-4f22-9dee-af5f7d34bccf" (UID: "c8ebe4be-af09-4f22-9dee-af5f7d34bccf"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.693797 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "c8ebe4be-af09-4f22-9dee-af5f7d34bccf" (UID: "c8ebe4be-af09-4f22-9dee-af5f7d34bccf"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.693843 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-etc-openvswitch\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.693857 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "c8ebe4be-af09-4f22-9dee-af5f7d34bccf" (UID: "c8ebe4be-af09-4f22-9dee-af5f7d34bccf"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.693893 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "c8ebe4be-af09-4f22-9dee-af5f7d34bccf" (UID: "c8ebe4be-af09-4f22-9dee-af5f7d34bccf"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.693950 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-run-systemd\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.693981 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-run-openvswitch\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.693990 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "c8ebe4be-af09-4f22-9dee-af5f7d34bccf" (UID: "c8ebe4be-af09-4f22-9dee-af5f7d34bccf"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.694006 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-systemd-units\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.694028 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-host-run-netns\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.694076 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-host-slash\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.694110 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "c8ebe4be-af09-4f22-9dee-af5f7d34bccf" (UID: "c8ebe4be-af09-4f22-9dee-af5f7d34bccf"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.694104 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-log-socket\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.693800 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-etc-openvswitch\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.694173 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-ovn-node-metrics-cert\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.694195 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-node-log\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.694218 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ll4cv\" (UniqueName: \"kubernetes.io/projected/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-kube-api-access-ll4cv\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc 
kubenswrapper[4786]: I1209 08:55:43.694247 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-ovnkube-script-lib\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.694275 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-host-run-ovn-kubernetes\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.694301 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-host-cni-netd\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.694324 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-ovnkube-config\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.694350 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc 
kubenswrapper[4786]: I1209 08:55:43.694385 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-env-overrides\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.694411 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-host-kubelet\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.694445 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-host-run-ovn-kubernetes\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.694468 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-var-lib-openvswitch\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.694493 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-node-log\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.694495 4786 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-host-cni-bin\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.694526 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-run-ovn\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.694578 4786 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.694591 4786 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.694603 4786 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.694615 4786 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.694632 4786 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-ovn-node-metrics-cert\") on node \"crc\" 
DevicePath \"\"" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.694643 4786 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-log-socket\") on node \"crc\" DevicePath \"\"" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.694654 4786 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.694665 4786 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.694675 4786 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.694686 4786 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.694698 4786 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.694711 4786 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.694725 4786 reconciler_common.go:293] "Volume 
detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.694737 4786 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.694977 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-run-ovn\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.695010 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-host-cni-netd\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.696330 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.696365 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-var-lib-openvswitch\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.696385 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-host-kubelet\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.694354 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "c8ebe4be-af09-4f22-9dee-af5f7d34bccf" (UID: "c8ebe4be-af09-4f22-9dee-af5f7d34bccf"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.696461 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-host-cni-bin\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.696543 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-ovnkube-config\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.696819 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-ovnkube-script-lib\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 
08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.697122 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-env-overrides\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.698213 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-kube-api-access-ksqff" (OuterVolumeSpecName: "kube-api-access-ksqff") pod "c8ebe4be-af09-4f22-9dee-af5f7d34bccf" (UID: "c8ebe4be-af09-4f22-9dee-af5f7d34bccf"). InnerVolumeSpecName "kube-api-access-ksqff". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.698719 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-ovn-node-metrics-cert\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.714921 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ll4cv\" (UniqueName: \"kubernetes.io/projected/5503bc13-9be4-4f5f-b150-a3f2c02eecf2-kube-api-access-ll4cv\") pod \"ovnkube-node-w2wgb\" (UID: \"5503bc13-9be4-4f5f-b150-a3f2c02eecf2\") " pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.778350 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7sr4q_c8ebe4be-af09-4f22-9dee-af5f7d34bccf/ovnkube-controller/3.log" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.780695 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7sr4q_c8ebe4be-af09-4f22-9dee-af5f7d34bccf/ovn-acl-logging/0.log" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.781212 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-7sr4q_c8ebe4be-af09-4f22-9dee-af5f7d34bccf/ovn-controller/0.log" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.781582 4786 generic.go:334] "Generic (PLEG): container finished" podID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" containerID="0a11ae322f07d05d6ba984a2a381442d220da025e318455ee37079149bdf2795" exitCode=0 Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.781616 4786 generic.go:334] "Generic (PLEG): container finished" podID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" containerID="9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c" exitCode=0 Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.781627 4786 generic.go:334] "Generic (PLEG): container finished" podID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" containerID="093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659" exitCode=0 Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.781639 4786 generic.go:334] "Generic (PLEG): container finished" podID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" containerID="49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30" exitCode=0 Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.781649 4786 generic.go:334] "Generic (PLEG): container finished" podID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" containerID="597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf" exitCode=0 Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.781657 4786 generic.go:334] "Generic (PLEG): container finished" podID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" containerID="254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2" exitCode=0 Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.781666 4786 generic.go:334] "Generic (PLEG): 
container finished" podID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" containerID="e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230" exitCode=143 Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.781673 4786 generic.go:334] "Generic (PLEG): container finished" podID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" containerID="3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e" exitCode=143 Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.781723 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" event={"ID":"c8ebe4be-af09-4f22-9dee-af5f7d34bccf","Type":"ContainerDied","Data":"0a11ae322f07d05d6ba984a2a381442d220da025e318455ee37079149bdf2795"} Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.781765 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" event={"ID":"c8ebe4be-af09-4f22-9dee-af5f7d34bccf","Type":"ContainerDied","Data":"9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c"} Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.781783 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" event={"ID":"c8ebe4be-af09-4f22-9dee-af5f7d34bccf","Type":"ContainerDied","Data":"093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659"} Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.781798 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" event={"ID":"c8ebe4be-af09-4f22-9dee-af5f7d34bccf","Type":"ContainerDied","Data":"49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30"} Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.781814 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" event={"ID":"c8ebe4be-af09-4f22-9dee-af5f7d34bccf","Type":"ContainerDied","Data":"597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf"} 
Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.781828 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" event={"ID":"c8ebe4be-af09-4f22-9dee-af5f7d34bccf","Type":"ContainerDied","Data":"254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2"} Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.781843 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"550395f05c51e2fe4115f3e461577ebaab689ef68502025a76ca1bbeef9d6c2e"} Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.781875 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c"} Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.781883 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659"} Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.781890 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30"} Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.781896 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf"} Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.781903 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2"} Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.781910 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230"} Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.781917 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e"} Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.781923 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c"} Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.781934 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" event={"ID":"c8ebe4be-af09-4f22-9dee-af5f7d34bccf","Type":"ContainerDied","Data":"e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230"} Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.781948 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0a11ae322f07d05d6ba984a2a381442d220da025e318455ee37079149bdf2795"} Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.781958 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"550395f05c51e2fe4115f3e461577ebaab689ef68502025a76ca1bbeef9d6c2e"} Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.781966 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c"} Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.781974 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659"} Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.781980 4786 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30"} Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.781988 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf"} Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.781995 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2"} Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.782003 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230"} Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.782010 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e"} Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.782017 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c"} Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.782028 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" event={"ID":"c8ebe4be-af09-4f22-9dee-af5f7d34bccf","Type":"ContainerDied","Data":"3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e"} Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.782040 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0a11ae322f07d05d6ba984a2a381442d220da025e318455ee37079149bdf2795"} Dec 09 
08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.782052 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"550395f05c51e2fe4115f3e461577ebaab689ef68502025a76ca1bbeef9d6c2e"} Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.782059 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c"} Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.782065 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659"} Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.782071 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30"} Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.782078 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf"} Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.782085 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2"} Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.782092 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230"} Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.782099 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e"} Dec 09 
08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.782106 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c"} Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.782115 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" event={"ID":"c8ebe4be-af09-4f22-9dee-af5f7d34bccf","Type":"ContainerDied","Data":"543eeac1d6d5cfb5a9b577a58da08d0eee8f989f998d9a38d44d67f1fdc0e74f"} Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.782126 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0a11ae322f07d05d6ba984a2a381442d220da025e318455ee37079149bdf2795"} Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.782134 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"550395f05c51e2fe4115f3e461577ebaab689ef68502025a76ca1bbeef9d6c2e"} Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.782140 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c"} Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.782147 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659"} Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.782153 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30"} Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.782160 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf"} Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.782168 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2"} Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.782175 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230"} Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.782183 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e"} Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.782197 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c"} Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.782215 4786 scope.go:117] "RemoveContainer" containerID="0a11ae322f07d05d6ba984a2a381442d220da025e318455ee37079149bdf2795" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.782441 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-7sr4q" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.787235 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-27hfj_a0a865e2-8504-473d-a23f-fc682d053a9f/kube-multus/2.log" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.789920 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-27hfj_a0a865e2-8504-473d-a23f-fc682d053a9f/kube-multus/1.log" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.790365 4786 generic.go:334] "Generic (PLEG): container finished" podID="a0a865e2-8504-473d-a23f-fc682d053a9f" containerID="78b49bec42031aef55dfe0a03cdef74583598f5415193cadf25124687a40ac47" exitCode=2 Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.790415 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-27hfj" event={"ID":"a0a865e2-8504-473d-a23f-fc682d053a9f","Type":"ContainerDied","Data":"78b49bec42031aef55dfe0a03cdef74583598f5415193cadf25124687a40ac47"} Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.790539 4786 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"93a181a8bffa8c0699688d35605838622cbb0c39dc19d52a163484d5fddc2406"} Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.791185 4786 scope.go:117] "RemoveContainer" containerID="78b49bec42031aef55dfe0a03cdef74583598f5415193cadf25124687a40ac47" Dec 09 08:55:43 crc kubenswrapper[4786]: E1209 08:55:43.791556 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-27hfj_openshift-multus(a0a865e2-8504-473d-a23f-fc682d053a9f)\"" pod="openshift-multus/multus-27hfj" podUID="a0a865e2-8504-473d-a23f-fc682d053a9f" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.797097 4786 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-ksqff\" (UniqueName: \"kubernetes.io/projected/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-kube-api-access-ksqff\") on node \"crc\" DevicePath \"\"" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.797155 4786 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c8ebe4be-af09-4f22-9dee-af5f7d34bccf-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.810319 4786 scope.go:117] "RemoveContainer" containerID="550395f05c51e2fe4115f3e461577ebaab689ef68502025a76ca1bbeef9d6c2e" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.832792 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-7sr4q"] Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.843991 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-7sr4q"] Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.846741 4786 scope.go:117] "RemoveContainer" containerID="9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.862978 4786 scope.go:117] "RemoveContainer" containerID="093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.877509 4786 scope.go:117] "RemoveContainer" containerID="49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.891579 4786 scope.go:117] "RemoveContainer" containerID="597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.903644 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.905768 4786 scope.go:117] "RemoveContainer" containerID="254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.921230 4786 scope.go:117] "RemoveContainer" containerID="e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.944830 4786 scope.go:117] "RemoveContainer" containerID="3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.968784 4786 scope.go:117] "RemoveContainer" containerID="612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.986886 4786 scope.go:117] "RemoveContainer" containerID="0a11ae322f07d05d6ba984a2a381442d220da025e318455ee37079149bdf2795" Dec 09 08:55:43 crc kubenswrapper[4786]: E1209 08:55:43.987485 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a11ae322f07d05d6ba984a2a381442d220da025e318455ee37079149bdf2795\": container with ID starting with 0a11ae322f07d05d6ba984a2a381442d220da025e318455ee37079149bdf2795 not found: ID does not exist" containerID="0a11ae322f07d05d6ba984a2a381442d220da025e318455ee37079149bdf2795" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.987523 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a11ae322f07d05d6ba984a2a381442d220da025e318455ee37079149bdf2795"} err="failed to get container status \"0a11ae322f07d05d6ba984a2a381442d220da025e318455ee37079149bdf2795\": rpc error: code = NotFound desc = could not find container \"0a11ae322f07d05d6ba984a2a381442d220da025e318455ee37079149bdf2795\": container with ID starting with 0a11ae322f07d05d6ba984a2a381442d220da025e318455ee37079149bdf2795 
not found: ID does not exist" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.987554 4786 scope.go:117] "RemoveContainer" containerID="550395f05c51e2fe4115f3e461577ebaab689ef68502025a76ca1bbeef9d6c2e" Dec 09 08:55:43 crc kubenswrapper[4786]: E1209 08:55:43.987813 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"550395f05c51e2fe4115f3e461577ebaab689ef68502025a76ca1bbeef9d6c2e\": container with ID starting with 550395f05c51e2fe4115f3e461577ebaab689ef68502025a76ca1bbeef9d6c2e not found: ID does not exist" containerID="550395f05c51e2fe4115f3e461577ebaab689ef68502025a76ca1bbeef9d6c2e" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.987975 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"550395f05c51e2fe4115f3e461577ebaab689ef68502025a76ca1bbeef9d6c2e"} err="failed to get container status \"550395f05c51e2fe4115f3e461577ebaab689ef68502025a76ca1bbeef9d6c2e\": rpc error: code = NotFound desc = could not find container \"550395f05c51e2fe4115f3e461577ebaab689ef68502025a76ca1bbeef9d6c2e\": container with ID starting with 550395f05c51e2fe4115f3e461577ebaab689ef68502025a76ca1bbeef9d6c2e not found: ID does not exist" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.988074 4786 scope.go:117] "RemoveContainer" containerID="9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c" Dec 09 08:55:43 crc kubenswrapper[4786]: E1209 08:55:43.988527 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c\": container with ID starting with 9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c not found: ID does not exist" containerID="9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.988550 4786 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c"} err="failed to get container status \"9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c\": rpc error: code = NotFound desc = could not find container \"9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c\": container with ID starting with 9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c not found: ID does not exist" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.988566 4786 scope.go:117] "RemoveContainer" containerID="093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659" Dec 09 08:55:43 crc kubenswrapper[4786]: E1209 08:55:43.988961 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659\": container with ID starting with 093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659 not found: ID does not exist" containerID="093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.988981 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659"} err="failed to get container status \"093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659\": rpc error: code = NotFound desc = could not find container \"093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659\": container with ID starting with 093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659 not found: ID does not exist" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.988994 4786 scope.go:117] "RemoveContainer" containerID="49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30" Dec 09 08:55:43 crc kubenswrapper[4786]: E1209 
08:55:43.989509 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30\": container with ID starting with 49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30 not found: ID does not exist" containerID="49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.989649 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30"} err="failed to get container status \"49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30\": rpc error: code = NotFound desc = could not find container \"49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30\": container with ID starting with 49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30 not found: ID does not exist" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.989733 4786 scope.go:117] "RemoveContainer" containerID="597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf" Dec 09 08:55:43 crc kubenswrapper[4786]: E1209 08:55:43.990055 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf\": container with ID starting with 597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf not found: ID does not exist" containerID="597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.990074 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf"} err="failed to get container status \"597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf\": rpc 
error: code = NotFound desc = could not find container \"597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf\": container with ID starting with 597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf not found: ID does not exist" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.990086 4786 scope.go:117] "RemoveContainer" containerID="254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2" Dec 09 08:55:43 crc kubenswrapper[4786]: E1209 08:55:43.990338 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2\": container with ID starting with 254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2 not found: ID does not exist" containerID="254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.990363 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2"} err="failed to get container status \"254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2\": rpc error: code = NotFound desc = could not find container \"254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2\": container with ID starting with 254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2 not found: ID does not exist" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.990394 4786 scope.go:117] "RemoveContainer" containerID="e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230" Dec 09 08:55:43 crc kubenswrapper[4786]: E1209 08:55:43.990840 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230\": container with ID starting with 
e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230 not found: ID does not exist" containerID="e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.990991 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230"} err="failed to get container status \"e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230\": rpc error: code = NotFound desc = could not find container \"e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230\": container with ID starting with e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230 not found: ID does not exist" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.991090 4786 scope.go:117] "RemoveContainer" containerID="3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e" Dec 09 08:55:43 crc kubenswrapper[4786]: E1209 08:55:43.991490 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e\": container with ID starting with 3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e not found: ID does not exist" containerID="3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.991515 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e"} err="failed to get container status \"3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e\": rpc error: code = NotFound desc = could not find container \"3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e\": container with ID starting with 3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e not found: ID does not 
exist" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.991529 4786 scope.go:117] "RemoveContainer" containerID="612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c" Dec 09 08:55:43 crc kubenswrapper[4786]: E1209 08:55:43.991913 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\": container with ID starting with 612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c not found: ID does not exist" containerID="612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.992031 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c"} err="failed to get container status \"612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\": rpc error: code = NotFound desc = could not find container \"612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\": container with ID starting with 612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c not found: ID does not exist" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.992121 4786 scope.go:117] "RemoveContainer" containerID="0a11ae322f07d05d6ba984a2a381442d220da025e318455ee37079149bdf2795" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.992517 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a11ae322f07d05d6ba984a2a381442d220da025e318455ee37079149bdf2795"} err="failed to get container status \"0a11ae322f07d05d6ba984a2a381442d220da025e318455ee37079149bdf2795\": rpc error: code = NotFound desc = could not find container \"0a11ae322f07d05d6ba984a2a381442d220da025e318455ee37079149bdf2795\": container with ID starting with 0a11ae322f07d05d6ba984a2a381442d220da025e318455ee37079149bdf2795 not found: ID 
does not exist" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.992535 4786 scope.go:117] "RemoveContainer" containerID="550395f05c51e2fe4115f3e461577ebaab689ef68502025a76ca1bbeef9d6c2e" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.992849 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"550395f05c51e2fe4115f3e461577ebaab689ef68502025a76ca1bbeef9d6c2e"} err="failed to get container status \"550395f05c51e2fe4115f3e461577ebaab689ef68502025a76ca1bbeef9d6c2e\": rpc error: code = NotFound desc = could not find container \"550395f05c51e2fe4115f3e461577ebaab689ef68502025a76ca1bbeef9d6c2e\": container with ID starting with 550395f05c51e2fe4115f3e461577ebaab689ef68502025a76ca1bbeef9d6c2e not found: ID does not exist" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.992865 4786 scope.go:117] "RemoveContainer" containerID="9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.993278 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c"} err="failed to get container status \"9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c\": rpc error: code = NotFound desc = could not find container \"9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c\": container with ID starting with 9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c not found: ID does not exist" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.993386 4786 scope.go:117] "RemoveContainer" containerID="093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.993893 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659"} err="failed to get container 
status \"093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659\": rpc error: code = NotFound desc = could not find container \"093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659\": container with ID starting with 093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659 not found: ID does not exist" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.994014 4786 scope.go:117] "RemoveContainer" containerID="49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.994534 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30"} err="failed to get container status \"49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30\": rpc error: code = NotFound desc = could not find container \"49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30\": container with ID starting with 49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30 not found: ID does not exist" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.994603 4786 scope.go:117] "RemoveContainer" containerID="597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.995115 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf"} err="failed to get container status \"597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf\": rpc error: code = NotFound desc = could not find container \"597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf\": container with ID starting with 597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf not found: ID does not exist" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.995182 4786 scope.go:117] "RemoveContainer" 
containerID="254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.995726 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2"} err="failed to get container status \"254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2\": rpc error: code = NotFound desc = could not find container \"254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2\": container with ID starting with 254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2 not found: ID does not exist" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.995800 4786 scope.go:117] "RemoveContainer" containerID="e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.997818 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230"} err="failed to get container status \"e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230\": rpc error: code = NotFound desc = could not find container \"e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230\": container with ID starting with e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230 not found: ID does not exist" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.997940 4786 scope.go:117] "RemoveContainer" containerID="3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.998369 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e"} err="failed to get container status \"3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e\": rpc error: code = NotFound desc = could 
not find container \"3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e\": container with ID starting with 3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e not found: ID does not exist" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.998390 4786 scope.go:117] "RemoveContainer" containerID="612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.999159 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c"} err="failed to get container status \"612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\": rpc error: code = NotFound desc = could not find container \"612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\": container with ID starting with 612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c not found: ID does not exist" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.999180 4786 scope.go:117] "RemoveContainer" containerID="0a11ae322f07d05d6ba984a2a381442d220da025e318455ee37079149bdf2795" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.999543 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a11ae322f07d05d6ba984a2a381442d220da025e318455ee37079149bdf2795"} err="failed to get container status \"0a11ae322f07d05d6ba984a2a381442d220da025e318455ee37079149bdf2795\": rpc error: code = NotFound desc = could not find container \"0a11ae322f07d05d6ba984a2a381442d220da025e318455ee37079149bdf2795\": container with ID starting with 0a11ae322f07d05d6ba984a2a381442d220da025e318455ee37079149bdf2795 not found: ID does not exist" Dec 09 08:55:43 crc kubenswrapper[4786]: I1209 08:55:43.999562 4786 scope.go:117] "RemoveContainer" containerID="550395f05c51e2fe4115f3e461577ebaab689ef68502025a76ca1bbeef9d6c2e" Dec 09 08:55:44 crc kubenswrapper[4786]: I1209 
08:55:44.000003 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"550395f05c51e2fe4115f3e461577ebaab689ef68502025a76ca1bbeef9d6c2e"} err="failed to get container status \"550395f05c51e2fe4115f3e461577ebaab689ef68502025a76ca1bbeef9d6c2e\": rpc error: code = NotFound desc = could not find container \"550395f05c51e2fe4115f3e461577ebaab689ef68502025a76ca1bbeef9d6c2e\": container with ID starting with 550395f05c51e2fe4115f3e461577ebaab689ef68502025a76ca1bbeef9d6c2e not found: ID does not exist" Dec 09 08:55:44 crc kubenswrapper[4786]: I1209 08:55:44.000030 4786 scope.go:117] "RemoveContainer" containerID="9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c" Dec 09 08:55:44 crc kubenswrapper[4786]: I1209 08:55:44.000375 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c"} err="failed to get container status \"9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c\": rpc error: code = NotFound desc = could not find container \"9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c\": container with ID starting with 9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c not found: ID does not exist" Dec 09 08:55:44 crc kubenswrapper[4786]: I1209 08:55:44.000443 4786 scope.go:117] "RemoveContainer" containerID="093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659" Dec 09 08:55:44 crc kubenswrapper[4786]: I1209 08:55:44.000859 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659"} err="failed to get container status \"093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659\": rpc error: code = NotFound desc = could not find container \"093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659\": container with ID starting with 
093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659 not found: ID does not exist" Dec 09 08:55:44 crc kubenswrapper[4786]: I1209 08:55:44.000882 4786 scope.go:117] "RemoveContainer" containerID="49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30" Dec 09 08:55:44 crc kubenswrapper[4786]: I1209 08:55:44.001464 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30"} err="failed to get container status \"49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30\": rpc error: code = NotFound desc = could not find container \"49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30\": container with ID starting with 49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30 not found: ID does not exist" Dec 09 08:55:44 crc kubenswrapper[4786]: I1209 08:55:44.001498 4786 scope.go:117] "RemoveContainer" containerID="597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf" Dec 09 08:55:44 crc kubenswrapper[4786]: I1209 08:55:44.001892 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf"} err="failed to get container status \"597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf\": rpc error: code = NotFound desc = could not find container \"597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf\": container with ID starting with 597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf not found: ID does not exist" Dec 09 08:55:44 crc kubenswrapper[4786]: I1209 08:55:44.001918 4786 scope.go:117] "RemoveContainer" containerID="254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2" Dec 09 08:55:44 crc kubenswrapper[4786]: I1209 08:55:44.002226 4786 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2"} err="failed to get container status \"254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2\": rpc error: code = NotFound desc = could not find container \"254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2\": container with ID starting with 254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2 not found: ID does not exist" Dec 09 08:55:44 crc kubenswrapper[4786]: I1209 08:55:44.002252 4786 scope.go:117] "RemoveContainer" containerID="e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230" Dec 09 08:55:44 crc kubenswrapper[4786]: I1209 08:55:44.002627 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230"} err="failed to get container status \"e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230\": rpc error: code = NotFound desc = could not find container \"e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230\": container with ID starting with e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230 not found: ID does not exist" Dec 09 08:55:44 crc kubenswrapper[4786]: I1209 08:55:44.002652 4786 scope.go:117] "RemoveContainer" containerID="3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e" Dec 09 08:55:44 crc kubenswrapper[4786]: I1209 08:55:44.002948 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e"} err="failed to get container status \"3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e\": rpc error: code = NotFound desc = could not find container \"3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e\": container with ID starting with 3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e not found: ID does not 
exist" Dec 09 08:55:44 crc kubenswrapper[4786]: I1209 08:55:44.002965 4786 scope.go:117] "RemoveContainer" containerID="612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c" Dec 09 08:55:44 crc kubenswrapper[4786]: I1209 08:55:44.003284 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c"} err="failed to get container status \"612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\": rpc error: code = NotFound desc = could not find container \"612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\": container with ID starting with 612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c not found: ID does not exist" Dec 09 08:55:44 crc kubenswrapper[4786]: I1209 08:55:44.003304 4786 scope.go:117] "RemoveContainer" containerID="0a11ae322f07d05d6ba984a2a381442d220da025e318455ee37079149bdf2795" Dec 09 08:55:44 crc kubenswrapper[4786]: I1209 08:55:44.003599 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a11ae322f07d05d6ba984a2a381442d220da025e318455ee37079149bdf2795"} err="failed to get container status \"0a11ae322f07d05d6ba984a2a381442d220da025e318455ee37079149bdf2795\": rpc error: code = NotFound desc = could not find container \"0a11ae322f07d05d6ba984a2a381442d220da025e318455ee37079149bdf2795\": container with ID starting with 0a11ae322f07d05d6ba984a2a381442d220da025e318455ee37079149bdf2795 not found: ID does not exist" Dec 09 08:55:44 crc kubenswrapper[4786]: I1209 08:55:44.003626 4786 scope.go:117] "RemoveContainer" containerID="550395f05c51e2fe4115f3e461577ebaab689ef68502025a76ca1bbeef9d6c2e" Dec 09 08:55:44 crc kubenswrapper[4786]: I1209 08:55:44.003915 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"550395f05c51e2fe4115f3e461577ebaab689ef68502025a76ca1bbeef9d6c2e"} err="failed to get container status 
\"550395f05c51e2fe4115f3e461577ebaab689ef68502025a76ca1bbeef9d6c2e\": rpc error: code = NotFound desc = could not find container \"550395f05c51e2fe4115f3e461577ebaab689ef68502025a76ca1bbeef9d6c2e\": container with ID starting with 550395f05c51e2fe4115f3e461577ebaab689ef68502025a76ca1bbeef9d6c2e not found: ID does not exist" Dec 09 08:55:44 crc kubenswrapper[4786]: I1209 08:55:44.003932 4786 scope.go:117] "RemoveContainer" containerID="9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c" Dec 09 08:55:44 crc kubenswrapper[4786]: I1209 08:55:44.004220 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c"} err="failed to get container status \"9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c\": rpc error: code = NotFound desc = could not find container \"9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c\": container with ID starting with 9d6087f026fb9ddc4c2ccde6c73da764ca5ef4ed562177eccb4b4bc3e351af6c not found: ID does not exist" Dec 09 08:55:44 crc kubenswrapper[4786]: I1209 08:55:44.004241 4786 scope.go:117] "RemoveContainer" containerID="093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659" Dec 09 08:55:44 crc kubenswrapper[4786]: I1209 08:55:44.004589 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659"} err="failed to get container status \"093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659\": rpc error: code = NotFound desc = could not find container \"093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659\": container with ID starting with 093fb88a688f9545f0517783ed0c8206b417aff977f8548722ede4d3cc50e659 not found: ID does not exist" Dec 09 08:55:44 crc kubenswrapper[4786]: I1209 08:55:44.004610 4786 scope.go:117] "RemoveContainer" 
containerID="49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30" Dec 09 08:55:44 crc kubenswrapper[4786]: I1209 08:55:44.004912 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30"} err="failed to get container status \"49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30\": rpc error: code = NotFound desc = could not find container \"49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30\": container with ID starting with 49244d0cc282a173891a2f1d36c8ba6f2a2b2e6467ed0ecfbd34f816438caf30 not found: ID does not exist" Dec 09 08:55:44 crc kubenswrapper[4786]: I1209 08:55:44.004933 4786 scope.go:117] "RemoveContainer" containerID="597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf" Dec 09 08:55:44 crc kubenswrapper[4786]: I1209 08:55:44.005253 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf"} err="failed to get container status \"597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf\": rpc error: code = NotFound desc = could not find container \"597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf\": container with ID starting with 597fb40d79194e9cf70a358b54f4a7b59c9af93dd39c169997b5907d665385cf not found: ID does not exist" Dec 09 08:55:44 crc kubenswrapper[4786]: I1209 08:55:44.005276 4786 scope.go:117] "RemoveContainer" containerID="254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2" Dec 09 08:55:44 crc kubenswrapper[4786]: I1209 08:55:44.005641 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2"} err="failed to get container status \"254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2\": rpc error: code = NotFound desc = could 
not find container \"254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2\": container with ID starting with 254f2410c69ca27acb99e6f2c65fc5daff17fe7308c3ee518989f3034a04a2e2 not found: ID does not exist" Dec 09 08:55:44 crc kubenswrapper[4786]: I1209 08:55:44.005662 4786 scope.go:117] "RemoveContainer" containerID="e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230" Dec 09 08:55:44 crc kubenswrapper[4786]: I1209 08:55:44.005927 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230"} err="failed to get container status \"e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230\": rpc error: code = NotFound desc = could not find container \"e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230\": container with ID starting with e6f1d5380705f737a4ecb6b7c893798e4b24f8e78f95bcc2045f87b9f5265230 not found: ID does not exist" Dec 09 08:55:44 crc kubenswrapper[4786]: I1209 08:55:44.005950 4786 scope.go:117] "RemoveContainer" containerID="3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e" Dec 09 08:55:44 crc kubenswrapper[4786]: I1209 08:55:44.006198 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e"} err="failed to get container status \"3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e\": rpc error: code = NotFound desc = could not find container \"3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e\": container with ID starting with 3b4923943d56c7bba333feab5623f60f3186d6e4ac4601159bcb44a43108774e not found: ID does not exist" Dec 09 08:55:44 crc kubenswrapper[4786]: I1209 08:55:44.006220 4786 scope.go:117] "RemoveContainer" containerID="612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c" Dec 09 08:55:44 crc kubenswrapper[4786]: I1209 
08:55:44.006633 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c"} err="failed to get container status \"612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\": rpc error: code = NotFound desc = could not find container \"612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c\": container with ID starting with 612509bb61ee6e0b63813bfb8137d56b871b966e28d16880bbb90d0cc408af5c not found: ID does not exist" Dec 09 08:55:44 crc kubenswrapper[4786]: I1209 08:55:44.006654 4786 scope.go:117] "RemoveContainer" containerID="0a11ae322f07d05d6ba984a2a381442d220da025e318455ee37079149bdf2795" Dec 09 08:55:44 crc kubenswrapper[4786]: I1209 08:55:44.006985 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a11ae322f07d05d6ba984a2a381442d220da025e318455ee37079149bdf2795"} err="failed to get container status \"0a11ae322f07d05d6ba984a2a381442d220da025e318455ee37079149bdf2795\": rpc error: code = NotFound desc = could not find container \"0a11ae322f07d05d6ba984a2a381442d220da025e318455ee37079149bdf2795\": container with ID starting with 0a11ae322f07d05d6ba984a2a381442d220da025e318455ee37079149bdf2795 not found: ID does not exist" Dec 09 08:55:44 crc kubenswrapper[4786]: I1209 08:55:44.799767 4786 generic.go:334] "Generic (PLEG): container finished" podID="5503bc13-9be4-4f5f-b150-a3f2c02eecf2" containerID="9fc3920f5639f3a57c63f0fb262db6ab58ff40f8d6909c8d55e44f30c4ea37cf" exitCode=0 Dec 09 08:55:44 crc kubenswrapper[4786]: I1209 08:55:44.799879 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" event={"ID":"5503bc13-9be4-4f5f-b150-a3f2c02eecf2","Type":"ContainerDied","Data":"9fc3920f5639f3a57c63f0fb262db6ab58ff40f8d6909c8d55e44f30c4ea37cf"} Dec 09 08:55:44 crc kubenswrapper[4786]: I1209 08:55:44.800308 4786 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" event={"ID":"5503bc13-9be4-4f5f-b150-a3f2c02eecf2","Type":"ContainerStarted","Data":"8da588eb1b83e87df35c0a2794fe5e349d2509c5fe589c64dc4d2116d92bcf59"} Dec 09 08:55:45 crc kubenswrapper[4786]: I1209 08:55:45.199785 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8ebe4be-af09-4f22-9dee-af5f7d34bccf" path="/var/lib/kubelet/pods/c8ebe4be-af09-4f22-9dee-af5f7d34bccf/volumes" Dec 09 08:55:45 crc kubenswrapper[4786]: I1209 08:55:45.809867 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" event={"ID":"5503bc13-9be4-4f5f-b150-a3f2c02eecf2","Type":"ContainerStarted","Data":"9f2ccaa0c706a90499ccd0823c328035c9e29c448ad1208132406c34c0cf195d"} Dec 09 08:55:45 crc kubenswrapper[4786]: I1209 08:55:45.809916 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" event={"ID":"5503bc13-9be4-4f5f-b150-a3f2c02eecf2","Type":"ContainerStarted","Data":"b24910b6a00fa409d1e9b9a6bb71004a0942bdeb33dd0b882ecc89665948984e"} Dec 09 08:55:45 crc kubenswrapper[4786]: I1209 08:55:45.809937 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" event={"ID":"5503bc13-9be4-4f5f-b150-a3f2c02eecf2","Type":"ContainerStarted","Data":"6ff4ddc2c7524ed40e0edde321aef7603e336cb3a34ebfbcb0b20a4db03596ff"} Dec 09 08:55:45 crc kubenswrapper[4786]: I1209 08:55:45.809950 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" event={"ID":"5503bc13-9be4-4f5f-b150-a3f2c02eecf2","Type":"ContainerStarted","Data":"3b475b14881a71590904914497678465b5e0c76ad8bff22941f750f36b2d9ba9"} Dec 09 08:55:45 crc kubenswrapper[4786]: I1209 08:55:45.809960 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" 
event={"ID":"5503bc13-9be4-4f5f-b150-a3f2c02eecf2","Type":"ContainerStarted","Data":"3431feb5f31fc5a54d7635f4f22ae09321e67c618fe4d7f4c72c0b3d50ad1464"} Dec 09 08:55:45 crc kubenswrapper[4786]: I1209 08:55:45.809969 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" event={"ID":"5503bc13-9be4-4f5f-b150-a3f2c02eecf2","Type":"ContainerStarted","Data":"f70e90bae57a48af0ad45b900db174c47613c5a874b1c4b26620b93acc655468"} Dec 09 08:55:47 crc kubenswrapper[4786]: I1209 08:55:47.826562 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" event={"ID":"5503bc13-9be4-4f5f-b150-a3f2c02eecf2","Type":"ContainerStarted","Data":"65170d7ecde637a701f8a30d7cbf6ea6526a0edfe9ebd77932c3ece984da42eb"} Dec 09 08:55:50 crc kubenswrapper[4786]: I1209 08:55:50.856947 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" event={"ID":"5503bc13-9be4-4f5f-b150-a3f2c02eecf2","Type":"ContainerStarted","Data":"01bc0b519edd5462ffd0333a29acc3698073436bcacca8995a22395e9f28be34"} Dec 09 08:55:50 crc kubenswrapper[4786]: I1209 08:55:50.857953 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:50 crc kubenswrapper[4786]: I1209 08:55:50.857973 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:50 crc kubenswrapper[4786]: I1209 08:55:50.857982 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:50 crc kubenswrapper[4786]: I1209 08:55:50.884803 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:50 crc kubenswrapper[4786]: I1209 08:55:50.887312 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:55:50 crc kubenswrapper[4786]: I1209 08:55:50.894777 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" podStartSLOduration=7.894757179 podStartE2EDuration="7.894757179s" podCreationTimestamp="2025-12-09 08:55:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:55:50.891685723 +0000 UTC m=+716.775306979" watchObservedRunningTime="2025-12-09 08:55:50.894757179 +0000 UTC m=+716.778378405" Dec 09 08:55:55 crc kubenswrapper[4786]: I1209 08:55:55.527894 4786 scope.go:117] "RemoveContainer" containerID="93a181a8bffa8c0699688d35605838622cbb0c39dc19d52a163484d5fddc2406" Dec 09 08:55:55 crc kubenswrapper[4786]: I1209 08:55:55.891536 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-27hfj_a0a865e2-8504-473d-a23f-fc682d053a9f/kube-multus/2.log" Dec 09 08:55:56 crc kubenswrapper[4786]: I1209 08:55:56.188803 4786 scope.go:117] "RemoveContainer" containerID="78b49bec42031aef55dfe0a03cdef74583598f5415193cadf25124687a40ac47" Dec 09 08:55:56 crc kubenswrapper[4786]: E1209 08:55:56.189069 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-27hfj_openshift-multus(a0a865e2-8504-473d-a23f-fc682d053a9f)\"" pod="openshift-multus/multus-27hfj" podUID="a0a865e2-8504-473d-a23f-fc682d053a9f" Dec 09 08:56:11 crc kubenswrapper[4786]: I1209 08:56:11.188507 4786 scope.go:117] "RemoveContainer" containerID="78b49bec42031aef55dfe0a03cdef74583598f5415193cadf25124687a40ac47" Dec 09 08:56:11 crc kubenswrapper[4786]: I1209 08:56:11.996005 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-27hfj_a0a865e2-8504-473d-a23f-fc682d053a9f/kube-multus/2.log" Dec 
09 08:56:11 crc kubenswrapper[4786]: I1209 08:56:11.996408 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-27hfj" event={"ID":"a0a865e2-8504-473d-a23f-fc682d053a9f","Type":"ContainerStarted","Data":"67171610460e0970f4149a878f3159a0e986807f6247c0f15706e533613b9282"} Dec 09 08:56:12 crc kubenswrapper[4786]: I1209 08:56:12.060672 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv"] Dec 09 08:56:12 crc kubenswrapper[4786]: I1209 08:56:12.061795 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv" Dec 09 08:56:12 crc kubenswrapper[4786]: I1209 08:56:12.063851 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 09 08:56:12 crc kubenswrapper[4786]: I1209 08:56:12.109007 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv"] Dec 09 08:56:12 crc kubenswrapper[4786]: I1209 08:56:12.230970 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6ecba051-bc5a-42a9-b4de-bf033c4f5491-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv\" (UID: \"6ecba051-bc5a-42a9-b4de-bf033c4f5491\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv" Dec 09 08:56:12 crc kubenswrapper[4786]: I1209 08:56:12.231264 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6ecba051-bc5a-42a9-b4de-bf033c4f5491-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv\" (UID: \"6ecba051-bc5a-42a9-b4de-bf033c4f5491\") " 
pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv" Dec 09 08:56:12 crc kubenswrapper[4786]: I1209 08:56:12.231286 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rbvw\" (UniqueName: \"kubernetes.io/projected/6ecba051-bc5a-42a9-b4de-bf033c4f5491-kube-api-access-4rbvw\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv\" (UID: \"6ecba051-bc5a-42a9-b4de-bf033c4f5491\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv" Dec 09 08:56:12 crc kubenswrapper[4786]: I1209 08:56:12.334322 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6ecba051-bc5a-42a9-b4de-bf033c4f5491-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv\" (UID: \"6ecba051-bc5a-42a9-b4de-bf033c4f5491\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv" Dec 09 08:56:12 crc kubenswrapper[4786]: I1209 08:56:12.334546 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rbvw\" (UniqueName: \"kubernetes.io/projected/6ecba051-bc5a-42a9-b4de-bf033c4f5491-kube-api-access-4rbvw\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv\" (UID: \"6ecba051-bc5a-42a9-b4de-bf033c4f5491\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv" Dec 09 08:56:12 crc kubenswrapper[4786]: I1209 08:56:12.334672 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6ecba051-bc5a-42a9-b4de-bf033c4f5491-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv\" (UID: \"6ecba051-bc5a-42a9-b4de-bf033c4f5491\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv" Dec 09 
08:56:12 crc kubenswrapper[4786]: I1209 08:56:12.335319 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6ecba051-bc5a-42a9-b4de-bf033c4f5491-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv\" (UID: \"6ecba051-bc5a-42a9-b4de-bf033c4f5491\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv" Dec 09 08:56:12 crc kubenswrapper[4786]: I1209 08:56:12.335797 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6ecba051-bc5a-42a9-b4de-bf033c4f5491-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv\" (UID: \"6ecba051-bc5a-42a9-b4de-bf033c4f5491\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv" Dec 09 08:56:12 crc kubenswrapper[4786]: I1209 08:56:12.355538 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rbvw\" (UniqueName: \"kubernetes.io/projected/6ecba051-bc5a-42a9-b4de-bf033c4f5491-kube-api-access-4rbvw\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv\" (UID: \"6ecba051-bc5a-42a9-b4de-bf033c4f5491\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv" Dec 09 08:56:12 crc kubenswrapper[4786]: I1209 08:56:12.378285 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv" Dec 09 08:56:12 crc kubenswrapper[4786]: E1209 08:56:12.409151 4786 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv_openshift-marketplace_6ecba051-bc5a-42a9-b4de-bf033c4f5491_0(b6b82963015e5d5ee10fe6c5c31f0215b44e54e9e99b810608d9fdb664d07804): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 09 08:56:12 crc kubenswrapper[4786]: E1209 08:56:12.409622 4786 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv_openshift-marketplace_6ecba051-bc5a-42a9-b4de-bf033c4f5491_0(b6b82963015e5d5ee10fe6c5c31f0215b44e54e9e99b810608d9fdb664d07804): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv" Dec 09 08:56:12 crc kubenswrapper[4786]: E1209 08:56:12.409655 4786 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv_openshift-marketplace_6ecba051-bc5a-42a9-b4de-bf033c4f5491_0(b6b82963015e5d5ee10fe6c5c31f0215b44e54e9e99b810608d9fdb664d07804): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv" Dec 09 08:56:12 crc kubenswrapper[4786]: E1209 08:56:12.409706 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv_openshift-marketplace(6ecba051-bc5a-42a9-b4de-bf033c4f5491)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv_openshift-marketplace(6ecba051-bc5a-42a9-b4de-bf033c4f5491)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv_openshift-marketplace_6ecba051-bc5a-42a9-b4de-bf033c4f5491_0(b6b82963015e5d5ee10fe6c5c31f0215b44e54e9e99b810608d9fdb664d07804): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv" podUID="6ecba051-bc5a-42a9-b4de-bf033c4f5491" Dec 09 08:56:13 crc kubenswrapper[4786]: I1209 08:56:13.001489 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv" Dec 09 08:56:13 crc kubenswrapper[4786]: I1209 08:56:13.002086 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv" Dec 09 08:56:13 crc kubenswrapper[4786]: E1209 08:56:13.024967 4786 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv_openshift-marketplace_6ecba051-bc5a-42a9-b4de-bf033c4f5491_0(d8ac2014aa585143e43511f330616b0c5759f7bc2693af33059e144936e6179b): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" Dec 09 08:56:13 crc kubenswrapper[4786]: E1209 08:56:13.025069 4786 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv_openshift-marketplace_6ecba051-bc5a-42a9-b4de-bf033c4f5491_0(d8ac2014aa585143e43511f330616b0c5759f7bc2693af33059e144936e6179b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv" Dec 09 08:56:13 crc kubenswrapper[4786]: E1209 08:56:13.025118 4786 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv_openshift-marketplace_6ecba051-bc5a-42a9-b4de-bf033c4f5491_0(d8ac2014aa585143e43511f330616b0c5759f7bc2693af33059e144936e6179b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv" Dec 09 08:56:13 crc kubenswrapper[4786]: E1209 08:56:13.025217 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv_openshift-marketplace(6ecba051-bc5a-42a9-b4de-bf033c4f5491)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv_openshift-marketplace(6ecba051-bc5a-42a9-b4de-bf033c4f5491)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv_openshift-marketplace_6ecba051-bc5a-42a9-b4de-bf033c4f5491_0(d8ac2014aa585143e43511f330616b0c5759f7bc2693af33059e144936e6179b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv" podUID="6ecba051-bc5a-42a9-b4de-bf033c4f5491" Dec 09 08:56:13 crc kubenswrapper[4786]: I1209 08:56:13.944082 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-w2wgb" Dec 09 08:56:24 crc kubenswrapper[4786]: I1209 08:56:24.187821 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv" Dec 09 08:56:24 crc kubenswrapper[4786]: I1209 08:56:24.188749 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv" Dec 09 08:56:24 crc kubenswrapper[4786]: I1209 08:56:24.613572 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv"] Dec 09 08:56:24 crc kubenswrapper[4786]: W1209 08:56:24.619748 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6ecba051_bc5a_42a9_b4de_bf033c4f5491.slice/crio-4403a173594913b669fee56402dc375be833b35cf9ffce8904433434633a6b58 WatchSource:0}: Error finding container 4403a173594913b669fee56402dc375be833b35cf9ffce8904433434633a6b58: Status 404 returned error can't find the container with id 4403a173594913b669fee56402dc375be833b35cf9ffce8904433434633a6b58 Dec 09 08:56:25 crc kubenswrapper[4786]: I1209 08:56:25.152664 4786 generic.go:334] "Generic (PLEG): container finished" podID="6ecba051-bc5a-42a9-b4de-bf033c4f5491" containerID="d7a95618156e73823d19c4ff9a5c0b4acbc85a92c7f05d6a398c7523e3b7a105" exitCode=0 Dec 09 08:56:25 crc kubenswrapper[4786]: I1209 08:56:25.152780 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv" event={"ID":"6ecba051-bc5a-42a9-b4de-bf033c4f5491","Type":"ContainerDied","Data":"d7a95618156e73823d19c4ff9a5c0b4acbc85a92c7f05d6a398c7523e3b7a105"} Dec 09 08:56:25 crc kubenswrapper[4786]: I1209 08:56:25.153033 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv" event={"ID":"6ecba051-bc5a-42a9-b4de-bf033c4f5491","Type":"ContainerStarted","Data":"4403a173594913b669fee56402dc375be833b35cf9ffce8904433434633a6b58"} Dec 09 08:56:27 crc kubenswrapper[4786]: I1209 08:56:27.167606 4786 generic.go:334] "Generic (PLEG): container finished" 
podID="6ecba051-bc5a-42a9-b4de-bf033c4f5491" containerID="a56b1d01fae2246f3e0c666d36c9debd5ee4580c46e1efc84bf01a125e1fb8f1" exitCode=0 Dec 09 08:56:27 crc kubenswrapper[4786]: I1209 08:56:27.167682 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv" event={"ID":"6ecba051-bc5a-42a9-b4de-bf033c4f5491","Type":"ContainerDied","Data":"a56b1d01fae2246f3e0c666d36c9debd5ee4580c46e1efc84bf01a125e1fb8f1"} Dec 09 08:56:27 crc kubenswrapper[4786]: I1209 08:56:27.629292 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4cvbn"] Dec 09 08:56:27 crc kubenswrapper[4786]: I1209 08:56:27.630420 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4cvbn" Dec 09 08:56:27 crc kubenswrapper[4786]: I1209 08:56:27.640650 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4cvbn"] Dec 09 08:56:27 crc kubenswrapper[4786]: I1209 08:56:27.742100 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/285de6e2-513c-4e8a-95b7-bcd3708d8f9c-catalog-content\") pod \"redhat-operators-4cvbn\" (UID: \"285de6e2-513c-4e8a-95b7-bcd3708d8f9c\") " pod="openshift-marketplace/redhat-operators-4cvbn" Dec 09 08:56:27 crc kubenswrapper[4786]: I1209 08:56:27.742579 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/285de6e2-513c-4e8a-95b7-bcd3708d8f9c-utilities\") pod \"redhat-operators-4cvbn\" (UID: \"285de6e2-513c-4e8a-95b7-bcd3708d8f9c\") " pod="openshift-marketplace/redhat-operators-4cvbn" Dec 09 08:56:27 crc kubenswrapper[4786]: I1209 08:56:27.742610 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-rz9xw\" (UniqueName: \"kubernetes.io/projected/285de6e2-513c-4e8a-95b7-bcd3708d8f9c-kube-api-access-rz9xw\") pod \"redhat-operators-4cvbn\" (UID: \"285de6e2-513c-4e8a-95b7-bcd3708d8f9c\") " pod="openshift-marketplace/redhat-operators-4cvbn" Dec 09 08:56:27 crc kubenswrapper[4786]: I1209 08:56:27.844090 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/285de6e2-513c-4e8a-95b7-bcd3708d8f9c-catalog-content\") pod \"redhat-operators-4cvbn\" (UID: \"285de6e2-513c-4e8a-95b7-bcd3708d8f9c\") " pod="openshift-marketplace/redhat-operators-4cvbn" Dec 09 08:56:27 crc kubenswrapper[4786]: I1209 08:56:27.844213 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/285de6e2-513c-4e8a-95b7-bcd3708d8f9c-utilities\") pod \"redhat-operators-4cvbn\" (UID: \"285de6e2-513c-4e8a-95b7-bcd3708d8f9c\") " pod="openshift-marketplace/redhat-operators-4cvbn" Dec 09 08:56:27 crc kubenswrapper[4786]: I1209 08:56:27.844315 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rz9xw\" (UniqueName: \"kubernetes.io/projected/285de6e2-513c-4e8a-95b7-bcd3708d8f9c-kube-api-access-rz9xw\") pod \"redhat-operators-4cvbn\" (UID: \"285de6e2-513c-4e8a-95b7-bcd3708d8f9c\") " pod="openshift-marketplace/redhat-operators-4cvbn" Dec 09 08:56:27 crc kubenswrapper[4786]: I1209 08:56:27.845440 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/285de6e2-513c-4e8a-95b7-bcd3708d8f9c-utilities\") pod \"redhat-operators-4cvbn\" (UID: \"285de6e2-513c-4e8a-95b7-bcd3708d8f9c\") " pod="openshift-marketplace/redhat-operators-4cvbn" Dec 09 08:56:27 crc kubenswrapper[4786]: I1209 08:56:27.845448 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/285de6e2-513c-4e8a-95b7-bcd3708d8f9c-catalog-content\") pod \"redhat-operators-4cvbn\" (UID: \"285de6e2-513c-4e8a-95b7-bcd3708d8f9c\") " pod="openshift-marketplace/redhat-operators-4cvbn" Dec 09 08:56:27 crc kubenswrapper[4786]: I1209 08:56:27.866576 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rz9xw\" (UniqueName: \"kubernetes.io/projected/285de6e2-513c-4e8a-95b7-bcd3708d8f9c-kube-api-access-rz9xw\") pod \"redhat-operators-4cvbn\" (UID: \"285de6e2-513c-4e8a-95b7-bcd3708d8f9c\") " pod="openshift-marketplace/redhat-operators-4cvbn" Dec 09 08:56:27 crc kubenswrapper[4786]: I1209 08:56:27.979816 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4cvbn" Dec 09 08:56:28 crc kubenswrapper[4786]: I1209 08:56:28.177118 4786 generic.go:334] "Generic (PLEG): container finished" podID="6ecba051-bc5a-42a9-b4de-bf033c4f5491" containerID="050f1afb9c72c55994e51571c4af26afce30bdd9561479dbcb7a8c80e3e4f4e2" exitCode=0 Dec 09 08:56:28 crc kubenswrapper[4786]: I1209 08:56:28.177186 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv" event={"ID":"6ecba051-bc5a-42a9-b4de-bf033c4f5491","Type":"ContainerDied","Data":"050f1afb9c72c55994e51571c4af26afce30bdd9561479dbcb7a8c80e3e4f4e2"} Dec 09 08:56:28 crc kubenswrapper[4786]: I1209 08:56:28.228069 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4cvbn"] Dec 09 08:56:28 crc kubenswrapper[4786]: W1209 08:56:28.234607 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod285de6e2_513c_4e8a_95b7_bcd3708d8f9c.slice/crio-ce13e5ee247fe5c8d3badf7766b3a1ed74ba226049c3a40370975819f395ce98 WatchSource:0}: Error finding container ce13e5ee247fe5c8d3badf7766b3a1ed74ba226049c3a40370975819f395ce98: 
Status 404 returned error can't find the container with id ce13e5ee247fe5c8d3badf7766b3a1ed74ba226049c3a40370975819f395ce98 Dec 09 08:56:29 crc kubenswrapper[4786]: I1209 08:56:29.184944 4786 generic.go:334] "Generic (PLEG): container finished" podID="285de6e2-513c-4e8a-95b7-bcd3708d8f9c" containerID="aa944e4e2670b0ea01c71cba1f6fb8dd61637100f0f4bb05b2d6ad28d56b9c2d" exitCode=0 Dec 09 08:56:29 crc kubenswrapper[4786]: I1209 08:56:29.185075 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4cvbn" event={"ID":"285de6e2-513c-4e8a-95b7-bcd3708d8f9c","Type":"ContainerDied","Data":"aa944e4e2670b0ea01c71cba1f6fb8dd61637100f0f4bb05b2d6ad28d56b9c2d"} Dec 09 08:56:29 crc kubenswrapper[4786]: I1209 08:56:29.197986 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4cvbn" event={"ID":"285de6e2-513c-4e8a-95b7-bcd3708d8f9c","Type":"ContainerStarted","Data":"ce13e5ee247fe5c8d3badf7766b3a1ed74ba226049c3a40370975819f395ce98"} Dec 09 08:56:29 crc kubenswrapper[4786]: I1209 08:56:29.470890 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv" Dec 09 08:56:29 crc kubenswrapper[4786]: I1209 08:56:29.568758 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6ecba051-bc5a-42a9-b4de-bf033c4f5491-bundle\") pod \"6ecba051-bc5a-42a9-b4de-bf033c4f5491\" (UID: \"6ecba051-bc5a-42a9-b4de-bf033c4f5491\") " Dec 09 08:56:29 crc kubenswrapper[4786]: I1209 08:56:29.568931 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6ecba051-bc5a-42a9-b4de-bf033c4f5491-util\") pod \"6ecba051-bc5a-42a9-b4de-bf033c4f5491\" (UID: \"6ecba051-bc5a-42a9-b4de-bf033c4f5491\") " Dec 09 08:56:29 crc kubenswrapper[4786]: I1209 08:56:29.569120 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4rbvw\" (UniqueName: \"kubernetes.io/projected/6ecba051-bc5a-42a9-b4de-bf033c4f5491-kube-api-access-4rbvw\") pod \"6ecba051-bc5a-42a9-b4de-bf033c4f5491\" (UID: \"6ecba051-bc5a-42a9-b4de-bf033c4f5491\") " Dec 09 08:56:29 crc kubenswrapper[4786]: I1209 08:56:29.571695 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ecba051-bc5a-42a9-b4de-bf033c4f5491-bundle" (OuterVolumeSpecName: "bundle") pod "6ecba051-bc5a-42a9-b4de-bf033c4f5491" (UID: "6ecba051-bc5a-42a9-b4de-bf033c4f5491"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 08:56:29 crc kubenswrapper[4786]: I1209 08:56:29.576938 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ecba051-bc5a-42a9-b4de-bf033c4f5491-kube-api-access-4rbvw" (OuterVolumeSpecName: "kube-api-access-4rbvw") pod "6ecba051-bc5a-42a9-b4de-bf033c4f5491" (UID: "6ecba051-bc5a-42a9-b4de-bf033c4f5491"). InnerVolumeSpecName "kube-api-access-4rbvw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 08:56:29 crc kubenswrapper[4786]: I1209 08:56:29.590314 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ecba051-bc5a-42a9-b4de-bf033c4f5491-util" (OuterVolumeSpecName: "util") pod "6ecba051-bc5a-42a9-b4de-bf033c4f5491" (UID: "6ecba051-bc5a-42a9-b4de-bf033c4f5491"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 08:56:29 crc kubenswrapper[4786]: I1209 08:56:29.670566 4786 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6ecba051-bc5a-42a9-b4de-bf033c4f5491-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 08:56:29 crc kubenswrapper[4786]: I1209 08:56:29.670949 4786 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6ecba051-bc5a-42a9-b4de-bf033c4f5491-util\") on node \"crc\" DevicePath \"\"" Dec 09 08:56:29 crc kubenswrapper[4786]: I1209 08:56:29.671038 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4rbvw\" (UniqueName: \"kubernetes.io/projected/6ecba051-bc5a-42a9-b4de-bf033c4f5491-kube-api-access-4rbvw\") on node \"crc\" DevicePath \"\"" Dec 09 08:56:30 crc kubenswrapper[4786]: I1209 08:56:30.209577 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv" event={"ID":"6ecba051-bc5a-42a9-b4de-bf033c4f5491","Type":"ContainerDied","Data":"4403a173594913b669fee56402dc375be833b35cf9ffce8904433434633a6b58"} Dec 09 08:56:30 crc kubenswrapper[4786]: I1209 08:56:30.210051 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4403a173594913b669fee56402dc375be833b35cf9ffce8904433434633a6b58" Dec 09 08:56:30 crc kubenswrapper[4786]: I1209 08:56:30.209937 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv" Dec 09 08:56:31 crc kubenswrapper[4786]: I1209 08:56:31.217947 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4cvbn" event={"ID":"285de6e2-513c-4e8a-95b7-bcd3708d8f9c","Type":"ContainerStarted","Data":"7d9d4cbcd7e84c14f5b78669ae61b8e0fd8c6e7686189752a8c767a538543fda"} Dec 09 08:56:31 crc kubenswrapper[4786]: I1209 08:56:31.601844 4786 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 09 08:56:34 crc kubenswrapper[4786]: I1209 08:56:34.556702 4786 generic.go:334] "Generic (PLEG): container finished" podID="285de6e2-513c-4e8a-95b7-bcd3708d8f9c" containerID="7d9d4cbcd7e84c14f5b78669ae61b8e0fd8c6e7686189752a8c767a538543fda" exitCode=0 Dec 09 08:56:34 crc kubenswrapper[4786]: I1209 08:56:34.556830 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4cvbn" event={"ID":"285de6e2-513c-4e8a-95b7-bcd3708d8f9c","Type":"ContainerDied","Data":"7d9d4cbcd7e84c14f5b78669ae61b8e0fd8c6e7686189752a8c767a538543fda"} Dec 09 08:56:37 crc kubenswrapper[4786]: I1209 08:56:37.578403 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4cvbn" event={"ID":"285de6e2-513c-4e8a-95b7-bcd3708d8f9c","Type":"ContainerStarted","Data":"5ab82df73e97d90b8aa2b1656fea0e6dfaa6029bd071c35ec6542cf3df31549a"} Dec 09 08:56:37 crc kubenswrapper[4786]: I1209 08:56:37.983625 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4cvbn" Dec 09 08:56:37 crc kubenswrapper[4786]: I1209 08:56:37.984096 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4cvbn" Dec 09 08:56:39 crc kubenswrapper[4786]: I1209 08:56:39.513741 4786 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/redhat-operators-4cvbn" podUID="285de6e2-513c-4e8a-95b7-bcd3708d8f9c" containerName="registry-server" probeResult="failure" output=< Dec 09 08:56:39 crc kubenswrapper[4786]: timeout: failed to connect service ":50051" within 1s Dec 09 08:56:39 crc kubenswrapper[4786]: > Dec 09 08:56:43 crc kubenswrapper[4786]: I1209 08:56:43.874455 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4cvbn" podStartSLOduration=10.702360612 podStartE2EDuration="16.874402701s" podCreationTimestamp="2025-12-09 08:56:27 +0000 UTC" firstStartedPulling="2025-12-09 08:56:29.187924109 +0000 UTC m=+755.071545335" lastFinishedPulling="2025-12-09 08:56:35.359966208 +0000 UTC m=+761.243587424" observedRunningTime="2025-12-09 08:56:37.609946403 +0000 UTC m=+763.493567639" watchObservedRunningTime="2025-12-09 08:56:43.874402701 +0000 UTC m=+769.758023917" Dec 09 08:56:43 crc kubenswrapper[4786]: I1209 08:56:43.876719 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-hwspd"] Dec 09 08:56:43 crc kubenswrapper[4786]: E1209 08:56:43.876958 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ecba051-bc5a-42a9-b4de-bf033c4f5491" containerName="extract" Dec 09 08:56:43 crc kubenswrapper[4786]: I1209 08:56:43.876980 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ecba051-bc5a-42a9-b4de-bf033c4f5491" containerName="extract" Dec 09 08:56:43 crc kubenswrapper[4786]: E1209 08:56:43.876992 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ecba051-bc5a-42a9-b4de-bf033c4f5491" containerName="pull" Dec 09 08:56:43 crc kubenswrapper[4786]: I1209 08:56:43.876998 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ecba051-bc5a-42a9-b4de-bf033c4f5491" containerName="pull" Dec 09 08:56:43 crc kubenswrapper[4786]: E1209 08:56:43.877010 4786 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="6ecba051-bc5a-42a9-b4de-bf033c4f5491" containerName="util" Dec 09 08:56:43 crc kubenswrapper[4786]: I1209 08:56:43.877016 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ecba051-bc5a-42a9-b4de-bf033c4f5491" containerName="util" Dec 09 08:56:43 crc kubenswrapper[4786]: I1209 08:56:43.877126 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ecba051-bc5a-42a9-b4de-bf033c4f5491" containerName="extract" Dec 09 08:56:43 crc kubenswrapper[4786]: I1209 08:56:43.877521 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-hwspd" Dec 09 08:56:43 crc kubenswrapper[4786]: I1209 08:56:43.881932 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Dec 09 08:56:43 crc kubenswrapper[4786]: I1209 08:56:43.881935 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-6sr4v" Dec 09 08:56:43 crc kubenswrapper[4786]: I1209 08:56:43.889437 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-hwspd"] Dec 09 08:56:43 crc kubenswrapper[4786]: I1209 08:56:43.891755 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Dec 09 08:56:43 crc kubenswrapper[4786]: I1209 08:56:43.932030 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-64dd87d88d-glsks"] Dec 09 08:56:43 crc kubenswrapper[4786]: I1209 08:56:43.932924 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-64dd87d88d-glsks" Dec 09 08:56:43 crc kubenswrapper[4786]: I1209 08:56:43.937096 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-p5v7p" Dec 09 08:56:43 crc kubenswrapper[4786]: I1209 08:56:43.937168 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Dec 09 08:56:43 crc kubenswrapper[4786]: I1209 08:56:43.951481 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-64dd87d88d-j4dpg"] Dec 09 08:56:43 crc kubenswrapper[4786]: I1209 08:56:43.952515 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-64dd87d88d-j4dpg" Dec 09 08:56:43 crc kubenswrapper[4786]: I1209 08:56:43.958969 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-64dd87d88d-glsks"] Dec 09 08:56:43 crc kubenswrapper[4786]: I1209 08:56:43.989291 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-64dd87d88d-j4dpg"] Dec 09 08:56:44 crc kubenswrapper[4786]: I1209 08:56:44.047093 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2c7d26aa-45ef-471d-bb48-671366e5928a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-64dd87d88d-j4dpg\" (UID: \"2c7d26aa-45ef-471d-bb48-671366e5928a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-64dd87d88d-j4dpg" Dec 09 08:56:44 crc kubenswrapper[4786]: I1209 08:56:44.047220 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/68535e7a-c972-4054-8849-58dedcf84cd0-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-64dd87d88d-glsks\" (UID: \"68535e7a-c972-4054-8849-58dedcf84cd0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-64dd87d88d-glsks" Dec 09 08:56:44 crc kubenswrapper[4786]: I1209 08:56:44.047274 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/68535e7a-c972-4054-8849-58dedcf84cd0-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-64dd87d88d-glsks\" (UID: \"68535e7a-c972-4054-8849-58dedcf84cd0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-64dd87d88d-glsks" Dec 09 08:56:44 crc kubenswrapper[4786]: I1209 08:56:44.047301 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2c7d26aa-45ef-471d-bb48-671366e5928a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-64dd87d88d-j4dpg\" (UID: \"2c7d26aa-45ef-471d-bb48-671366e5928a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-64dd87d88d-j4dpg" Dec 09 08:56:44 crc kubenswrapper[4786]: I1209 08:56:44.047341 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmgqn\" (UniqueName: \"kubernetes.io/projected/5642938b-acf3-4128-83bb-ef2beeb1d85c-kube-api-access-bmgqn\") pod \"obo-prometheus-operator-668cf9dfbb-hwspd\" (UID: \"5642938b-acf3-4128-83bb-ef2beeb1d85c\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-hwspd" Dec 09 08:56:44 crc kubenswrapper[4786]: I1209 08:56:44.148577 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bmgqn\" (UniqueName: \"kubernetes.io/projected/5642938b-acf3-4128-83bb-ef2beeb1d85c-kube-api-access-bmgqn\") pod 
\"obo-prometheus-operator-668cf9dfbb-hwspd\" (UID: \"5642938b-acf3-4128-83bb-ef2beeb1d85c\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-hwspd" Dec 09 08:56:44 crc kubenswrapper[4786]: I1209 08:56:44.148633 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2c7d26aa-45ef-471d-bb48-671366e5928a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-64dd87d88d-j4dpg\" (UID: \"2c7d26aa-45ef-471d-bb48-671366e5928a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-64dd87d88d-j4dpg" Dec 09 08:56:44 crc kubenswrapper[4786]: I1209 08:56:44.148690 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/68535e7a-c972-4054-8849-58dedcf84cd0-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-64dd87d88d-glsks\" (UID: \"68535e7a-c972-4054-8849-58dedcf84cd0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-64dd87d88d-glsks" Dec 09 08:56:44 crc kubenswrapper[4786]: I1209 08:56:44.148709 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/68535e7a-c972-4054-8849-58dedcf84cd0-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-64dd87d88d-glsks\" (UID: \"68535e7a-c972-4054-8849-58dedcf84cd0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-64dd87d88d-glsks" Dec 09 08:56:44 crc kubenswrapper[4786]: I1209 08:56:44.148725 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2c7d26aa-45ef-471d-bb48-671366e5928a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-64dd87d88d-j4dpg\" (UID: \"2c7d26aa-45ef-471d-bb48-671366e5928a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-64dd87d88d-j4dpg" Dec 09 08:56:44 
crc kubenswrapper[4786]: I1209 08:56:44.154942 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2c7d26aa-45ef-471d-bb48-671366e5928a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-64dd87d88d-j4dpg\" (UID: \"2c7d26aa-45ef-471d-bb48-671366e5928a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-64dd87d88d-j4dpg" Dec 09 08:56:44 crc kubenswrapper[4786]: I1209 08:56:44.154942 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/68535e7a-c972-4054-8849-58dedcf84cd0-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-64dd87d88d-glsks\" (UID: \"68535e7a-c972-4054-8849-58dedcf84cd0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-64dd87d88d-glsks" Dec 09 08:56:44 crc kubenswrapper[4786]: I1209 08:56:44.155128 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2c7d26aa-45ef-471d-bb48-671366e5928a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-64dd87d88d-j4dpg\" (UID: \"2c7d26aa-45ef-471d-bb48-671366e5928a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-64dd87d88d-j4dpg" Dec 09 08:56:44 crc kubenswrapper[4786]: I1209 08:56:44.155162 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/68535e7a-c972-4054-8849-58dedcf84cd0-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-64dd87d88d-glsks\" (UID: \"68535e7a-c972-4054-8849-58dedcf84cd0\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-64dd87d88d-glsks" Dec 09 08:56:44 crc kubenswrapper[4786]: I1209 08:56:44.260626 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-64dd87d88d-glsks" Dec 09 08:56:44 crc kubenswrapper[4786]: I1209 08:56:44.279393 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-64dd87d88d-j4dpg" Dec 09 08:56:44 crc kubenswrapper[4786]: I1209 08:56:44.666600 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmgqn\" (UniqueName: \"kubernetes.io/projected/5642938b-acf3-4128-83bb-ef2beeb1d85c-kube-api-access-bmgqn\") pod \"obo-prometheus-operator-668cf9dfbb-hwspd\" (UID: \"5642938b-acf3-4128-83bb-ef2beeb1d85c\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-hwspd" Dec 09 08:56:44 crc kubenswrapper[4786]: I1209 08:56:44.810945 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-hwspd" Dec 09 08:56:44 crc kubenswrapper[4786]: I1209 08:56:44.887392 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-kgqqn"] Dec 09 08:56:44 crc kubenswrapper[4786]: I1209 08:56:44.888751 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-kgqqn" Dec 09 08:56:44 crc kubenswrapper[4786]: I1209 08:56:44.891909 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-zwdgq" Dec 09 08:56:44 crc kubenswrapper[4786]: I1209 08:56:44.892317 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Dec 09 08:56:44 crc kubenswrapper[4786]: I1209 08:56:44.906975 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-kgqqn"] Dec 09 08:56:44 crc kubenswrapper[4786]: I1209 08:56:44.934614 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-64dd87d88d-j4dpg"] Dec 09 08:56:44 crc kubenswrapper[4786]: W1209 08:56:44.960311 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c7d26aa_45ef_471d_bb48_671366e5928a.slice/crio-c7a3bd3ba9119f4884fce3a98695c52bd51f52e01b45165176413434864d12fa WatchSource:0}: Error finding container c7a3bd3ba9119f4884fce3a98695c52bd51f52e01b45165176413434864d12fa: Status 404 returned error can't find the container with id c7a3bd3ba9119f4884fce3a98695c52bd51f52e01b45165176413434864d12fa Dec 09 08:56:45 crc kubenswrapper[4786]: I1209 08:56:45.016128 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkc8g\" (UniqueName: \"kubernetes.io/projected/1d1ef0df-f7b0-4499-b5c3-f0952d78f097-kube-api-access-qkc8g\") pod \"observability-operator-d8bb48f5d-kgqqn\" (UID: \"1d1ef0df-f7b0-4499-b5c3-f0952d78f097\") " pod="openshift-operators/observability-operator-d8bb48f5d-kgqqn" Dec 09 08:56:45 crc kubenswrapper[4786]: I1209 08:56:45.016177 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/1d1ef0df-f7b0-4499-b5c3-f0952d78f097-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-kgqqn\" (UID: \"1d1ef0df-f7b0-4499-b5c3-f0952d78f097\") " pod="openshift-operators/observability-operator-d8bb48f5d-kgqqn" Dec 09 08:56:45 crc kubenswrapper[4786]: I1209 08:56:45.117268 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkc8g\" (UniqueName: \"kubernetes.io/projected/1d1ef0df-f7b0-4499-b5c3-f0952d78f097-kube-api-access-qkc8g\") pod \"observability-operator-d8bb48f5d-kgqqn\" (UID: \"1d1ef0df-f7b0-4499-b5c3-f0952d78f097\") " pod="openshift-operators/observability-operator-d8bb48f5d-kgqqn" Dec 09 08:56:45 crc kubenswrapper[4786]: I1209 08:56:45.117354 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/1d1ef0df-f7b0-4499-b5c3-f0952d78f097-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-kgqqn\" (UID: \"1d1ef0df-f7b0-4499-b5c3-f0952d78f097\") " pod="openshift-operators/observability-operator-d8bb48f5d-kgqqn" Dec 09 08:56:45 crc kubenswrapper[4786]: I1209 08:56:45.130220 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/1d1ef0df-f7b0-4499-b5c3-f0952d78f097-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-kgqqn\" (UID: \"1d1ef0df-f7b0-4499-b5c3-f0952d78f097\") " pod="openshift-operators/observability-operator-d8bb48f5d-kgqqn" Dec 09 08:56:45 crc kubenswrapper[4786]: I1209 08:56:45.158505 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkc8g\" (UniqueName: \"kubernetes.io/projected/1d1ef0df-f7b0-4499-b5c3-f0952d78f097-kube-api-access-qkc8g\") pod \"observability-operator-d8bb48f5d-kgqqn\" (UID: \"1d1ef0df-f7b0-4499-b5c3-f0952d78f097\") " 
pod="openshift-operators/observability-operator-d8bb48f5d-kgqqn" Dec 09 08:56:45 crc kubenswrapper[4786]: I1209 08:56:45.212039 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-kgqqn" Dec 09 08:56:45 crc kubenswrapper[4786]: I1209 08:56:45.225804 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5446b9c989-2dt2t"] Dec 09 08:56:45 crc kubenswrapper[4786]: I1209 08:56:45.226768 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-2dt2t" Dec 09 08:56:45 crc kubenswrapper[4786]: I1209 08:56:45.234632 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-2dt2t"] Dec 09 08:56:45 crc kubenswrapper[4786]: I1209 08:56:45.273006 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-fr728" Dec 09 08:56:45 crc kubenswrapper[4786]: I1209 08:56:45.327938 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rszv7\" (UniqueName: \"kubernetes.io/projected/19919157-d502-47f5-9ea6-27f27a0b6742-kube-api-access-rszv7\") pod \"perses-operator-5446b9c989-2dt2t\" (UID: \"19919157-d502-47f5-9ea6-27f27a0b6742\") " pod="openshift-operators/perses-operator-5446b9c989-2dt2t" Dec 09 08:56:45 crc kubenswrapper[4786]: I1209 08:56:45.328017 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/19919157-d502-47f5-9ea6-27f27a0b6742-openshift-service-ca\") pod \"perses-operator-5446b9c989-2dt2t\" (UID: \"19919157-d502-47f5-9ea6-27f27a0b6742\") " pod="openshift-operators/perses-operator-5446b9c989-2dt2t" Dec 09 08:56:45 crc kubenswrapper[4786]: I1209 08:56:45.387216 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operators/obo-prometheus-operator-admission-webhook-64dd87d88d-glsks"] Dec 09 08:56:45 crc kubenswrapper[4786]: I1209 08:56:45.429134 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rszv7\" (UniqueName: \"kubernetes.io/projected/19919157-d502-47f5-9ea6-27f27a0b6742-kube-api-access-rszv7\") pod \"perses-operator-5446b9c989-2dt2t\" (UID: \"19919157-d502-47f5-9ea6-27f27a0b6742\") " pod="openshift-operators/perses-operator-5446b9c989-2dt2t" Dec 09 08:56:45 crc kubenswrapper[4786]: I1209 08:56:45.429199 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/19919157-d502-47f5-9ea6-27f27a0b6742-openshift-service-ca\") pod \"perses-operator-5446b9c989-2dt2t\" (UID: \"19919157-d502-47f5-9ea6-27f27a0b6742\") " pod="openshift-operators/perses-operator-5446b9c989-2dt2t" Dec 09 08:56:45 crc kubenswrapper[4786]: I1209 08:56:45.430894 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/19919157-d502-47f5-9ea6-27f27a0b6742-openshift-service-ca\") pod \"perses-operator-5446b9c989-2dt2t\" (UID: \"19919157-d502-47f5-9ea6-27f27a0b6742\") " pod="openshift-operators/perses-operator-5446b9c989-2dt2t" Dec 09 08:56:45 crc kubenswrapper[4786]: I1209 08:56:45.454236 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rszv7\" (UniqueName: \"kubernetes.io/projected/19919157-d502-47f5-9ea6-27f27a0b6742-kube-api-access-rszv7\") pod \"perses-operator-5446b9c989-2dt2t\" (UID: \"19919157-d502-47f5-9ea6-27f27a0b6742\") " pod="openshift-operators/perses-operator-5446b9c989-2dt2t" Dec 09 08:56:45 crc kubenswrapper[4786]: I1209 08:56:45.517365 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-hwspd"] Dec 09 08:56:45 crc kubenswrapper[4786]: W1209 
08:56:45.523259 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5642938b_acf3_4128_83bb_ef2beeb1d85c.slice/crio-a7d80bc5289a4f818878cd385aef9518b3df2bc0cebc46eec547b91051af5aae WatchSource:0}: Error finding container a7d80bc5289a4f818878cd385aef9518b3df2bc0cebc46eec547b91051af5aae: Status 404 returned error can't find the container with id a7d80bc5289a4f818878cd385aef9518b3df2bc0cebc46eec547b91051af5aae Dec 09 08:56:45 crc kubenswrapper[4786]: I1209 08:56:45.580205 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-2dt2t" Dec 09 08:56:45 crc kubenswrapper[4786]: I1209 08:56:45.738084 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-64dd87d88d-j4dpg" event={"ID":"2c7d26aa-45ef-471d-bb48-671366e5928a","Type":"ContainerStarted","Data":"c7a3bd3ba9119f4884fce3a98695c52bd51f52e01b45165176413434864d12fa"} Dec 09 08:56:45 crc kubenswrapper[4786]: I1209 08:56:45.739769 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-64dd87d88d-glsks" event={"ID":"68535e7a-c972-4054-8849-58dedcf84cd0","Type":"ContainerStarted","Data":"438b333edd038ea428d066f05fcb4cbd16ad6104672e60d975f515f47f041290"} Dec 09 08:56:45 crc kubenswrapper[4786]: I1209 08:56:45.741042 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-hwspd" event={"ID":"5642938b-acf3-4128-83bb-ef2beeb1d85c","Type":"ContainerStarted","Data":"a7d80bc5289a4f818878cd385aef9518b3df2bc0cebc46eec547b91051af5aae"} Dec 09 08:56:45 crc kubenswrapper[4786]: I1209 08:56:45.945385 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-kgqqn"] Dec 09 08:56:45 crc kubenswrapper[4786]: W1209 08:56:45.957257 4786 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d1ef0df_f7b0_4499_b5c3_f0952d78f097.slice/crio-d407b165933e9f6a2ff34ee905a5da2eb7f89e9f05684257ae5955ca10259196 WatchSource:0}: Error finding container d407b165933e9f6a2ff34ee905a5da2eb7f89e9f05684257ae5955ca10259196: Status 404 returned error can't find the container with id d407b165933e9f6a2ff34ee905a5da2eb7f89e9f05684257ae5955ca10259196 Dec 09 08:56:46 crc kubenswrapper[4786]: I1209 08:56:46.031482 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-2dt2t"] Dec 09 08:56:46 crc kubenswrapper[4786]: W1209 08:56:46.049184 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19919157_d502_47f5_9ea6_27f27a0b6742.slice/crio-fbc28e82904551aaa0c0682bb9e7d26ea5390e999e6f2a40102db56e17d80bdf WatchSource:0}: Error finding container fbc28e82904551aaa0c0682bb9e7d26ea5390e999e6f2a40102db56e17d80bdf: Status 404 returned error can't find the container with id fbc28e82904551aaa0c0682bb9e7d26ea5390e999e6f2a40102db56e17d80bdf Dec 09 08:56:46 crc kubenswrapper[4786]: I1209 08:56:46.767722 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-kgqqn" event={"ID":"1d1ef0df-f7b0-4499-b5c3-f0952d78f097","Type":"ContainerStarted","Data":"d407b165933e9f6a2ff34ee905a5da2eb7f89e9f05684257ae5955ca10259196"} Dec 09 08:56:46 crc kubenswrapper[4786]: I1209 08:56:46.781362 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-2dt2t" event={"ID":"19919157-d502-47f5-9ea6-27f27a0b6742","Type":"ContainerStarted","Data":"fbc28e82904551aaa0c0682bb9e7d26ea5390e999e6f2a40102db56e17d80bdf"} Dec 09 08:56:49 crc kubenswrapper[4786]: I1209 08:56:49.101439 4786 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-4cvbn" podUID="285de6e2-513c-4e8a-95b7-bcd3708d8f9c" containerName="registry-server" probeResult="failure" output=< Dec 09 08:56:49 crc kubenswrapper[4786]: timeout: failed to connect service ":50051" within 1s Dec 09 08:56:49 crc kubenswrapper[4786]: > Dec 09 08:56:54 crc kubenswrapper[4786]: I1209 08:56:54.989367 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 08:56:54 crc kubenswrapper[4786]: I1209 08:56:54.990373 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 08:56:58 crc kubenswrapper[4786]: I1209 08:56:58.254977 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4cvbn" Dec 09 08:56:58 crc kubenswrapper[4786]: I1209 08:56:58.359697 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4cvbn" Dec 09 08:56:59 crc kubenswrapper[4786]: I1209 08:56:59.906655 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4cvbn"] Dec 09 08:57:00 crc kubenswrapper[4786]: I1209 08:57:00.217281 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4cvbn" podUID="285de6e2-513c-4e8a-95b7-bcd3708d8f9c" containerName="registry-server" containerID="cri-o://5ab82df73e97d90b8aa2b1656fea0e6dfaa6029bd071c35ec6542cf3df31549a" gracePeriod=2 Dec 09 08:57:01 crc kubenswrapper[4786]: 
I1209 08:57:01.226554 4786 generic.go:334] "Generic (PLEG): container finished" podID="285de6e2-513c-4e8a-95b7-bcd3708d8f9c" containerID="5ab82df73e97d90b8aa2b1656fea0e6dfaa6029bd071c35ec6542cf3df31549a" exitCode=0 Dec 09 08:57:01 crc kubenswrapper[4786]: I1209 08:57:01.226647 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4cvbn" event={"ID":"285de6e2-513c-4e8a-95b7-bcd3708d8f9c","Type":"ContainerDied","Data":"5ab82df73e97d90b8aa2b1656fea0e6dfaa6029bd071c35ec6542cf3df31549a"} Dec 09 08:57:07 crc kubenswrapper[4786]: E1209 08:57:07.064722 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec" Dec 09 08:57:07 crc kubenswrapper[4786]: E1209 08:57:07.065384 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-operator-admission-webhook,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec,Command:[],Args:[--web.enable-tls=true --web.cert-file=/tmp/k8s-webhook-server/serving-certs/tls.crt --web.key-file=/tmp/k8s-webhook-server/serving-certs/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{209715200 0} {} BinarySI},},Requests:ResourceList{cpu: {{50 -3} {} 50m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:apiservice-cert,ReadOnly:false,MountPath:/apiserver.local.config/certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-admission-webhook-64dd87d88d-j4dpg_openshift-operators(2c7d26aa-45ef-471d-bb48-671366e5928a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 09 08:57:07 crc kubenswrapper[4786]: E1209 08:57:07.066606 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-64dd87d88d-j4dpg" podUID="2c7d26aa-45ef-471d-bb48-671366e5928a" Dec 09 08:57:07 crc kubenswrapper[4786]: I1209 08:57:07.134065 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4cvbn" Dec 09 08:57:07 crc kubenswrapper[4786]: I1209 08:57:07.256731 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/285de6e2-513c-4e8a-95b7-bcd3708d8f9c-utilities\") pod \"285de6e2-513c-4e8a-95b7-bcd3708d8f9c\" (UID: \"285de6e2-513c-4e8a-95b7-bcd3708d8f9c\") " Dec 09 08:57:07 crc kubenswrapper[4786]: I1209 08:57:07.257091 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/285de6e2-513c-4e8a-95b7-bcd3708d8f9c-catalog-content\") pod \"285de6e2-513c-4e8a-95b7-bcd3708d8f9c\" (UID: \"285de6e2-513c-4e8a-95b7-bcd3708d8f9c\") " Dec 09 08:57:07 crc kubenswrapper[4786]: I1209 08:57:07.257246 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rz9xw\" (UniqueName: \"kubernetes.io/projected/285de6e2-513c-4e8a-95b7-bcd3708d8f9c-kube-api-access-rz9xw\") pod \"285de6e2-513c-4e8a-95b7-bcd3708d8f9c\" (UID: \"285de6e2-513c-4e8a-95b7-bcd3708d8f9c\") " Dec 09 08:57:07 crc kubenswrapper[4786]: I1209 08:57:07.257780 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/285de6e2-513c-4e8a-95b7-bcd3708d8f9c-utilities" (OuterVolumeSpecName: "utilities") pod "285de6e2-513c-4e8a-95b7-bcd3708d8f9c" (UID: "285de6e2-513c-4e8a-95b7-bcd3708d8f9c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 08:57:07 crc kubenswrapper[4786]: I1209 08:57:07.261732 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/285de6e2-513c-4e8a-95b7-bcd3708d8f9c-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 08:57:07 crc kubenswrapper[4786]: I1209 08:57:07.267973 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4cvbn" Dec 09 08:57:07 crc kubenswrapper[4786]: I1209 08:57:07.268273 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4cvbn" event={"ID":"285de6e2-513c-4e8a-95b7-bcd3708d8f9c","Type":"ContainerDied","Data":"ce13e5ee247fe5c8d3badf7766b3a1ed74ba226049c3a40370975819f395ce98"} Dec 09 08:57:07 crc kubenswrapper[4786]: I1209 08:57:07.268612 4786 scope.go:117] "RemoveContainer" containerID="5ab82df73e97d90b8aa2b1656fea0e6dfaa6029bd071c35ec6542cf3df31549a" Dec 09 08:57:07 crc kubenswrapper[4786]: I1209 08:57:07.269191 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/285de6e2-513c-4e8a-95b7-bcd3708d8f9c-kube-api-access-rz9xw" (OuterVolumeSpecName: "kube-api-access-rz9xw") pod "285de6e2-513c-4e8a-95b7-bcd3708d8f9c" (UID: "285de6e2-513c-4e8a-95b7-bcd3708d8f9c"). InnerVolumeSpecName "kube-api-access-rz9xw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 08:57:07 crc kubenswrapper[4786]: I1209 08:57:07.305565 4786 scope.go:117] "RemoveContainer" containerID="7d9d4cbcd7e84c14f5b78669ae61b8e0fd8c6e7686189752a8c767a538543fda" Dec 09 08:57:07 crc kubenswrapper[4786]: I1209 08:57:07.326872 4786 scope.go:117] "RemoveContainer" containerID="aa944e4e2670b0ea01c71cba1f6fb8dd61637100f0f4bb05b2d6ad28d56b9c2d" Dec 09 08:57:07 crc kubenswrapper[4786]: I1209 08:57:07.367250 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rz9xw\" (UniqueName: \"kubernetes.io/projected/285de6e2-513c-4e8a-95b7-bcd3708d8f9c-kube-api-access-rz9xw\") on node \"crc\" DevicePath \"\"" Dec 09 08:57:07 crc kubenswrapper[4786]: I1209 08:57:07.457795 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/285de6e2-513c-4e8a-95b7-bcd3708d8f9c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "285de6e2-513c-4e8a-95b7-bcd3708d8f9c" (UID: 
"285de6e2-513c-4e8a-95b7-bcd3708d8f9c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 08:57:07 crc kubenswrapper[4786]: I1209 08:57:07.468577 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/285de6e2-513c-4e8a-95b7-bcd3708d8f9c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 08:57:07 crc kubenswrapper[4786]: I1209 08:57:07.595991 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4cvbn"] Dec 09 08:57:07 crc kubenswrapper[4786]: I1209 08:57:07.599487 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4cvbn"] Dec 09 08:57:08 crc kubenswrapper[4786]: I1209 08:57:08.275157 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-hwspd" event={"ID":"5642938b-acf3-4128-83bb-ef2beeb1d85c","Type":"ContainerStarted","Data":"a3fb31e99e8026c79323ef94bff6f96043b1502fc0b56aacead353b9ac251c93"} Dec 09 08:57:08 crc kubenswrapper[4786]: I1209 08:57:08.278350 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-64dd87d88d-j4dpg" event={"ID":"2c7d26aa-45ef-471d-bb48-671366e5928a","Type":"ContainerStarted","Data":"bef3302d8fb431060e194596f7a9d9a98263657ccb44a807816fc8f361f8af4c"} Dec 09 08:57:08 crc kubenswrapper[4786]: I1209 08:57:08.279986 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-kgqqn" event={"ID":"1d1ef0df-f7b0-4499-b5c3-f0952d78f097","Type":"ContainerStarted","Data":"e1b753743081608bcf0c9d7226cfc99fadcd6661afcf502144de9073e8ee6e2f"} Dec 09 08:57:08 crc kubenswrapper[4786]: I1209 08:57:08.280194 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-d8bb48f5d-kgqqn" Dec 09 08:57:08 crc 
kubenswrapper[4786]: I1209 08:57:08.281521 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-64dd87d88d-glsks" event={"ID":"68535e7a-c972-4054-8849-58dedcf84cd0","Type":"ContainerStarted","Data":"8d43a32af87c2543978e9a284bd4c70773efca5d083fa61e7654d6fbdfddf772"} Dec 09 08:57:08 crc kubenswrapper[4786]: I1209 08:57:08.284566 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-2dt2t" event={"ID":"19919157-d502-47f5-9ea6-27f27a0b6742","Type":"ContainerStarted","Data":"f245e6c83dc89d89af899bff2880d62a55f808b51b4555cafaa334125a787e71"} Dec 09 08:57:08 crc kubenswrapper[4786]: I1209 08:57:08.284840 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5446b9c989-2dt2t" Dec 09 08:57:08 crc kubenswrapper[4786]: I1209 08:57:08.291454 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-d8bb48f5d-kgqqn" Dec 09 08:57:08 crc kubenswrapper[4786]: I1209 08:57:08.298848 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-hwspd" podStartSLOduration=3.697211185 podStartE2EDuration="25.298822027s" podCreationTimestamp="2025-12-09 08:56:43 +0000 UTC" firstStartedPulling="2025-12-09 08:56:45.528620199 +0000 UTC m=+771.412241425" lastFinishedPulling="2025-12-09 08:57:07.130231041 +0000 UTC m=+793.013852267" observedRunningTime="2025-12-09 08:57:08.29335916 +0000 UTC m=+794.176980396" watchObservedRunningTime="2025-12-09 08:57:08.298822027 +0000 UTC m=+794.182443283" Dec 09 08:57:08 crc kubenswrapper[4786]: I1209 08:57:08.327751 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5446b9c989-2dt2t" podStartSLOduration=2.239114017 podStartE2EDuration="23.327731331s" podCreationTimestamp="2025-12-09 
08:56:45 +0000 UTC" firstStartedPulling="2025-12-09 08:56:46.059442994 +0000 UTC m=+771.943064220" lastFinishedPulling="2025-12-09 08:57:07.148060308 +0000 UTC m=+793.031681534" observedRunningTime="2025-12-09 08:57:08.321882213 +0000 UTC m=+794.205503439" watchObservedRunningTime="2025-12-09 08:57:08.327731331 +0000 UTC m=+794.211352557" Dec 09 08:57:08 crc kubenswrapper[4786]: I1209 08:57:08.339908 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-64dd87d88d-j4dpg" podStartSLOduration=-9223372011.514887 podStartE2EDuration="25.339888314s" podCreationTimestamp="2025-12-09 08:56:43 +0000 UTC" firstStartedPulling="2025-12-09 08:56:44.974072608 +0000 UTC m=+770.857693834" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:57:08.337934646 +0000 UTC m=+794.221555902" watchObservedRunningTime="2025-12-09 08:57:08.339888314 +0000 UTC m=+794.223509540" Dec 09 08:57:08 crc kubenswrapper[4786]: I1209 08:57:08.368006 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-64dd87d88d-glsks" podStartSLOduration=3.702759641 podStartE2EDuration="25.367985927s" podCreationTimestamp="2025-12-09 08:56:43 +0000 UTC" firstStartedPulling="2025-12-09 08:56:45.449056097 +0000 UTC m=+771.332677323" lastFinishedPulling="2025-12-09 08:57:07.114282383 +0000 UTC m=+792.997903609" observedRunningTime="2025-12-09 08:57:08.367613278 +0000 UTC m=+794.251234504" watchObservedRunningTime="2025-12-09 08:57:08.367985927 +0000 UTC m=+794.251607153" Dec 09 08:57:08 crc kubenswrapper[4786]: I1209 08:57:08.385482 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-d8bb48f5d-kgqqn" podStartSLOduration=3.198831586 podStartE2EDuration="24.385457965s" podCreationTimestamp="2025-12-09 08:56:44 +0000 UTC" 
firstStartedPulling="2025-12-09 08:56:45.963053019 +0000 UTC m=+771.846674245" lastFinishedPulling="2025-12-09 08:57:07.149679398 +0000 UTC m=+793.033300624" observedRunningTime="2025-12-09 08:57:08.385159988 +0000 UTC m=+794.268781214" watchObservedRunningTime="2025-12-09 08:57:08.385457965 +0000 UTC m=+794.269079221" Dec 09 08:57:09 crc kubenswrapper[4786]: I1209 08:57:09.198645 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="285de6e2-513c-4e8a-95b7-bcd3708d8f9c" path="/var/lib/kubelet/pods/285de6e2-513c-4e8a-95b7-bcd3708d8f9c/volumes" Dec 09 08:57:15 crc kubenswrapper[4786]: I1209 08:57:15.582629 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5446b9c989-2dt2t" Dec 09 08:57:24 crc kubenswrapper[4786]: I1209 08:57:24.988973 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 08:57:24 crc kubenswrapper[4786]: I1209 08:57:24.989538 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 08:57:33 crc kubenswrapper[4786]: I1209 08:57:33.573824 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh7gq6"] Dec 09 08:57:33 crc kubenswrapper[4786]: E1209 08:57:33.574984 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="285de6e2-513c-4e8a-95b7-bcd3708d8f9c" containerName="extract-utilities" Dec 09 08:57:33 crc kubenswrapper[4786]: I1209 08:57:33.575005 4786 
state_mem.go:107] "Deleted CPUSet assignment" podUID="285de6e2-513c-4e8a-95b7-bcd3708d8f9c" containerName="extract-utilities" Dec 09 08:57:33 crc kubenswrapper[4786]: E1209 08:57:33.575018 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="285de6e2-513c-4e8a-95b7-bcd3708d8f9c" containerName="registry-server" Dec 09 08:57:33 crc kubenswrapper[4786]: I1209 08:57:33.575025 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="285de6e2-513c-4e8a-95b7-bcd3708d8f9c" containerName="registry-server" Dec 09 08:57:33 crc kubenswrapper[4786]: E1209 08:57:33.575048 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="285de6e2-513c-4e8a-95b7-bcd3708d8f9c" containerName="extract-content" Dec 09 08:57:33 crc kubenswrapper[4786]: I1209 08:57:33.575055 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="285de6e2-513c-4e8a-95b7-bcd3708d8f9c" containerName="extract-content" Dec 09 08:57:33 crc kubenswrapper[4786]: I1209 08:57:33.575181 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="285de6e2-513c-4e8a-95b7-bcd3708d8f9c" containerName="registry-server" Dec 09 08:57:33 crc kubenswrapper[4786]: I1209 08:57:33.576408 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh7gq6" Dec 09 08:57:33 crc kubenswrapper[4786]: I1209 08:57:33.579571 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 09 08:57:33 crc kubenswrapper[4786]: I1209 08:57:33.592644 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh7gq6"] Dec 09 08:57:33 crc kubenswrapper[4786]: I1209 08:57:33.695134 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b928\" (UniqueName: \"kubernetes.io/projected/551ed98a-59ee-48a6-aec6-e02f7889d395-kube-api-access-9b928\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh7gq6\" (UID: \"551ed98a-59ee-48a6-aec6-e02f7889d395\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh7gq6" Dec 09 08:57:33 crc kubenswrapper[4786]: I1209 08:57:33.695201 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/551ed98a-59ee-48a6-aec6-e02f7889d395-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh7gq6\" (UID: \"551ed98a-59ee-48a6-aec6-e02f7889d395\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh7gq6" Dec 09 08:57:33 crc kubenswrapper[4786]: I1209 08:57:33.695246 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/551ed98a-59ee-48a6-aec6-e02f7889d395-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh7gq6\" (UID: \"551ed98a-59ee-48a6-aec6-e02f7889d395\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh7gq6" Dec 09 08:57:33 crc kubenswrapper[4786]: 
I1209 08:57:33.797610 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b928\" (UniqueName: \"kubernetes.io/projected/551ed98a-59ee-48a6-aec6-e02f7889d395-kube-api-access-9b928\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh7gq6\" (UID: \"551ed98a-59ee-48a6-aec6-e02f7889d395\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh7gq6" Dec 09 08:57:33 crc kubenswrapper[4786]: I1209 08:57:33.797687 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/551ed98a-59ee-48a6-aec6-e02f7889d395-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh7gq6\" (UID: \"551ed98a-59ee-48a6-aec6-e02f7889d395\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh7gq6" Dec 09 08:57:33 crc kubenswrapper[4786]: I1209 08:57:33.797738 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/551ed98a-59ee-48a6-aec6-e02f7889d395-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh7gq6\" (UID: \"551ed98a-59ee-48a6-aec6-e02f7889d395\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh7gq6" Dec 09 08:57:33 crc kubenswrapper[4786]: I1209 08:57:33.798309 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/551ed98a-59ee-48a6-aec6-e02f7889d395-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh7gq6\" (UID: \"551ed98a-59ee-48a6-aec6-e02f7889d395\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh7gq6" Dec 09 08:57:33 crc kubenswrapper[4786]: I1209 08:57:33.798383 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/551ed98a-59ee-48a6-aec6-e02f7889d395-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh7gq6\" (UID: \"551ed98a-59ee-48a6-aec6-e02f7889d395\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh7gq6" Dec 09 08:57:33 crc kubenswrapper[4786]: I1209 08:57:33.817396 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b928\" (UniqueName: \"kubernetes.io/projected/551ed98a-59ee-48a6-aec6-e02f7889d395-kube-api-access-9b928\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh7gq6\" (UID: \"551ed98a-59ee-48a6-aec6-e02f7889d395\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh7gq6" Dec 09 08:57:33 crc kubenswrapper[4786]: I1209 08:57:33.898980 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh7gq6" Dec 09 08:57:34 crc kubenswrapper[4786]: I1209 08:57:34.454900 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh7gq6"] Dec 09 08:57:34 crc kubenswrapper[4786]: I1209 08:57:34.520721 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh7gq6" event={"ID":"551ed98a-59ee-48a6-aec6-e02f7889d395","Type":"ContainerStarted","Data":"7c55d2f0e761708f85698a191ada93439b845ead075c6d118f8b65979c6a51ff"} Dec 09 08:57:35 crc kubenswrapper[4786]: I1209 08:57:35.532313 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh7gq6" event={"ID":"551ed98a-59ee-48a6-aec6-e02f7889d395","Type":"ContainerStarted","Data":"0eff0290bbb8b8fc5165b57eecc53640d4253c9da70bb9b41be97a17179cd290"} Dec 09 08:57:36 crc kubenswrapper[4786]: I1209 08:57:36.538986 4786 
generic.go:334] "Generic (PLEG): container finished" podID="551ed98a-59ee-48a6-aec6-e02f7889d395" containerID="0eff0290bbb8b8fc5165b57eecc53640d4253c9da70bb9b41be97a17179cd290" exitCode=0 Dec 09 08:57:36 crc kubenswrapper[4786]: I1209 08:57:36.539095 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh7gq6" event={"ID":"551ed98a-59ee-48a6-aec6-e02f7889d395","Type":"ContainerDied","Data":"0eff0290bbb8b8fc5165b57eecc53640d4253c9da70bb9b41be97a17179cd290"} Dec 09 08:57:41 crc kubenswrapper[4786]: I1209 08:57:41.565245 4786 generic.go:334] "Generic (PLEG): container finished" podID="551ed98a-59ee-48a6-aec6-e02f7889d395" containerID="ef07d91f0445205a19385956460b291c15bf774ff8925a6fa52e215802cdb9b2" exitCode=0 Dec 09 08:57:41 crc kubenswrapper[4786]: I1209 08:57:41.565310 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh7gq6" event={"ID":"551ed98a-59ee-48a6-aec6-e02f7889d395","Type":"ContainerDied","Data":"ef07d91f0445205a19385956460b291c15bf774ff8925a6fa52e215802cdb9b2"} Dec 09 08:57:42 crc kubenswrapper[4786]: I1209 08:57:42.582112 4786 generic.go:334] "Generic (PLEG): container finished" podID="551ed98a-59ee-48a6-aec6-e02f7889d395" containerID="0a533c14b2d14fe72921aea9e77a913ba81f2ba38074a6ccd3ce94a0c3644e7a" exitCode=0 Dec 09 08:57:42 crc kubenswrapper[4786]: I1209 08:57:42.582219 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh7gq6" event={"ID":"551ed98a-59ee-48a6-aec6-e02f7889d395","Type":"ContainerDied","Data":"0a533c14b2d14fe72921aea9e77a913ba81f2ba38074a6ccd3ce94a0c3644e7a"} Dec 09 08:57:43 crc kubenswrapper[4786]: I1209 08:57:43.875954 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh7gq6" Dec 09 08:57:44 crc kubenswrapper[4786]: I1209 08:57:44.031180 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/551ed98a-59ee-48a6-aec6-e02f7889d395-bundle\") pod \"551ed98a-59ee-48a6-aec6-e02f7889d395\" (UID: \"551ed98a-59ee-48a6-aec6-e02f7889d395\") " Dec 09 08:57:44 crc kubenswrapper[4786]: I1209 08:57:44.031246 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/551ed98a-59ee-48a6-aec6-e02f7889d395-util\") pod \"551ed98a-59ee-48a6-aec6-e02f7889d395\" (UID: \"551ed98a-59ee-48a6-aec6-e02f7889d395\") " Dec 09 08:57:44 crc kubenswrapper[4786]: I1209 08:57:44.031292 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9b928\" (UniqueName: \"kubernetes.io/projected/551ed98a-59ee-48a6-aec6-e02f7889d395-kube-api-access-9b928\") pod \"551ed98a-59ee-48a6-aec6-e02f7889d395\" (UID: \"551ed98a-59ee-48a6-aec6-e02f7889d395\") " Dec 09 08:57:44 crc kubenswrapper[4786]: I1209 08:57:44.035872 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/551ed98a-59ee-48a6-aec6-e02f7889d395-bundle" (OuterVolumeSpecName: "bundle") pod "551ed98a-59ee-48a6-aec6-e02f7889d395" (UID: "551ed98a-59ee-48a6-aec6-e02f7889d395"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 08:57:44 crc kubenswrapper[4786]: I1209 08:57:44.043306 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/551ed98a-59ee-48a6-aec6-e02f7889d395-kube-api-access-9b928" (OuterVolumeSpecName: "kube-api-access-9b928") pod "551ed98a-59ee-48a6-aec6-e02f7889d395" (UID: "551ed98a-59ee-48a6-aec6-e02f7889d395"). InnerVolumeSpecName "kube-api-access-9b928". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 08:57:44 crc kubenswrapper[4786]: I1209 08:57:44.045594 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/551ed98a-59ee-48a6-aec6-e02f7889d395-util" (OuterVolumeSpecName: "util") pod "551ed98a-59ee-48a6-aec6-e02f7889d395" (UID: "551ed98a-59ee-48a6-aec6-e02f7889d395"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 08:57:44 crc kubenswrapper[4786]: I1209 08:57:44.133417 4786 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/551ed98a-59ee-48a6-aec6-e02f7889d395-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 08:57:44 crc kubenswrapper[4786]: I1209 08:57:44.133478 4786 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/551ed98a-59ee-48a6-aec6-e02f7889d395-util\") on node \"crc\" DevicePath \"\"" Dec 09 08:57:44 crc kubenswrapper[4786]: I1209 08:57:44.133488 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9b928\" (UniqueName: \"kubernetes.io/projected/551ed98a-59ee-48a6-aec6-e02f7889d395-kube-api-access-9b928\") on node \"crc\" DevicePath \"\"" Dec 09 08:57:44 crc kubenswrapper[4786]: I1209 08:57:44.595398 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh7gq6" event={"ID":"551ed98a-59ee-48a6-aec6-e02f7889d395","Type":"ContainerDied","Data":"7c55d2f0e761708f85698a191ada93439b845ead075c6d118f8b65979c6a51ff"} Dec 09 08:57:44 crc kubenswrapper[4786]: I1209 08:57:44.595464 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c55d2f0e761708f85698a191ada93439b845ead075c6d118f8b65979c6a51ff" Dec 09 08:57:44 crc kubenswrapper[4786]: I1209 08:57:44.595542 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh7gq6" Dec 09 08:57:50 crc kubenswrapper[4786]: I1209 08:57:50.046447 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-fvtvt"] Dec 09 08:57:50 crc kubenswrapper[4786]: E1209 08:57:50.047277 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="551ed98a-59ee-48a6-aec6-e02f7889d395" containerName="pull" Dec 09 08:57:50 crc kubenswrapper[4786]: I1209 08:57:50.047306 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="551ed98a-59ee-48a6-aec6-e02f7889d395" containerName="pull" Dec 09 08:57:50 crc kubenswrapper[4786]: E1209 08:57:50.047322 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="551ed98a-59ee-48a6-aec6-e02f7889d395" containerName="extract" Dec 09 08:57:50 crc kubenswrapper[4786]: I1209 08:57:50.047330 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="551ed98a-59ee-48a6-aec6-e02f7889d395" containerName="extract" Dec 09 08:57:50 crc kubenswrapper[4786]: E1209 08:57:50.047368 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="551ed98a-59ee-48a6-aec6-e02f7889d395" containerName="util" Dec 09 08:57:50 crc kubenswrapper[4786]: I1209 08:57:50.047377 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="551ed98a-59ee-48a6-aec6-e02f7889d395" containerName="util" Dec 09 08:57:50 crc kubenswrapper[4786]: I1209 08:57:50.047595 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="551ed98a-59ee-48a6-aec6-e02f7889d395" containerName="extract" Dec 09 08:57:50 crc kubenswrapper[4786]: I1209 08:57:50.048486 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-fvtvt" Dec 09 08:57:50 crc kubenswrapper[4786]: I1209 08:57:50.051388 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Dec 09 08:57:50 crc kubenswrapper[4786]: I1209 08:57:50.051388 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Dec 09 08:57:50 crc kubenswrapper[4786]: I1209 08:57:50.051870 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-xccx5" Dec 09 08:57:50 crc kubenswrapper[4786]: I1209 08:57:50.057993 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-fvtvt"] Dec 09 08:57:50 crc kubenswrapper[4786]: I1209 08:57:50.215797 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vs4hq\" (UniqueName: \"kubernetes.io/projected/79114f82-3f7e-40ea-b197-051c986d3070-kube-api-access-vs4hq\") pod \"nmstate-operator-5b5b58f5c8-fvtvt\" (UID: \"79114f82-3f7e-40ea-b197-051c986d3070\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-fvtvt" Dec 09 08:57:50 crc kubenswrapper[4786]: I1209 08:57:50.317885 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vs4hq\" (UniqueName: \"kubernetes.io/projected/79114f82-3f7e-40ea-b197-051c986d3070-kube-api-access-vs4hq\") pod \"nmstate-operator-5b5b58f5c8-fvtvt\" (UID: \"79114f82-3f7e-40ea-b197-051c986d3070\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-fvtvt" Dec 09 08:57:50 crc kubenswrapper[4786]: I1209 08:57:50.346839 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vs4hq\" (UniqueName: \"kubernetes.io/projected/79114f82-3f7e-40ea-b197-051c986d3070-kube-api-access-vs4hq\") pod \"nmstate-operator-5b5b58f5c8-fvtvt\" (UID: 
\"79114f82-3f7e-40ea-b197-051c986d3070\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-fvtvt" Dec 09 08:57:50 crc kubenswrapper[4786]: I1209 08:57:50.366801 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-fvtvt" Dec 09 08:57:50 crc kubenswrapper[4786]: I1209 08:57:50.747173 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-fvtvt"] Dec 09 08:57:51 crc kubenswrapper[4786]: I1209 08:57:51.658040 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-fvtvt" event={"ID":"79114f82-3f7e-40ea-b197-051c986d3070","Type":"ContainerStarted","Data":"791390a537ac3a6b7b6766b4c7590237de1207dab865aba680e6e11818748cbf"} Dec 09 08:57:54 crc kubenswrapper[4786]: I1209 08:57:54.988993 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 08:57:54 crc kubenswrapper[4786]: I1209 08:57:54.989330 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 08:57:54 crc kubenswrapper[4786]: I1209 08:57:54.989453 4786 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" Dec 09 08:57:54 crc kubenswrapper[4786]: I1209 08:57:54.990087 4786 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"c1a37e915d2cb26d1e193f254aff1e4395db9084078cc5d3a08303be12c3b59b"} pod="openshift-machine-config-operator/machine-config-daemon-86k5n" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 08:57:54 crc kubenswrapper[4786]: I1209 08:57:54.990157 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" containerID="cri-o://c1a37e915d2cb26d1e193f254aff1e4395db9084078cc5d3a08303be12c3b59b" gracePeriod=600 Dec 09 08:57:55 crc kubenswrapper[4786]: I1209 08:57:55.686863 4786 generic.go:334] "Generic (PLEG): container finished" podID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerID="c1a37e915d2cb26d1e193f254aff1e4395db9084078cc5d3a08303be12c3b59b" exitCode=0 Dec 09 08:57:55 crc kubenswrapper[4786]: I1209 08:57:55.687370 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" event={"ID":"60c502d4-7f9e-4d39-a197-fa70dc4a56d1","Type":"ContainerDied","Data":"c1a37e915d2cb26d1e193f254aff1e4395db9084078cc5d3a08303be12c3b59b"} Dec 09 08:57:55 crc kubenswrapper[4786]: I1209 08:57:55.687416 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" event={"ID":"60c502d4-7f9e-4d39-a197-fa70dc4a56d1","Type":"ContainerStarted","Data":"23158cb54ec78bf37e1faebd995cbf384dcd1c26c5f2777f244e3c9e75de0774"} Dec 09 08:57:55 crc kubenswrapper[4786]: I1209 08:57:55.687455 4786 scope.go:117] "RemoveContainer" containerID="691b25d832c85f5d738ce211194caa8f083f24ddcca29e3361b0baf6122f9bf9" Dec 09 08:57:57 crc kubenswrapper[4786]: I1209 08:57:57.715344 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-fvtvt" 
event={"ID":"79114f82-3f7e-40ea-b197-051c986d3070","Type":"ContainerStarted","Data":"b0aae113875c3cf851f8c0ab202cffcb9e26f7f171c0eefe03a4dbad931be02e"} Dec 09 08:57:57 crc kubenswrapper[4786]: I1209 08:57:57.736895 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-fvtvt" podStartSLOduration=1.6849506060000001 podStartE2EDuration="7.736868112s" podCreationTimestamp="2025-12-09 08:57:50 +0000 UTC" firstStartedPulling="2025-12-09 08:57:50.781255101 +0000 UTC m=+836.664876327" lastFinishedPulling="2025-12-09 08:57:56.833172607 +0000 UTC m=+842.716793833" observedRunningTime="2025-12-09 08:57:57.733307864 +0000 UTC m=+843.616929110" watchObservedRunningTime="2025-12-09 08:57:57.736868112 +0000 UTC m=+843.620489348" Dec 09 08:57:58 crc kubenswrapper[4786]: I1209 08:57:58.773591 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-679bk"] Dec 09 08:57:58 crc kubenswrapper[4786]: I1209 08:57:58.774980 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-679bk" Dec 09 08:57:58 crc kubenswrapper[4786]: I1209 08:57:58.785782 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-22m22" Dec 09 08:57:58 crc kubenswrapper[4786]: I1209 08:57:58.808068 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-679bk"] Dec 09 08:57:58 crc kubenswrapper[4786]: I1209 08:57:58.812138 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-cgcz2"] Dec 09 08:57:58 crc kubenswrapper[4786]: I1209 08:57:58.814019 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-cgcz2" Dec 09 08:57:58 crc kubenswrapper[4786]: I1209 08:57:58.842157 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbtgx\" (UniqueName: \"kubernetes.io/projected/3d5f00bd-8538-4255-8012-736caf10840a-kube-api-access-gbtgx\") pod \"nmstate-metrics-7f946cbc9-679bk\" (UID: \"3d5f00bd-8538-4255-8012-736caf10840a\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-679bk" Dec 09 08:57:58 crc kubenswrapper[4786]: I1209 08:57:58.847550 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-gszq4"] Dec 09 08:57:58 crc kubenswrapper[4786]: I1209 08:57:58.848322 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-gszq4" Dec 09 08:57:58 crc kubenswrapper[4786]: I1209 08:57:58.851396 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Dec 09 08:57:58 crc kubenswrapper[4786]: I1209 08:57:58.879831 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-gszq4"] Dec 09 08:57:58 crc kubenswrapper[4786]: I1209 08:57:58.944343 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/d6bacb3d-0915-4228-979a-ea9b6d283ff7-ovs-socket\") pod \"nmstate-handler-cgcz2\" (UID: \"d6bacb3d-0915-4228-979a-ea9b6d283ff7\") " pod="openshift-nmstate/nmstate-handler-cgcz2" Dec 09 08:57:58 crc kubenswrapper[4786]: I1209 08:57:58.944440 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbtgx\" (UniqueName: \"kubernetes.io/projected/3d5f00bd-8538-4255-8012-736caf10840a-kube-api-access-gbtgx\") pod \"nmstate-metrics-7f946cbc9-679bk\" (UID: \"3d5f00bd-8538-4255-8012-736caf10840a\") " 
pod="openshift-nmstate/nmstate-metrics-7f946cbc9-679bk" Dec 09 08:57:58 crc kubenswrapper[4786]: I1209 08:57:58.944492 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ng9z\" (UniqueName: \"kubernetes.io/projected/cd6cbdfd-eed1-4b1e-9ec3-f0c83ea2811e-kube-api-access-2ng9z\") pod \"nmstate-webhook-5f6d4c5ccb-gszq4\" (UID: \"cd6cbdfd-eed1-4b1e-9ec3-f0c83ea2811e\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-gszq4" Dec 09 08:57:58 crc kubenswrapper[4786]: I1209 08:57:58.944516 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/d6bacb3d-0915-4228-979a-ea9b6d283ff7-dbus-socket\") pod \"nmstate-handler-cgcz2\" (UID: \"d6bacb3d-0915-4228-979a-ea9b6d283ff7\") " pod="openshift-nmstate/nmstate-handler-cgcz2" Dec 09 08:57:58 crc kubenswrapper[4786]: I1209 08:57:58.944580 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/cd6cbdfd-eed1-4b1e-9ec3-f0c83ea2811e-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-gszq4\" (UID: \"cd6cbdfd-eed1-4b1e-9ec3-f0c83ea2811e\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-gszq4" Dec 09 08:57:58 crc kubenswrapper[4786]: I1209 08:57:58.944603 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mszj2\" (UniqueName: \"kubernetes.io/projected/d6bacb3d-0915-4228-979a-ea9b6d283ff7-kube-api-access-mszj2\") pod \"nmstate-handler-cgcz2\" (UID: \"d6bacb3d-0915-4228-979a-ea9b6d283ff7\") " pod="openshift-nmstate/nmstate-handler-cgcz2" Dec 09 08:57:58 crc kubenswrapper[4786]: I1209 08:57:58.944668 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/d6bacb3d-0915-4228-979a-ea9b6d283ff7-nmstate-lock\") pod 
\"nmstate-handler-cgcz2\" (UID: \"d6bacb3d-0915-4228-979a-ea9b6d283ff7\") " pod="openshift-nmstate/nmstate-handler-cgcz2" Dec 09 08:57:58 crc kubenswrapper[4786]: I1209 08:57:58.970536 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbtgx\" (UniqueName: \"kubernetes.io/projected/3d5f00bd-8538-4255-8012-736caf10840a-kube-api-access-gbtgx\") pod \"nmstate-metrics-7f946cbc9-679bk\" (UID: \"3d5f00bd-8538-4255-8012-736caf10840a\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-679bk" Dec 09 08:57:58 crc kubenswrapper[4786]: I1209 08:57:58.995979 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-z52fp"] Dec 09 08:57:58 crc kubenswrapper[4786]: I1209 08:57:58.996745 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-z52fp" Dec 09 08:57:59 crc kubenswrapper[4786]: I1209 08:57:59.000620 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Dec 09 08:57:59 crc kubenswrapper[4786]: I1209 08:57:59.001691 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-djplq" Dec 09 08:57:59 crc kubenswrapper[4786]: I1209 08:57:59.002950 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Dec 09 08:57:59 crc kubenswrapper[4786]: I1209 08:57:59.010121 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-z52fp"] Dec 09 08:57:59 crc kubenswrapper[4786]: I1209 08:57:59.046149 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/cd6cbdfd-eed1-4b1e-9ec3-f0c83ea2811e-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-gszq4\" (UID: \"cd6cbdfd-eed1-4b1e-9ec3-f0c83ea2811e\") " 
pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-gszq4" Dec 09 08:57:59 crc kubenswrapper[4786]: I1209 08:57:59.046194 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mszj2\" (UniqueName: \"kubernetes.io/projected/d6bacb3d-0915-4228-979a-ea9b6d283ff7-kube-api-access-mszj2\") pod \"nmstate-handler-cgcz2\" (UID: \"d6bacb3d-0915-4228-979a-ea9b6d283ff7\") " pod="openshift-nmstate/nmstate-handler-cgcz2" Dec 09 08:57:59 crc kubenswrapper[4786]: I1209 08:57:59.046219 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/d6bacb3d-0915-4228-979a-ea9b6d283ff7-nmstate-lock\") pod \"nmstate-handler-cgcz2\" (UID: \"d6bacb3d-0915-4228-979a-ea9b6d283ff7\") " pod="openshift-nmstate/nmstate-handler-cgcz2" Dec 09 08:57:59 crc kubenswrapper[4786]: I1209 08:57:59.046275 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/d6bacb3d-0915-4228-979a-ea9b6d283ff7-ovs-socket\") pod \"nmstate-handler-cgcz2\" (UID: \"d6bacb3d-0915-4228-979a-ea9b6d283ff7\") " pod="openshift-nmstate/nmstate-handler-cgcz2" Dec 09 08:57:59 crc kubenswrapper[4786]: I1209 08:57:59.046308 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ng9z\" (UniqueName: \"kubernetes.io/projected/cd6cbdfd-eed1-4b1e-9ec3-f0c83ea2811e-kube-api-access-2ng9z\") pod \"nmstate-webhook-5f6d4c5ccb-gszq4\" (UID: \"cd6cbdfd-eed1-4b1e-9ec3-f0c83ea2811e\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-gszq4" Dec 09 08:57:59 crc kubenswrapper[4786]: I1209 08:57:59.046330 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/d6bacb3d-0915-4228-979a-ea9b6d283ff7-dbus-socket\") pod \"nmstate-handler-cgcz2\" (UID: \"d6bacb3d-0915-4228-979a-ea9b6d283ff7\") " 
pod="openshift-nmstate/nmstate-handler-cgcz2" Dec 09 08:57:59 crc kubenswrapper[4786]: I1209 08:57:59.046452 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/d6bacb3d-0915-4228-979a-ea9b6d283ff7-nmstate-lock\") pod \"nmstate-handler-cgcz2\" (UID: \"d6bacb3d-0915-4228-979a-ea9b6d283ff7\") " pod="openshift-nmstate/nmstate-handler-cgcz2" Dec 09 08:57:59 crc kubenswrapper[4786]: I1209 08:57:59.046559 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/d6bacb3d-0915-4228-979a-ea9b6d283ff7-ovs-socket\") pod \"nmstate-handler-cgcz2\" (UID: \"d6bacb3d-0915-4228-979a-ea9b6d283ff7\") " pod="openshift-nmstate/nmstate-handler-cgcz2" Dec 09 08:57:59 crc kubenswrapper[4786]: I1209 08:57:59.046705 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/d6bacb3d-0915-4228-979a-ea9b6d283ff7-dbus-socket\") pod \"nmstate-handler-cgcz2\" (UID: \"d6bacb3d-0915-4228-979a-ea9b6d283ff7\") " pod="openshift-nmstate/nmstate-handler-cgcz2" Dec 09 08:57:59 crc kubenswrapper[4786]: I1209 08:57:59.051185 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/cd6cbdfd-eed1-4b1e-9ec3-f0c83ea2811e-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-gszq4\" (UID: \"cd6cbdfd-eed1-4b1e-9ec3-f0c83ea2811e\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-gszq4" Dec 09 08:57:59 crc kubenswrapper[4786]: I1209 08:57:59.069295 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ng9z\" (UniqueName: \"kubernetes.io/projected/cd6cbdfd-eed1-4b1e-9ec3-f0c83ea2811e-kube-api-access-2ng9z\") pod \"nmstate-webhook-5f6d4c5ccb-gszq4\" (UID: \"cd6cbdfd-eed1-4b1e-9ec3-f0c83ea2811e\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-gszq4" Dec 09 08:57:59 crc kubenswrapper[4786]: 
I1209 08:57:59.083592 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mszj2\" (UniqueName: \"kubernetes.io/projected/d6bacb3d-0915-4228-979a-ea9b6d283ff7-kube-api-access-mszj2\") pod \"nmstate-handler-cgcz2\" (UID: \"d6bacb3d-0915-4228-979a-ea9b6d283ff7\") " pod="openshift-nmstate/nmstate-handler-cgcz2" Dec 09 08:57:59 crc kubenswrapper[4786]: I1209 08:57:59.096851 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-679bk" Dec 09 08:57:59 crc kubenswrapper[4786]: I1209 08:57:59.136875 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-cgcz2" Dec 09 08:57:59 crc kubenswrapper[4786]: I1209 08:57:59.149563 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/21a68ddc-31de-4083-ac88-bdf6ffd0afa7-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-z52fp\" (UID: \"21a68ddc-31de-4083-ac88-bdf6ffd0afa7\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-z52fp" Dec 09 08:57:59 crc kubenswrapper[4786]: I1209 08:57:59.149649 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ft82\" (UniqueName: \"kubernetes.io/projected/21a68ddc-31de-4083-ac88-bdf6ffd0afa7-kube-api-access-2ft82\") pod \"nmstate-console-plugin-7fbb5f6569-z52fp\" (UID: \"21a68ddc-31de-4083-ac88-bdf6ffd0afa7\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-z52fp" Dec 09 08:57:59 crc kubenswrapper[4786]: I1209 08:57:59.149672 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/21a68ddc-31de-4083-ac88-bdf6ffd0afa7-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-z52fp\" (UID: \"21a68ddc-31de-4083-ac88-bdf6ffd0afa7\") " 
pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-z52fp" Dec 09 08:57:59 crc kubenswrapper[4786]: I1209 08:57:59.170900 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-gszq4" Dec 09 08:57:59 crc kubenswrapper[4786]: I1209 08:57:59.251631 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/21a68ddc-31de-4083-ac88-bdf6ffd0afa7-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-z52fp\" (UID: \"21a68ddc-31de-4083-ac88-bdf6ffd0afa7\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-z52fp" Dec 09 08:57:59 crc kubenswrapper[4786]: I1209 08:57:59.252207 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ft82\" (UniqueName: \"kubernetes.io/projected/21a68ddc-31de-4083-ac88-bdf6ffd0afa7-kube-api-access-2ft82\") pod \"nmstate-console-plugin-7fbb5f6569-z52fp\" (UID: \"21a68ddc-31de-4083-ac88-bdf6ffd0afa7\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-z52fp" Dec 09 08:57:59 crc kubenswrapper[4786]: I1209 08:57:59.252254 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/21a68ddc-31de-4083-ac88-bdf6ffd0afa7-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-z52fp\" (UID: \"21a68ddc-31de-4083-ac88-bdf6ffd0afa7\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-z52fp" Dec 09 08:57:59 crc kubenswrapper[4786]: E1209 08:57:59.266786 4786 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Dec 09 08:57:59 crc kubenswrapper[4786]: E1209 08:57:59.266898 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21a68ddc-31de-4083-ac88-bdf6ffd0afa7-plugin-serving-cert podName:21a68ddc-31de-4083-ac88-bdf6ffd0afa7 nodeName:}" failed. 
No retries permitted until 2025-12-09 08:57:59.766873452 +0000 UTC m=+845.650494678 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/21a68ddc-31de-4083-ac88-bdf6ffd0afa7-plugin-serving-cert") pod "nmstate-console-plugin-7fbb5f6569-z52fp" (UID: "21a68ddc-31de-4083-ac88-bdf6ffd0afa7") : secret "plugin-serving-cert" not found Dec 09 08:57:59 crc kubenswrapper[4786]: I1209 08:57:59.267433 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/21a68ddc-31de-4083-ac88-bdf6ffd0afa7-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-z52fp\" (UID: \"21a68ddc-31de-4083-ac88-bdf6ffd0afa7\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-z52fp" Dec 09 08:57:59 crc kubenswrapper[4786]: I1209 08:57:59.330148 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ft82\" (UniqueName: \"kubernetes.io/projected/21a68ddc-31de-4083-ac88-bdf6ffd0afa7-kube-api-access-2ft82\") pod \"nmstate-console-plugin-7fbb5f6569-z52fp\" (UID: \"21a68ddc-31de-4083-ac88-bdf6ffd0afa7\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-z52fp" Dec 09 08:57:59 crc kubenswrapper[4786]: I1209 08:57:59.371539 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7f6f866d9d-99v2z"] Dec 09 08:57:59 crc kubenswrapper[4786]: I1209 08:57:59.372681 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7f6f866d9d-99v2z" Dec 09 08:57:59 crc kubenswrapper[4786]: I1209 08:57:59.397794 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7f6f866d9d-99v2z"] Dec 09 08:57:59 crc kubenswrapper[4786]: I1209 08:57:59.467503 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt4bf\" (UniqueName: \"kubernetes.io/projected/c7bf5a91-01cd-49d1-8961-62aed2d06db7-kube-api-access-wt4bf\") pod \"console-7f6f866d9d-99v2z\" (UID: \"c7bf5a91-01cd-49d1-8961-62aed2d06db7\") " pod="openshift-console/console-7f6f866d9d-99v2z" Dec 09 08:57:59 crc kubenswrapper[4786]: I1209 08:57:59.468771 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c7bf5a91-01cd-49d1-8961-62aed2d06db7-console-serving-cert\") pod \"console-7f6f866d9d-99v2z\" (UID: \"c7bf5a91-01cd-49d1-8961-62aed2d06db7\") " pod="openshift-console/console-7f6f866d9d-99v2z" Dec 09 08:57:59 crc kubenswrapper[4786]: I1209 08:57:59.468821 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c7bf5a91-01cd-49d1-8961-62aed2d06db7-service-ca\") pod \"console-7f6f866d9d-99v2z\" (UID: \"c7bf5a91-01cd-49d1-8961-62aed2d06db7\") " pod="openshift-console/console-7f6f866d9d-99v2z" Dec 09 08:57:59 crc kubenswrapper[4786]: I1209 08:57:59.468868 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c7bf5a91-01cd-49d1-8961-62aed2d06db7-oauth-serving-cert\") pod \"console-7f6f866d9d-99v2z\" (UID: \"c7bf5a91-01cd-49d1-8961-62aed2d06db7\") " pod="openshift-console/console-7f6f866d9d-99v2z" Dec 09 08:57:59 crc kubenswrapper[4786]: I1209 08:57:59.468893 4786 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c7bf5a91-01cd-49d1-8961-62aed2d06db7-console-oauth-config\") pod \"console-7f6f866d9d-99v2z\" (UID: \"c7bf5a91-01cd-49d1-8961-62aed2d06db7\") " pod="openshift-console/console-7f6f866d9d-99v2z" Dec 09 08:57:59 crc kubenswrapper[4786]: I1209 08:57:59.468926 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c7bf5a91-01cd-49d1-8961-62aed2d06db7-console-config\") pod \"console-7f6f866d9d-99v2z\" (UID: \"c7bf5a91-01cd-49d1-8961-62aed2d06db7\") " pod="openshift-console/console-7f6f866d9d-99v2z" Dec 09 08:57:59 crc kubenswrapper[4786]: I1209 08:57:59.468956 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7bf5a91-01cd-49d1-8961-62aed2d06db7-trusted-ca-bundle\") pod \"console-7f6f866d9d-99v2z\" (UID: \"c7bf5a91-01cd-49d1-8961-62aed2d06db7\") " pod="openshift-console/console-7f6f866d9d-99v2z" Dec 09 08:57:59 crc kubenswrapper[4786]: I1209 08:57:59.578283 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c7bf5a91-01cd-49d1-8961-62aed2d06db7-console-serving-cert\") pod \"console-7f6f866d9d-99v2z\" (UID: \"c7bf5a91-01cd-49d1-8961-62aed2d06db7\") " pod="openshift-console/console-7f6f866d9d-99v2z" Dec 09 08:57:59 crc kubenswrapper[4786]: I1209 08:57:59.578660 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c7bf5a91-01cd-49d1-8961-62aed2d06db7-service-ca\") pod \"console-7f6f866d9d-99v2z\" (UID: \"c7bf5a91-01cd-49d1-8961-62aed2d06db7\") " pod="openshift-console/console-7f6f866d9d-99v2z" Dec 09 08:57:59 crc kubenswrapper[4786]: I1209 08:57:59.578682 4786 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c7bf5a91-01cd-49d1-8961-62aed2d06db7-oauth-serving-cert\") pod \"console-7f6f866d9d-99v2z\" (UID: \"c7bf5a91-01cd-49d1-8961-62aed2d06db7\") " pod="openshift-console/console-7f6f866d9d-99v2z" Dec 09 08:57:59 crc kubenswrapper[4786]: I1209 08:57:59.578707 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c7bf5a91-01cd-49d1-8961-62aed2d06db7-console-oauth-config\") pod \"console-7f6f866d9d-99v2z\" (UID: \"c7bf5a91-01cd-49d1-8961-62aed2d06db7\") " pod="openshift-console/console-7f6f866d9d-99v2z" Dec 09 08:57:59 crc kubenswrapper[4786]: I1209 08:57:59.578745 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c7bf5a91-01cd-49d1-8961-62aed2d06db7-console-config\") pod \"console-7f6f866d9d-99v2z\" (UID: \"c7bf5a91-01cd-49d1-8961-62aed2d06db7\") " pod="openshift-console/console-7f6f866d9d-99v2z" Dec 09 08:57:59 crc kubenswrapper[4786]: I1209 08:57:59.578781 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7bf5a91-01cd-49d1-8961-62aed2d06db7-trusted-ca-bundle\") pod \"console-7f6f866d9d-99v2z\" (UID: \"c7bf5a91-01cd-49d1-8961-62aed2d06db7\") " pod="openshift-console/console-7f6f866d9d-99v2z" Dec 09 08:57:59 crc kubenswrapper[4786]: I1209 08:57:59.578831 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wt4bf\" (UniqueName: \"kubernetes.io/projected/c7bf5a91-01cd-49d1-8961-62aed2d06db7-kube-api-access-wt4bf\") pod \"console-7f6f866d9d-99v2z\" (UID: \"c7bf5a91-01cd-49d1-8961-62aed2d06db7\") " pod="openshift-console/console-7f6f866d9d-99v2z" Dec 09 08:57:59 crc kubenswrapper[4786]: I1209 08:57:59.580032 4786 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c7bf5a91-01cd-49d1-8961-62aed2d06db7-console-config\") pod \"console-7f6f866d9d-99v2z\" (UID: \"c7bf5a91-01cd-49d1-8961-62aed2d06db7\") " pod="openshift-console/console-7f6f866d9d-99v2z" Dec 09 08:57:59 crc kubenswrapper[4786]: I1209 08:57:59.580132 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c7bf5a91-01cd-49d1-8961-62aed2d06db7-service-ca\") pod \"console-7f6f866d9d-99v2z\" (UID: \"c7bf5a91-01cd-49d1-8961-62aed2d06db7\") " pod="openshift-console/console-7f6f866d9d-99v2z" Dec 09 08:57:59 crc kubenswrapper[4786]: I1209 08:57:59.580930 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c7bf5a91-01cd-49d1-8961-62aed2d06db7-oauth-serving-cert\") pod \"console-7f6f866d9d-99v2z\" (UID: \"c7bf5a91-01cd-49d1-8961-62aed2d06db7\") " pod="openshift-console/console-7f6f866d9d-99v2z" Dec 09 08:57:59 crc kubenswrapper[4786]: I1209 08:57:59.582057 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7bf5a91-01cd-49d1-8961-62aed2d06db7-trusted-ca-bundle\") pod \"console-7f6f866d9d-99v2z\" (UID: \"c7bf5a91-01cd-49d1-8961-62aed2d06db7\") " pod="openshift-console/console-7f6f866d9d-99v2z" Dec 09 08:57:59 crc kubenswrapper[4786]: I1209 08:57:59.618722 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c7bf5a91-01cd-49d1-8961-62aed2d06db7-console-serving-cert\") pod \"console-7f6f866d9d-99v2z\" (UID: \"c7bf5a91-01cd-49d1-8961-62aed2d06db7\") " pod="openshift-console/console-7f6f866d9d-99v2z" Dec 09 08:57:59 crc kubenswrapper[4786]: I1209 08:57:59.624204 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c7bf5a91-01cd-49d1-8961-62aed2d06db7-console-oauth-config\") pod \"console-7f6f866d9d-99v2z\" (UID: \"c7bf5a91-01cd-49d1-8961-62aed2d06db7\") " pod="openshift-console/console-7f6f866d9d-99v2z" Dec 09 08:57:59 crc kubenswrapper[4786]: I1209 08:57:59.634891 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt4bf\" (UniqueName: \"kubernetes.io/projected/c7bf5a91-01cd-49d1-8961-62aed2d06db7-kube-api-access-wt4bf\") pod \"console-7f6f866d9d-99v2z\" (UID: \"c7bf5a91-01cd-49d1-8961-62aed2d06db7\") " pod="openshift-console/console-7f6f866d9d-99v2z" Dec 09 08:57:59 crc kubenswrapper[4786]: I1209 08:57:59.671624 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-679bk"] Dec 09 08:57:59 crc kubenswrapper[4786]: I1209 08:57:59.728028 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-gszq4"] Dec 09 08:57:59 crc kubenswrapper[4786]: I1209 08:57:59.729837 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-679bk" event={"ID":"3d5f00bd-8538-4255-8012-736caf10840a","Type":"ContainerStarted","Data":"139dbdf470105394e603abcf85eca0d06dd445a139061e9312399ad0bed8d714"} Dec 09 08:57:59 crc kubenswrapper[4786]: I1209 08:57:59.730822 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-cgcz2" event={"ID":"d6bacb3d-0915-4228-979a-ea9b6d283ff7","Type":"ContainerStarted","Data":"1c116f9d4e59f15e883490e3927357bdad5723f805039cb68a5db8acd1a41277"} Dec 09 08:57:59 crc kubenswrapper[4786]: I1209 08:57:59.781932 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/21a68ddc-31de-4083-ac88-bdf6ffd0afa7-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-z52fp\" (UID: 
\"21a68ddc-31de-4083-ac88-bdf6ffd0afa7\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-z52fp" Dec 09 08:57:59 crc kubenswrapper[4786]: I1209 08:57:59.785187 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/21a68ddc-31de-4083-ac88-bdf6ffd0afa7-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-z52fp\" (UID: \"21a68ddc-31de-4083-ac88-bdf6ffd0afa7\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-z52fp" Dec 09 08:57:59 crc kubenswrapper[4786]: I1209 08:57:59.812219 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7f6f866d9d-99v2z" Dec 09 08:57:59 crc kubenswrapper[4786]: I1209 08:57:59.923135 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-z52fp" Dec 09 08:58:00 crc kubenswrapper[4786]: I1209 08:58:00.171281 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-z52fp"] Dec 09 08:58:00 crc kubenswrapper[4786]: I1209 08:58:00.254730 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7f6f866d9d-99v2z"] Dec 09 08:58:00 crc kubenswrapper[4786]: I1209 08:58:00.740498 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-gszq4" event={"ID":"cd6cbdfd-eed1-4b1e-9ec3-f0c83ea2811e","Type":"ContainerStarted","Data":"b7d23dafa066d9dfd89118cdf3f03dff46224640c4feb83df3d15ea0c1d0bbbe"} Dec 09 08:58:00 crc kubenswrapper[4786]: I1209 08:58:00.741618 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-z52fp" event={"ID":"21a68ddc-31de-4083-ac88-bdf6ffd0afa7","Type":"ContainerStarted","Data":"0baa8050715fed73cf26d1a5c2c27bf1611d9d10442cb9459088bee063e0d679"} Dec 09 08:58:00 crc kubenswrapper[4786]: I1209 08:58:00.743359 4786 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7f6f866d9d-99v2z" event={"ID":"c7bf5a91-01cd-49d1-8961-62aed2d06db7","Type":"ContainerStarted","Data":"df45141cb9f2cd6f269c242d558174301e6a27bfe56a82aae15a793bdb8f3017"} Dec 09 08:58:01 crc kubenswrapper[4786]: I1209 08:58:01.751981 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7f6f866d9d-99v2z" event={"ID":"c7bf5a91-01cd-49d1-8961-62aed2d06db7","Type":"ContainerStarted","Data":"39b0f17e0aa455889ba1cd31459848eca504f39777156423928c9bfc5a55d65c"} Dec 09 08:58:04 crc kubenswrapper[4786]: I1209 08:58:04.788861 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-gszq4" event={"ID":"cd6cbdfd-eed1-4b1e-9ec3-f0c83ea2811e","Type":"ContainerStarted","Data":"e229aa4369b1be0aea5e825a262c9eedc4875545c611f37e4e8d27f066a54abb"} Dec 09 08:58:04 crc kubenswrapper[4786]: I1209 08:58:04.789325 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-gszq4" Dec 09 08:58:04 crc kubenswrapper[4786]: I1209 08:58:04.791315 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-z52fp" event={"ID":"21a68ddc-31de-4083-ac88-bdf6ffd0afa7","Type":"ContainerStarted","Data":"ee0a51f139abecb890b61e6b7e48ca8558fdf0e4dde01567c09707b198ca32d2"} Dec 09 08:58:04 crc kubenswrapper[4786]: I1209 08:58:04.809780 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-gszq4" podStartSLOduration=2.467012847 podStartE2EDuration="6.809758509s" podCreationTimestamp="2025-12-09 08:57:58 +0000 UTC" firstStartedPulling="2025-12-09 08:57:59.734130726 +0000 UTC m=+845.617751952" lastFinishedPulling="2025-12-09 08:58:04.076876388 +0000 UTC m=+849.960497614" observedRunningTime="2025-12-09 08:58:04.802964339 +0000 UTC m=+850.686585565" 
watchObservedRunningTime="2025-12-09 08:58:04.809758509 +0000 UTC m=+850.693379745" Dec 09 08:58:04 crc kubenswrapper[4786]: I1209 08:58:04.810121 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7f6f866d9d-99v2z" podStartSLOduration=5.8101149880000005 podStartE2EDuration="5.810114988s" podCreationTimestamp="2025-12-09 08:57:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:58:01.861633229 +0000 UTC m=+847.745254455" watchObservedRunningTime="2025-12-09 08:58:04.810114988 +0000 UTC m=+850.693736234" Dec 09 08:58:04 crc kubenswrapper[4786]: I1209 08:58:04.832580 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-z52fp" podStartSLOduration=2.9458832409999998 podStartE2EDuration="6.832557729s" podCreationTimestamp="2025-12-09 08:57:58 +0000 UTC" firstStartedPulling="2025-12-09 08:58:00.192174929 +0000 UTC m=+846.075796155" lastFinishedPulling="2025-12-09 08:58:04.078849417 +0000 UTC m=+849.962470643" observedRunningTime="2025-12-09 08:58:04.830802886 +0000 UTC m=+850.714424122" watchObservedRunningTime="2025-12-09 08:58:04.832557729 +0000 UTC m=+850.716178955" Dec 09 08:58:07 crc kubenswrapper[4786]: I1209 08:58:07.819344 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-679bk" event={"ID":"3d5f00bd-8538-4255-8012-736caf10840a","Type":"ContainerStarted","Data":"9fc5c2762e9b3a8769a253c1036210845a7147ce3d436161431dcb9b3f8d2131"} Dec 09 08:58:07 crc kubenswrapper[4786]: I1209 08:58:07.820754 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-cgcz2" event={"ID":"d6bacb3d-0915-4228-979a-ea9b6d283ff7","Type":"ContainerStarted","Data":"802b7b6f099bf46765173ac6f86ee7608d988d9b76bec90785a36223eea3fadc"} Dec 09 08:58:07 crc kubenswrapper[4786]: I1209 
08:58:07.820943 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-cgcz2" Dec 09 08:58:07 crc kubenswrapper[4786]: I1209 08:58:07.845520 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-cgcz2" podStartSLOduration=2.298361936 podStartE2EDuration="9.845486691s" podCreationTimestamp="2025-12-09 08:57:58 +0000 UTC" firstStartedPulling="2025-12-09 08:57:59.244880922 +0000 UTC m=+845.128502138" lastFinishedPulling="2025-12-09 08:58:06.792005667 +0000 UTC m=+852.675626893" observedRunningTime="2025-12-09 08:58:07.836101726 +0000 UTC m=+853.719722962" watchObservedRunningTime="2025-12-09 08:58:07.845486691 +0000 UTC m=+853.729107927" Dec 09 08:58:09 crc kubenswrapper[4786]: I1209 08:58:09.812815 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7f6f866d9d-99v2z" Dec 09 08:58:09 crc kubenswrapper[4786]: I1209 08:58:09.813778 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7f6f866d9d-99v2z" Dec 09 08:58:09 crc kubenswrapper[4786]: I1209 08:58:09.819527 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7f6f866d9d-99v2z" Dec 09 08:58:09 crc kubenswrapper[4786]: I1209 08:58:09.859186 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-679bk" event={"ID":"3d5f00bd-8538-4255-8012-736caf10840a","Type":"ContainerStarted","Data":"5d397cee098a92f9437138a5b77d482ddddb394fb8b9916e2f97210dd9d98dad"} Dec 09 08:58:09 crc kubenswrapper[4786]: I1209 08:58:09.863140 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7f6f866d9d-99v2z" Dec 09 08:58:09 crc kubenswrapper[4786]: I1209 08:58:09.889548 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-nmstate/nmstate-metrics-7f946cbc9-679bk" podStartSLOduration=2.31688981 podStartE2EDuration="11.889523955s" podCreationTimestamp="2025-12-09 08:57:58 +0000 UTC" firstStartedPulling="2025-12-09 08:57:59.695713234 +0000 UTC m=+845.579334460" lastFinishedPulling="2025-12-09 08:58:09.268347359 +0000 UTC m=+855.151968605" observedRunningTime="2025-12-09 08:58:09.888339445 +0000 UTC m=+855.771960681" watchObservedRunningTime="2025-12-09 08:58:09.889523955 +0000 UTC m=+855.773145181" Dec 09 08:58:09 crc kubenswrapper[4786]: I1209 08:58:09.975372 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-sgqjs"] Dec 09 08:58:14 crc kubenswrapper[4786]: I1209 08:58:14.182822 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-cgcz2" Dec 09 08:58:19 crc kubenswrapper[4786]: I1209 08:58:19.177784 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-gszq4" Dec 09 08:58:23 crc kubenswrapper[4786]: I1209 08:58:23.966096 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4lrr8"] Dec 09 08:58:23 crc kubenswrapper[4786]: I1209 08:58:23.970021 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4lrr8" Dec 09 08:58:23 crc kubenswrapper[4786]: I1209 08:58:23.982972 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4lrr8"] Dec 09 08:58:23 crc kubenswrapper[4786]: I1209 08:58:23.986764 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnrpg\" (UniqueName: \"kubernetes.io/projected/a78962d2-88e3-48d2-ac18-9cdf6503a3a1-kube-api-access-xnrpg\") pod \"redhat-marketplace-4lrr8\" (UID: \"a78962d2-88e3-48d2-ac18-9cdf6503a3a1\") " pod="openshift-marketplace/redhat-marketplace-4lrr8" Dec 09 08:58:23 crc kubenswrapper[4786]: I1209 08:58:23.987002 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a78962d2-88e3-48d2-ac18-9cdf6503a3a1-utilities\") pod \"redhat-marketplace-4lrr8\" (UID: \"a78962d2-88e3-48d2-ac18-9cdf6503a3a1\") " pod="openshift-marketplace/redhat-marketplace-4lrr8" Dec 09 08:58:23 crc kubenswrapper[4786]: I1209 08:58:23.987184 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a78962d2-88e3-48d2-ac18-9cdf6503a3a1-catalog-content\") pod \"redhat-marketplace-4lrr8\" (UID: \"a78962d2-88e3-48d2-ac18-9cdf6503a3a1\") " pod="openshift-marketplace/redhat-marketplace-4lrr8" Dec 09 08:58:24 crc kubenswrapper[4786]: I1209 08:58:24.087756 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnrpg\" (UniqueName: \"kubernetes.io/projected/a78962d2-88e3-48d2-ac18-9cdf6503a3a1-kube-api-access-xnrpg\") pod \"redhat-marketplace-4lrr8\" (UID: \"a78962d2-88e3-48d2-ac18-9cdf6503a3a1\") " pod="openshift-marketplace/redhat-marketplace-4lrr8" Dec 09 08:58:24 crc kubenswrapper[4786]: I1209 08:58:24.087883 4786 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a78962d2-88e3-48d2-ac18-9cdf6503a3a1-utilities\") pod \"redhat-marketplace-4lrr8\" (UID: \"a78962d2-88e3-48d2-ac18-9cdf6503a3a1\") " pod="openshift-marketplace/redhat-marketplace-4lrr8" Dec 09 08:58:24 crc kubenswrapper[4786]: I1209 08:58:24.087931 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a78962d2-88e3-48d2-ac18-9cdf6503a3a1-catalog-content\") pod \"redhat-marketplace-4lrr8\" (UID: \"a78962d2-88e3-48d2-ac18-9cdf6503a3a1\") " pod="openshift-marketplace/redhat-marketplace-4lrr8" Dec 09 08:58:24 crc kubenswrapper[4786]: I1209 08:58:24.088539 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a78962d2-88e3-48d2-ac18-9cdf6503a3a1-utilities\") pod \"redhat-marketplace-4lrr8\" (UID: \"a78962d2-88e3-48d2-ac18-9cdf6503a3a1\") " pod="openshift-marketplace/redhat-marketplace-4lrr8" Dec 09 08:58:24 crc kubenswrapper[4786]: I1209 08:58:24.088732 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a78962d2-88e3-48d2-ac18-9cdf6503a3a1-catalog-content\") pod \"redhat-marketplace-4lrr8\" (UID: \"a78962d2-88e3-48d2-ac18-9cdf6503a3a1\") " pod="openshift-marketplace/redhat-marketplace-4lrr8" Dec 09 08:58:24 crc kubenswrapper[4786]: I1209 08:58:24.111731 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnrpg\" (UniqueName: \"kubernetes.io/projected/a78962d2-88e3-48d2-ac18-9cdf6503a3a1-kube-api-access-xnrpg\") pod \"redhat-marketplace-4lrr8\" (UID: \"a78962d2-88e3-48d2-ac18-9cdf6503a3a1\") " pod="openshift-marketplace/redhat-marketplace-4lrr8" Dec 09 08:58:24 crc kubenswrapper[4786]: I1209 08:58:24.299407 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4lrr8" Dec 09 08:58:24 crc kubenswrapper[4786]: I1209 08:58:24.792232 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4lrr8"] Dec 09 08:58:24 crc kubenswrapper[4786]: W1209 08:58:24.809665 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda78962d2_88e3_48d2_ac18_9cdf6503a3a1.slice/crio-a2378b6ecf778a84d70f5e02e8b2ba4ef9d5637b2306c4a472d6ad05e6b20ab2 WatchSource:0}: Error finding container a2378b6ecf778a84d70f5e02e8b2ba4ef9d5637b2306c4a472d6ad05e6b20ab2: Status 404 returned error can't find the container with id a2378b6ecf778a84d70f5e02e8b2ba4ef9d5637b2306c4a472d6ad05e6b20ab2 Dec 09 08:58:24 crc kubenswrapper[4786]: I1209 08:58:24.977292 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4lrr8" event={"ID":"a78962d2-88e3-48d2-ac18-9cdf6503a3a1","Type":"ContainerStarted","Data":"a2378b6ecf778a84d70f5e02e8b2ba4ef9d5637b2306c4a472d6ad05e6b20ab2"} Dec 09 08:58:25 crc kubenswrapper[4786]: I1209 08:58:25.985916 4786 generic.go:334] "Generic (PLEG): container finished" podID="a78962d2-88e3-48d2-ac18-9cdf6503a3a1" containerID="1ccf818e703077293c90d3946644782ddaf7f7c2aa1e297df9f3d877aea5ea7a" exitCode=0 Dec 09 08:58:25 crc kubenswrapper[4786]: I1209 08:58:25.986193 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4lrr8" event={"ID":"a78962d2-88e3-48d2-ac18-9cdf6503a3a1","Type":"ContainerDied","Data":"1ccf818e703077293c90d3946644782ddaf7f7c2aa1e297df9f3d877aea5ea7a"} Dec 09 08:58:26 crc kubenswrapper[4786]: I1209 08:58:26.995847 4786 generic.go:334] "Generic (PLEG): container finished" podID="a78962d2-88e3-48d2-ac18-9cdf6503a3a1" containerID="9dcd9722a4eed05897b92456c046e848fcc948f46d348cd7445e75039c6151c8" exitCode=0 Dec 09 08:58:26 crc kubenswrapper[4786]: I1209 
08:58:26.995954 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4lrr8" event={"ID":"a78962d2-88e3-48d2-ac18-9cdf6503a3a1","Type":"ContainerDied","Data":"9dcd9722a4eed05897b92456c046e848fcc948f46d348cd7445e75039c6151c8"} Dec 09 08:58:28 crc kubenswrapper[4786]: I1209 08:58:28.004456 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4lrr8" event={"ID":"a78962d2-88e3-48d2-ac18-9cdf6503a3a1","Type":"ContainerStarted","Data":"c5935de6e5a67a50b3caef938f50e393eaf177b183a870447e99b5825d8bafb0"} Dec 09 08:58:29 crc kubenswrapper[4786]: I1209 08:58:29.028945 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4lrr8" podStartSLOduration=4.494680712 podStartE2EDuration="6.028922758s" podCreationTimestamp="2025-12-09 08:58:23 +0000 UTC" firstStartedPulling="2025-12-09 08:58:25.989143033 +0000 UTC m=+871.872764259" lastFinishedPulling="2025-12-09 08:58:27.523385079 +0000 UTC m=+873.407006305" observedRunningTime="2025-12-09 08:58:29.027550923 +0000 UTC m=+874.911172149" watchObservedRunningTime="2025-12-09 08:58:29.028922758 +0000 UTC m=+874.912543984" Dec 09 08:58:33 crc kubenswrapper[4786]: I1209 08:58:33.952320 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ggvl8"] Dec 09 08:58:33 crc kubenswrapper[4786]: I1209 08:58:33.954498 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ggvl8" Dec 09 08:58:33 crc kubenswrapper[4786]: I1209 08:58:33.957023 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ggvl8"] Dec 09 08:58:34 crc kubenswrapper[4786]: I1209 08:58:34.058973 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8w89\" (UniqueName: \"kubernetes.io/projected/e28f084a-8957-4e50-87bc-1da44c04cf90-kube-api-access-l8w89\") pod \"community-operators-ggvl8\" (UID: \"e28f084a-8957-4e50-87bc-1da44c04cf90\") " pod="openshift-marketplace/community-operators-ggvl8" Dec 09 08:58:34 crc kubenswrapper[4786]: I1209 08:58:34.059041 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e28f084a-8957-4e50-87bc-1da44c04cf90-catalog-content\") pod \"community-operators-ggvl8\" (UID: \"e28f084a-8957-4e50-87bc-1da44c04cf90\") " pod="openshift-marketplace/community-operators-ggvl8" Dec 09 08:58:34 crc kubenswrapper[4786]: I1209 08:58:34.059100 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e28f084a-8957-4e50-87bc-1da44c04cf90-utilities\") pod \"community-operators-ggvl8\" (UID: \"e28f084a-8957-4e50-87bc-1da44c04cf90\") " pod="openshift-marketplace/community-operators-ggvl8" Dec 09 08:58:34 crc kubenswrapper[4786]: I1209 08:58:34.162651 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e28f084a-8957-4e50-87bc-1da44c04cf90-catalog-content\") pod \"community-operators-ggvl8\" (UID: \"e28f084a-8957-4e50-87bc-1da44c04cf90\") " pod="openshift-marketplace/community-operators-ggvl8" Dec 09 08:58:34 crc kubenswrapper[4786]: I1209 08:58:34.162727 4786 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e28f084a-8957-4e50-87bc-1da44c04cf90-utilities\") pod \"community-operators-ggvl8\" (UID: \"e28f084a-8957-4e50-87bc-1da44c04cf90\") " pod="openshift-marketplace/community-operators-ggvl8" Dec 09 08:58:34 crc kubenswrapper[4786]: I1209 08:58:34.162792 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8w89\" (UniqueName: \"kubernetes.io/projected/e28f084a-8957-4e50-87bc-1da44c04cf90-kube-api-access-l8w89\") pod \"community-operators-ggvl8\" (UID: \"e28f084a-8957-4e50-87bc-1da44c04cf90\") " pod="openshift-marketplace/community-operators-ggvl8" Dec 09 08:58:34 crc kubenswrapper[4786]: I1209 08:58:34.163788 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e28f084a-8957-4e50-87bc-1da44c04cf90-catalog-content\") pod \"community-operators-ggvl8\" (UID: \"e28f084a-8957-4e50-87bc-1da44c04cf90\") " pod="openshift-marketplace/community-operators-ggvl8" Dec 09 08:58:34 crc kubenswrapper[4786]: I1209 08:58:34.164028 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e28f084a-8957-4e50-87bc-1da44c04cf90-utilities\") pod \"community-operators-ggvl8\" (UID: \"e28f084a-8957-4e50-87bc-1da44c04cf90\") " pod="openshift-marketplace/community-operators-ggvl8" Dec 09 08:58:34 crc kubenswrapper[4786]: I1209 08:58:34.186567 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8w89\" (UniqueName: \"kubernetes.io/projected/e28f084a-8957-4e50-87bc-1da44c04cf90-kube-api-access-l8w89\") pod \"community-operators-ggvl8\" (UID: \"e28f084a-8957-4e50-87bc-1da44c04cf90\") " pod="openshift-marketplace/community-operators-ggvl8" Dec 09 08:58:34 crc kubenswrapper[4786]: I1209 08:58:34.299933 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4lrr8" Dec 09 08:58:34 crc kubenswrapper[4786]: I1209 08:58:34.300283 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4lrr8" Dec 09 08:58:34 crc kubenswrapper[4786]: I1209 08:58:34.308689 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ggvl8" Dec 09 08:58:34 crc kubenswrapper[4786]: I1209 08:58:34.440318 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4lrr8" Dec 09 08:58:35 crc kubenswrapper[4786]: I1209 08:58:35.049627 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-sgqjs" podUID="bbd9538f-43ff-4c20-80ab-dcf783b7a558" containerName="console" containerID="cri-o://5a36f343cd769a9079c6fb621a90c0193d00871395604c68e734bebcc98a448b" gracePeriod=15 Dec 09 08:58:35 crc kubenswrapper[4786]: I1209 08:58:35.103045 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4lrr8" Dec 09 08:58:35 crc kubenswrapper[4786]: I1209 08:58:35.157250 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ggvl8"] Dec 09 08:58:35 crc kubenswrapper[4786]: I1209 08:58:35.420346 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-sgqjs_bbd9538f-43ff-4c20-80ab-dcf783b7a558/console/0.log" Dec 09 08:58:35 crc kubenswrapper[4786]: I1209 08:58:35.421159 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-sgqjs" Dec 09 08:58:35 crc kubenswrapper[4786]: I1209 08:58:35.487411 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bbd9538f-43ff-4c20-80ab-dcf783b7a558-trusted-ca-bundle\") pod \"bbd9538f-43ff-4c20-80ab-dcf783b7a558\" (UID: \"bbd9538f-43ff-4c20-80ab-dcf783b7a558\") " Dec 09 08:58:35 crc kubenswrapper[4786]: I1209 08:58:35.487965 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bbd9538f-43ff-4c20-80ab-dcf783b7a558-console-oauth-config\") pod \"bbd9538f-43ff-4c20-80ab-dcf783b7a558\" (UID: \"bbd9538f-43ff-4c20-80ab-dcf783b7a558\") " Dec 09 08:58:35 crc kubenswrapper[4786]: I1209 08:58:35.488031 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bbd9538f-43ff-4c20-80ab-dcf783b7a558-service-ca\") pod \"bbd9538f-43ff-4c20-80ab-dcf783b7a558\" (UID: \"bbd9538f-43ff-4c20-80ab-dcf783b7a558\") " Dec 09 08:58:35 crc kubenswrapper[4786]: I1209 08:58:35.488074 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lszt\" (UniqueName: \"kubernetes.io/projected/bbd9538f-43ff-4c20-80ab-dcf783b7a558-kube-api-access-7lszt\") pod \"bbd9538f-43ff-4c20-80ab-dcf783b7a558\" (UID: \"bbd9538f-43ff-4c20-80ab-dcf783b7a558\") " Dec 09 08:58:35 crc kubenswrapper[4786]: I1209 08:58:35.488183 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bbd9538f-43ff-4c20-80ab-dcf783b7a558-console-serving-cert\") pod \"bbd9538f-43ff-4c20-80ab-dcf783b7a558\" (UID: \"bbd9538f-43ff-4c20-80ab-dcf783b7a558\") " Dec 09 08:58:35 crc kubenswrapper[4786]: I1209 08:58:35.488218 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bbd9538f-43ff-4c20-80ab-dcf783b7a558-oauth-serving-cert\") pod \"bbd9538f-43ff-4c20-80ab-dcf783b7a558\" (UID: \"bbd9538f-43ff-4c20-80ab-dcf783b7a558\") " Dec 09 08:58:35 crc kubenswrapper[4786]: I1209 08:58:35.488237 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bbd9538f-43ff-4c20-80ab-dcf783b7a558-console-config\") pod \"bbd9538f-43ff-4c20-80ab-dcf783b7a558\" (UID: \"bbd9538f-43ff-4c20-80ab-dcf783b7a558\") " Dec 09 08:58:35 crc kubenswrapper[4786]: I1209 08:58:35.489259 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbd9538f-43ff-4c20-80ab-dcf783b7a558-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "bbd9538f-43ff-4c20-80ab-dcf783b7a558" (UID: "bbd9538f-43ff-4c20-80ab-dcf783b7a558"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 08:58:35 crc kubenswrapper[4786]: I1209 08:58:35.489323 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbd9538f-43ff-4c20-80ab-dcf783b7a558-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "bbd9538f-43ff-4c20-80ab-dcf783b7a558" (UID: "bbd9538f-43ff-4c20-80ab-dcf783b7a558"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 08:58:35 crc kubenswrapper[4786]: I1209 08:58:35.490141 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbd9538f-43ff-4c20-80ab-dcf783b7a558-console-config" (OuterVolumeSpecName: "console-config") pod "bbd9538f-43ff-4c20-80ab-dcf783b7a558" (UID: "bbd9538f-43ff-4c20-80ab-dcf783b7a558"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 08:58:35 crc kubenswrapper[4786]: I1209 08:58:35.490262 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbd9538f-43ff-4c20-80ab-dcf783b7a558-service-ca" (OuterVolumeSpecName: "service-ca") pod "bbd9538f-43ff-4c20-80ab-dcf783b7a558" (UID: "bbd9538f-43ff-4c20-80ab-dcf783b7a558"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 08:58:35 crc kubenswrapper[4786]: I1209 08:58:35.496005 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbd9538f-43ff-4c20-80ab-dcf783b7a558-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "bbd9538f-43ff-4c20-80ab-dcf783b7a558" (UID: "bbd9538f-43ff-4c20-80ab-dcf783b7a558"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 08:58:35 crc kubenswrapper[4786]: I1209 08:58:35.496359 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbd9538f-43ff-4c20-80ab-dcf783b7a558-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "bbd9538f-43ff-4c20-80ab-dcf783b7a558" (UID: "bbd9538f-43ff-4c20-80ab-dcf783b7a558"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 08:58:35 crc kubenswrapper[4786]: I1209 08:58:35.503157 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbd9538f-43ff-4c20-80ab-dcf783b7a558-kube-api-access-7lszt" (OuterVolumeSpecName: "kube-api-access-7lszt") pod "bbd9538f-43ff-4c20-80ab-dcf783b7a558" (UID: "bbd9538f-43ff-4c20-80ab-dcf783b7a558"). InnerVolumeSpecName "kube-api-access-7lszt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 08:58:35 crc kubenswrapper[4786]: I1209 08:58:35.589910 4786 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bbd9538f-43ff-4c20-80ab-dcf783b7a558-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 08:58:35 crc kubenswrapper[4786]: I1209 08:58:35.589962 4786 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bbd9538f-43ff-4c20-80ab-dcf783b7a558-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 09 08:58:35 crc kubenswrapper[4786]: I1209 08:58:35.589973 4786 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bbd9538f-43ff-4c20-80ab-dcf783b7a558-service-ca\") on node \"crc\" DevicePath \"\"" Dec 09 08:58:35 crc kubenswrapper[4786]: I1209 08:58:35.589987 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lszt\" (UniqueName: \"kubernetes.io/projected/bbd9538f-43ff-4c20-80ab-dcf783b7a558-kube-api-access-7lszt\") on node \"crc\" DevicePath \"\"" Dec 09 08:58:35 crc kubenswrapper[4786]: I1209 08:58:35.589998 4786 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bbd9538f-43ff-4c20-80ab-dcf783b7a558-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 08:58:35 crc kubenswrapper[4786]: I1209 08:58:35.590011 4786 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bbd9538f-43ff-4c20-80ab-dcf783b7a558-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 09 08:58:35 crc kubenswrapper[4786]: I1209 08:58:35.590021 4786 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bbd9538f-43ff-4c20-80ab-dcf783b7a558-console-config\") on node \"crc\" DevicePath \"\"" Dec 09 08:58:36 crc 
kubenswrapper[4786]: I1209 08:58:36.053812 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-sgqjs_bbd9538f-43ff-4c20-80ab-dcf783b7a558/console/0.log" Dec 09 08:58:36 crc kubenswrapper[4786]: I1209 08:58:36.055040 4786 generic.go:334] "Generic (PLEG): container finished" podID="bbd9538f-43ff-4c20-80ab-dcf783b7a558" containerID="5a36f343cd769a9079c6fb621a90c0193d00871395604c68e734bebcc98a448b" exitCode=2 Dec 09 08:58:36 crc kubenswrapper[4786]: I1209 08:58:36.055113 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-sgqjs" event={"ID":"bbd9538f-43ff-4c20-80ab-dcf783b7a558","Type":"ContainerDied","Data":"5a36f343cd769a9079c6fb621a90c0193d00871395604c68e734bebcc98a448b"} Dec 09 08:58:36 crc kubenswrapper[4786]: I1209 08:58:36.055159 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-sgqjs" event={"ID":"bbd9538f-43ff-4c20-80ab-dcf783b7a558","Type":"ContainerDied","Data":"d14c4112ededad85a35113947d00e909e22e93f8df07a36216b374a2300b2fc4"} Dec 09 08:58:36 crc kubenswrapper[4786]: I1209 08:58:36.055176 4786 scope.go:117] "RemoveContainer" containerID="5a36f343cd769a9079c6fb621a90c0193d00871395604c68e734bebcc98a448b" Dec 09 08:58:36 crc kubenswrapper[4786]: I1209 08:58:36.055183 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-sgqjs" Dec 09 08:58:36 crc kubenswrapper[4786]: I1209 08:58:36.059324 4786 generic.go:334] "Generic (PLEG): container finished" podID="e28f084a-8957-4e50-87bc-1da44c04cf90" containerID="7f93082021c9bae4f09714b41b703c89d4c9339b31c428b5853ccb5d6d4934d5" exitCode=0 Dec 09 08:58:36 crc kubenswrapper[4786]: I1209 08:58:36.059389 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ggvl8" event={"ID":"e28f084a-8957-4e50-87bc-1da44c04cf90","Type":"ContainerDied","Data":"7f93082021c9bae4f09714b41b703c89d4c9339b31c428b5853ccb5d6d4934d5"} Dec 09 08:58:36 crc kubenswrapper[4786]: I1209 08:58:36.059443 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ggvl8" event={"ID":"e28f084a-8957-4e50-87bc-1da44c04cf90","Type":"ContainerStarted","Data":"33981dbefb08d701331f59069242fcbcc85c8ece875b7b232ce1504a854bf814"} Dec 09 08:58:36 crc kubenswrapper[4786]: I1209 08:58:36.077299 4786 scope.go:117] "RemoveContainer" containerID="5a36f343cd769a9079c6fb621a90c0193d00871395604c68e734bebcc98a448b" Dec 09 08:58:36 crc kubenswrapper[4786]: E1209 08:58:36.078692 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a36f343cd769a9079c6fb621a90c0193d00871395604c68e734bebcc98a448b\": container with ID starting with 5a36f343cd769a9079c6fb621a90c0193d00871395604c68e734bebcc98a448b not found: ID does not exist" containerID="5a36f343cd769a9079c6fb621a90c0193d00871395604c68e734bebcc98a448b" Dec 09 08:58:36 crc kubenswrapper[4786]: I1209 08:58:36.078730 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a36f343cd769a9079c6fb621a90c0193d00871395604c68e734bebcc98a448b"} err="failed to get container status \"5a36f343cd769a9079c6fb621a90c0193d00871395604c68e734bebcc98a448b\": rpc error: code = NotFound desc = could not 
find container \"5a36f343cd769a9079c6fb621a90c0193d00871395604c68e734bebcc98a448b\": container with ID starting with 5a36f343cd769a9079c6fb621a90c0193d00871395604c68e734bebcc98a448b not found: ID does not exist" Dec 09 08:58:36 crc kubenswrapper[4786]: I1209 08:58:36.099035 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-sgqjs"] Dec 09 08:58:36 crc kubenswrapper[4786]: I1209 08:58:36.102196 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-sgqjs"] Dec 09 08:58:36 crc kubenswrapper[4786]: I1209 08:58:36.717865 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4lrr8"] Dec 09 08:58:37 crc kubenswrapper[4786]: I1209 08:58:37.071836 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4lrr8" podUID="a78962d2-88e3-48d2-ac18-9cdf6503a3a1" containerName="registry-server" containerID="cri-o://c5935de6e5a67a50b3caef938f50e393eaf177b183a870447e99b5825d8bafb0" gracePeriod=2 Dec 09 08:58:37 crc kubenswrapper[4786]: I1209 08:58:37.072985 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ggvl8" event={"ID":"e28f084a-8957-4e50-87bc-1da44c04cf90","Type":"ContainerStarted","Data":"bee605f695badc50262a46439b94f8fdb6cd60de7f0fb30baf1612d77c4f784b"} Dec 09 08:58:37 crc kubenswrapper[4786]: I1209 08:58:37.197406 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbd9538f-43ff-4c20-80ab-dcf783b7a558" path="/var/lib/kubelet/pods/bbd9538f-43ff-4c20-80ab-dcf783b7a558/volumes" Dec 09 08:58:39 crc kubenswrapper[4786]: I1209 08:58:39.087766 4786 generic.go:334] "Generic (PLEG): container finished" podID="a78962d2-88e3-48d2-ac18-9cdf6503a3a1" containerID="c5935de6e5a67a50b3caef938f50e393eaf177b183a870447e99b5825d8bafb0" exitCode=0 Dec 09 08:58:39 crc kubenswrapper[4786]: I1209 08:58:39.087815 4786 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4lrr8" event={"ID":"a78962d2-88e3-48d2-ac18-9cdf6503a3a1","Type":"ContainerDied","Data":"c5935de6e5a67a50b3caef938f50e393eaf177b183a870447e99b5825d8bafb0"} Dec 09 08:58:39 crc kubenswrapper[4786]: I1209 08:58:39.089996 4786 generic.go:334] "Generic (PLEG): container finished" podID="e28f084a-8957-4e50-87bc-1da44c04cf90" containerID="bee605f695badc50262a46439b94f8fdb6cd60de7f0fb30baf1612d77c4f784b" exitCode=0 Dec 09 08:58:39 crc kubenswrapper[4786]: I1209 08:58:39.090039 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ggvl8" event={"ID":"e28f084a-8957-4e50-87bc-1da44c04cf90","Type":"ContainerDied","Data":"bee605f695badc50262a46439b94f8fdb6cd60de7f0fb30baf1612d77c4f784b"} Dec 09 08:58:39 crc kubenswrapper[4786]: I1209 08:58:39.220943 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83x54px"] Dec 09 08:58:39 crc kubenswrapper[4786]: E1209 08:58:39.221174 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbd9538f-43ff-4c20-80ab-dcf783b7a558" containerName="console" Dec 09 08:58:39 crc kubenswrapper[4786]: I1209 08:58:39.221187 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbd9538f-43ff-4c20-80ab-dcf783b7a558" containerName="console" Dec 09 08:58:39 crc kubenswrapper[4786]: I1209 08:58:39.221306 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbd9538f-43ff-4c20-80ab-dcf783b7a558" containerName="console" Dec 09 08:58:39 crc kubenswrapper[4786]: I1209 08:58:39.222062 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83x54px" Dec 09 08:58:39 crc kubenswrapper[4786]: I1209 08:58:39.225225 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 09 08:58:39 crc kubenswrapper[4786]: I1209 08:58:39.232583 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83x54px"] Dec 09 08:58:39 crc kubenswrapper[4786]: I1209 08:58:39.357120 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/de7189e4-4dca-44a2-95d6-520828fc914f-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83x54px\" (UID: \"de7189e4-4dca-44a2-95d6-520828fc914f\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83x54px" Dec 09 08:58:39 crc kubenswrapper[4786]: I1209 08:58:39.357270 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqzl7\" (UniqueName: \"kubernetes.io/projected/de7189e4-4dca-44a2-95d6-520828fc914f-kube-api-access-jqzl7\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83x54px\" (UID: \"de7189e4-4dca-44a2-95d6-520828fc914f\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83x54px" Dec 09 08:58:39 crc kubenswrapper[4786]: I1209 08:58:39.357340 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/de7189e4-4dca-44a2-95d6-520828fc914f-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83x54px\" (UID: \"de7189e4-4dca-44a2-95d6-520828fc914f\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83x54px" Dec 09 08:58:39 crc kubenswrapper[4786]: 
I1209 08:58:39.458597 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/de7189e4-4dca-44a2-95d6-520828fc914f-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83x54px\" (UID: \"de7189e4-4dca-44a2-95d6-520828fc914f\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83x54px" Dec 09 08:58:39 crc kubenswrapper[4786]: I1209 08:58:39.458671 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqzl7\" (UniqueName: \"kubernetes.io/projected/de7189e4-4dca-44a2-95d6-520828fc914f-kube-api-access-jqzl7\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83x54px\" (UID: \"de7189e4-4dca-44a2-95d6-520828fc914f\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83x54px" Dec 09 08:58:39 crc kubenswrapper[4786]: I1209 08:58:39.458708 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/de7189e4-4dca-44a2-95d6-520828fc914f-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83x54px\" (UID: \"de7189e4-4dca-44a2-95d6-520828fc914f\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83x54px" Dec 09 08:58:39 crc kubenswrapper[4786]: I1209 08:58:39.459227 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/de7189e4-4dca-44a2-95d6-520828fc914f-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83x54px\" (UID: \"de7189e4-4dca-44a2-95d6-520828fc914f\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83x54px" Dec 09 08:58:39 crc kubenswrapper[4786]: I1209 08:58:39.459300 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/de7189e4-4dca-44a2-95d6-520828fc914f-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83x54px\" (UID: \"de7189e4-4dca-44a2-95d6-520828fc914f\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83x54px" Dec 09 08:58:39 crc kubenswrapper[4786]: I1209 08:58:39.488752 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqzl7\" (UniqueName: \"kubernetes.io/projected/de7189e4-4dca-44a2-95d6-520828fc914f-kube-api-access-jqzl7\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83x54px\" (UID: \"de7189e4-4dca-44a2-95d6-520828fc914f\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83x54px" Dec 09 08:58:39 crc kubenswrapper[4786]: I1209 08:58:39.550286 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83x54px" Dec 09 08:58:39 crc kubenswrapper[4786]: I1209 08:58:39.727206 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vk8bt"] Dec 09 08:58:39 crc kubenswrapper[4786]: I1209 08:58:39.729980 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vk8bt" Dec 09 08:58:39 crc kubenswrapper[4786]: I1209 08:58:39.739269 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vk8bt"] Dec 09 08:58:39 crc kubenswrapper[4786]: I1209 08:58:39.864311 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e374f01d-8aae-4414-9aaf-aaaa5fa398f7-catalog-content\") pod \"certified-operators-vk8bt\" (UID: \"e374f01d-8aae-4414-9aaf-aaaa5fa398f7\") " pod="openshift-marketplace/certified-operators-vk8bt" Dec 09 08:58:39 crc kubenswrapper[4786]: I1209 08:58:39.864388 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jrh8\" (UniqueName: \"kubernetes.io/projected/e374f01d-8aae-4414-9aaf-aaaa5fa398f7-kube-api-access-5jrh8\") pod \"certified-operators-vk8bt\" (UID: \"e374f01d-8aae-4414-9aaf-aaaa5fa398f7\") " pod="openshift-marketplace/certified-operators-vk8bt" Dec 09 08:58:39 crc kubenswrapper[4786]: I1209 08:58:39.864442 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e374f01d-8aae-4414-9aaf-aaaa5fa398f7-utilities\") pod \"certified-operators-vk8bt\" (UID: \"e374f01d-8aae-4414-9aaf-aaaa5fa398f7\") " pod="openshift-marketplace/certified-operators-vk8bt" Dec 09 08:58:39 crc kubenswrapper[4786]: I1209 08:58:39.966061 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e374f01d-8aae-4414-9aaf-aaaa5fa398f7-catalog-content\") pod \"certified-operators-vk8bt\" (UID: \"e374f01d-8aae-4414-9aaf-aaaa5fa398f7\") " pod="openshift-marketplace/certified-operators-vk8bt" Dec 09 08:58:39 crc kubenswrapper[4786]: I1209 08:58:39.966158 4786 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-5jrh8\" (UniqueName: \"kubernetes.io/projected/e374f01d-8aae-4414-9aaf-aaaa5fa398f7-kube-api-access-5jrh8\") pod \"certified-operators-vk8bt\" (UID: \"e374f01d-8aae-4414-9aaf-aaaa5fa398f7\") " pod="openshift-marketplace/certified-operators-vk8bt" Dec 09 08:58:39 crc kubenswrapper[4786]: I1209 08:58:39.966215 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e374f01d-8aae-4414-9aaf-aaaa5fa398f7-utilities\") pod \"certified-operators-vk8bt\" (UID: \"e374f01d-8aae-4414-9aaf-aaaa5fa398f7\") " pod="openshift-marketplace/certified-operators-vk8bt" Dec 09 08:58:39 crc kubenswrapper[4786]: I1209 08:58:39.966738 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e374f01d-8aae-4414-9aaf-aaaa5fa398f7-catalog-content\") pod \"certified-operators-vk8bt\" (UID: \"e374f01d-8aae-4414-9aaf-aaaa5fa398f7\") " pod="openshift-marketplace/certified-operators-vk8bt" Dec 09 08:58:39 crc kubenswrapper[4786]: I1209 08:58:39.966827 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e374f01d-8aae-4414-9aaf-aaaa5fa398f7-utilities\") pod \"certified-operators-vk8bt\" (UID: \"e374f01d-8aae-4414-9aaf-aaaa5fa398f7\") " pod="openshift-marketplace/certified-operators-vk8bt" Dec 09 08:58:39 crc kubenswrapper[4786]: I1209 08:58:39.988655 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jrh8\" (UniqueName: \"kubernetes.io/projected/e374f01d-8aae-4414-9aaf-aaaa5fa398f7-kube-api-access-5jrh8\") pod \"certified-operators-vk8bt\" (UID: \"e374f01d-8aae-4414-9aaf-aaaa5fa398f7\") " pod="openshift-marketplace/certified-operators-vk8bt" Dec 09 08:58:40 crc kubenswrapper[4786]: I1209 08:58:40.045453 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4lrr8" Dec 09 08:58:40 crc kubenswrapper[4786]: I1209 08:58:40.049547 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83x54px"] Dec 09 08:58:40 crc kubenswrapper[4786]: I1209 08:58:40.053804 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vk8bt" Dec 09 08:58:40 crc kubenswrapper[4786]: W1209 08:58:40.057531 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde7189e4_4dca_44a2_95d6_520828fc914f.slice/crio-160739bb12a9702342f8fdae3ff00dec4a33a5b92fc2415fe969c48398a09c11 WatchSource:0}: Error finding container 160739bb12a9702342f8fdae3ff00dec4a33a5b92fc2415fe969c48398a09c11: Status 404 returned error can't find the container with id 160739bb12a9702342f8fdae3ff00dec4a33a5b92fc2415fe969c48398a09c11 Dec 09 08:58:40 crc kubenswrapper[4786]: I1209 08:58:40.102482 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4lrr8" event={"ID":"a78962d2-88e3-48d2-ac18-9cdf6503a3a1","Type":"ContainerDied","Data":"a2378b6ecf778a84d70f5e02e8b2ba4ef9d5637b2306c4a472d6ad05e6b20ab2"} Dec 09 08:58:40 crc kubenswrapper[4786]: I1209 08:58:40.102535 4786 scope.go:117] "RemoveContainer" containerID="c5935de6e5a67a50b3caef938f50e393eaf177b183a870447e99b5825d8bafb0" Dec 09 08:58:40 crc kubenswrapper[4786]: I1209 08:58:40.102655 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4lrr8" Dec 09 08:58:40 crc kubenswrapper[4786]: I1209 08:58:40.105363 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83x54px" event={"ID":"de7189e4-4dca-44a2-95d6-520828fc914f","Type":"ContainerStarted","Data":"160739bb12a9702342f8fdae3ff00dec4a33a5b92fc2415fe969c48398a09c11"} Dec 09 08:58:40 crc kubenswrapper[4786]: I1209 08:58:40.120201 4786 scope.go:117] "RemoveContainer" containerID="9dcd9722a4eed05897b92456c046e848fcc948f46d348cd7445e75039c6151c8" Dec 09 08:58:40 crc kubenswrapper[4786]: I1209 08:58:40.151596 4786 scope.go:117] "RemoveContainer" containerID="1ccf818e703077293c90d3946644782ddaf7f7c2aa1e297df9f3d877aea5ea7a" Dec 09 08:58:40 crc kubenswrapper[4786]: I1209 08:58:40.168090 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a78962d2-88e3-48d2-ac18-9cdf6503a3a1-utilities\") pod \"a78962d2-88e3-48d2-ac18-9cdf6503a3a1\" (UID: \"a78962d2-88e3-48d2-ac18-9cdf6503a3a1\") " Dec 09 08:58:40 crc kubenswrapper[4786]: I1209 08:58:40.168184 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a78962d2-88e3-48d2-ac18-9cdf6503a3a1-catalog-content\") pod \"a78962d2-88e3-48d2-ac18-9cdf6503a3a1\" (UID: \"a78962d2-88e3-48d2-ac18-9cdf6503a3a1\") " Dec 09 08:58:40 crc kubenswrapper[4786]: I1209 08:58:40.168291 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnrpg\" (UniqueName: \"kubernetes.io/projected/a78962d2-88e3-48d2-ac18-9cdf6503a3a1-kube-api-access-xnrpg\") pod \"a78962d2-88e3-48d2-ac18-9cdf6503a3a1\" (UID: \"a78962d2-88e3-48d2-ac18-9cdf6503a3a1\") " Dec 09 08:58:40 crc kubenswrapper[4786]: I1209 08:58:40.169570 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/empty-dir/a78962d2-88e3-48d2-ac18-9cdf6503a3a1-utilities" (OuterVolumeSpecName: "utilities") pod "a78962d2-88e3-48d2-ac18-9cdf6503a3a1" (UID: "a78962d2-88e3-48d2-ac18-9cdf6503a3a1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 08:58:40 crc kubenswrapper[4786]: I1209 08:58:40.175272 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a78962d2-88e3-48d2-ac18-9cdf6503a3a1-kube-api-access-xnrpg" (OuterVolumeSpecName: "kube-api-access-xnrpg") pod "a78962d2-88e3-48d2-ac18-9cdf6503a3a1" (UID: "a78962d2-88e3-48d2-ac18-9cdf6503a3a1"). InnerVolumeSpecName "kube-api-access-xnrpg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 08:58:40 crc kubenswrapper[4786]: I1209 08:58:40.198714 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a78962d2-88e3-48d2-ac18-9cdf6503a3a1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a78962d2-88e3-48d2-ac18-9cdf6503a3a1" (UID: "a78962d2-88e3-48d2-ac18-9cdf6503a3a1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 08:58:40 crc kubenswrapper[4786]: I1209 08:58:40.270654 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a78962d2-88e3-48d2-ac18-9cdf6503a3a1-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 08:58:40 crc kubenswrapper[4786]: I1209 08:58:40.270687 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a78962d2-88e3-48d2-ac18-9cdf6503a3a1-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 08:58:40 crc kubenswrapper[4786]: I1209 08:58:40.270697 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnrpg\" (UniqueName: \"kubernetes.io/projected/a78962d2-88e3-48d2-ac18-9cdf6503a3a1-kube-api-access-xnrpg\") on node \"crc\" DevicePath \"\"" Dec 09 08:58:40 crc kubenswrapper[4786]: I1209 08:58:40.449674 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4lrr8"] Dec 09 08:58:40 crc kubenswrapper[4786]: I1209 08:58:40.464299 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4lrr8"] Dec 09 08:58:40 crc kubenswrapper[4786]: I1209 08:58:40.534412 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vk8bt"] Dec 09 08:58:41 crc kubenswrapper[4786]: I1209 08:58:41.117828 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83x54px" event={"ID":"de7189e4-4dca-44a2-95d6-520828fc914f","Type":"ContainerStarted","Data":"c45c822792676ff762535976aeba7948f6d2190222428912bdac792267f2948f"} Dec 09 08:58:41 crc kubenswrapper[4786]: I1209 08:58:41.118951 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vk8bt" 
event={"ID":"e374f01d-8aae-4414-9aaf-aaaa5fa398f7","Type":"ContainerStarted","Data":"52ebb6c5235a0e4a8b38d0d6fa0da96611460685a2d65dfa08a6cad4ebb77fdf"} Dec 09 08:58:41 crc kubenswrapper[4786]: I1209 08:58:41.195391 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a78962d2-88e3-48d2-ac18-9cdf6503a3a1" path="/var/lib/kubelet/pods/a78962d2-88e3-48d2-ac18-9cdf6503a3a1/volumes" Dec 09 08:58:42 crc kubenswrapper[4786]: I1209 08:58:42.127237 4786 generic.go:334] "Generic (PLEG): container finished" podID="e374f01d-8aae-4414-9aaf-aaaa5fa398f7" containerID="8edae8d3ca9f995e74be9011985432fefcea298c7140d39bd60823cf4edace01" exitCode=0 Dec 09 08:58:42 crc kubenswrapper[4786]: I1209 08:58:42.127330 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vk8bt" event={"ID":"e374f01d-8aae-4414-9aaf-aaaa5fa398f7","Type":"ContainerDied","Data":"8edae8d3ca9f995e74be9011985432fefcea298c7140d39bd60823cf4edace01"} Dec 09 08:58:42 crc kubenswrapper[4786]: I1209 08:58:42.129696 4786 generic.go:334] "Generic (PLEG): container finished" podID="de7189e4-4dca-44a2-95d6-520828fc914f" containerID="c45c822792676ff762535976aeba7948f6d2190222428912bdac792267f2948f" exitCode=0 Dec 09 08:58:42 crc kubenswrapper[4786]: I1209 08:58:42.129734 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83x54px" event={"ID":"de7189e4-4dca-44a2-95d6-520828fc914f","Type":"ContainerDied","Data":"c45c822792676ff762535976aeba7948f6d2190222428912bdac792267f2948f"} Dec 09 08:58:44 crc kubenswrapper[4786]: I1209 08:58:44.227237 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ggvl8" event={"ID":"e28f084a-8957-4e50-87bc-1da44c04cf90","Type":"ContainerStarted","Data":"d4bc1ce97b8a16b9f69a80aa6b74677616533b4417a1e720e755227bc1e84da9"} Dec 09 08:58:44 crc kubenswrapper[4786]: I1209 08:58:44.253974 4786 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ggvl8" podStartSLOduration=3.686659399 podStartE2EDuration="11.253926738s" podCreationTimestamp="2025-12-09 08:58:33 +0000 UTC" firstStartedPulling="2025-12-09 08:58:36.06132313 +0000 UTC m=+881.944944356" lastFinishedPulling="2025-12-09 08:58:43.628590459 +0000 UTC m=+889.512211695" observedRunningTime="2025-12-09 08:58:44.249039896 +0000 UTC m=+890.132661142" watchObservedRunningTime="2025-12-09 08:58:44.253926738 +0000 UTC m=+890.137547964" Dec 09 08:58:44 crc kubenswrapper[4786]: I1209 08:58:44.309848 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ggvl8" Dec 09 08:58:44 crc kubenswrapper[4786]: I1209 08:58:44.309918 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ggvl8" Dec 09 08:58:45 crc kubenswrapper[4786]: I1209 08:58:45.234255 4786 generic.go:334] "Generic (PLEG): container finished" podID="de7189e4-4dca-44a2-95d6-520828fc914f" containerID="2775e152762bf7dc02978c7377b88fe14931dedde481134b3b8a82b966870752" exitCode=0 Dec 09 08:58:45 crc kubenswrapper[4786]: I1209 08:58:45.234336 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83x54px" event={"ID":"de7189e4-4dca-44a2-95d6-520828fc914f","Type":"ContainerDied","Data":"2775e152762bf7dc02978c7377b88fe14931dedde481134b3b8a82b966870752"} Dec 09 08:58:45 crc kubenswrapper[4786]: I1209 08:58:45.238259 4786 generic.go:334] "Generic (PLEG): container finished" podID="e374f01d-8aae-4414-9aaf-aaaa5fa398f7" containerID="8d0713d6332c474c8907a52c51e964064ffd4fa1ab82afca86ab6bc91a9b1869" exitCode=0 Dec 09 08:58:45 crc kubenswrapper[4786]: I1209 08:58:45.238325 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vk8bt" 
event={"ID":"e374f01d-8aae-4414-9aaf-aaaa5fa398f7","Type":"ContainerDied","Data":"8d0713d6332c474c8907a52c51e964064ffd4fa1ab82afca86ab6bc91a9b1869"} Dec 09 08:58:45 crc kubenswrapper[4786]: I1209 08:58:45.360791 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-ggvl8" podUID="e28f084a-8957-4e50-87bc-1da44c04cf90" containerName="registry-server" probeResult="failure" output=< Dec 09 08:58:45 crc kubenswrapper[4786]: timeout: failed to connect service ":50051" within 1s Dec 09 08:58:45 crc kubenswrapper[4786]: > Dec 09 08:58:46 crc kubenswrapper[4786]: I1209 08:58:46.246751 4786 generic.go:334] "Generic (PLEG): container finished" podID="de7189e4-4dca-44a2-95d6-520828fc914f" containerID="72d970aa9976f85537588c3ac53dbc57fc8be9084da80b2c6756dc2211ca1b31" exitCode=0 Dec 09 08:58:46 crc kubenswrapper[4786]: I1209 08:58:46.246850 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83x54px" event={"ID":"de7189e4-4dca-44a2-95d6-520828fc914f","Type":"ContainerDied","Data":"72d970aa9976f85537588c3ac53dbc57fc8be9084da80b2c6756dc2211ca1b31"} Dec 09 08:58:46 crc kubenswrapper[4786]: I1209 08:58:46.249604 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vk8bt" event={"ID":"e374f01d-8aae-4414-9aaf-aaaa5fa398f7","Type":"ContainerStarted","Data":"44f33131193064ebf63330c7a777b5df1e143d97f5aecc41e09ca1955249e9bc"} Dec 09 08:58:46 crc kubenswrapper[4786]: I1209 08:58:46.292578 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vk8bt" podStartSLOduration=3.783733485 podStartE2EDuration="7.292558196s" podCreationTimestamp="2025-12-09 08:58:39 +0000 UTC" firstStartedPulling="2025-12-09 08:58:42.12939189 +0000 UTC m=+888.013013116" lastFinishedPulling="2025-12-09 08:58:45.638216591 +0000 UTC m=+891.521837827" 
observedRunningTime="2025-12-09 08:58:46.288788193 +0000 UTC m=+892.172409429" watchObservedRunningTime="2025-12-09 08:58:46.292558196 +0000 UTC m=+892.176179422" Dec 09 08:58:47 crc kubenswrapper[4786]: I1209 08:58:47.690195 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83x54px" Dec 09 08:58:47 crc kubenswrapper[4786]: I1209 08:58:47.798270 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqzl7\" (UniqueName: \"kubernetes.io/projected/de7189e4-4dca-44a2-95d6-520828fc914f-kube-api-access-jqzl7\") pod \"de7189e4-4dca-44a2-95d6-520828fc914f\" (UID: \"de7189e4-4dca-44a2-95d6-520828fc914f\") " Dec 09 08:58:47 crc kubenswrapper[4786]: I1209 08:58:47.798656 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/de7189e4-4dca-44a2-95d6-520828fc914f-util\") pod \"de7189e4-4dca-44a2-95d6-520828fc914f\" (UID: \"de7189e4-4dca-44a2-95d6-520828fc914f\") " Dec 09 08:58:47 crc kubenswrapper[4786]: I1209 08:58:47.798687 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/de7189e4-4dca-44a2-95d6-520828fc914f-bundle\") pod \"de7189e4-4dca-44a2-95d6-520828fc914f\" (UID: \"de7189e4-4dca-44a2-95d6-520828fc914f\") " Dec 09 08:58:47 crc kubenswrapper[4786]: I1209 08:58:47.799707 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de7189e4-4dca-44a2-95d6-520828fc914f-bundle" (OuterVolumeSpecName: "bundle") pod "de7189e4-4dca-44a2-95d6-520828fc914f" (UID: "de7189e4-4dca-44a2-95d6-520828fc914f"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 08:58:47 crc kubenswrapper[4786]: I1209 08:58:47.803687 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de7189e4-4dca-44a2-95d6-520828fc914f-kube-api-access-jqzl7" (OuterVolumeSpecName: "kube-api-access-jqzl7") pod "de7189e4-4dca-44a2-95d6-520828fc914f" (UID: "de7189e4-4dca-44a2-95d6-520828fc914f"). InnerVolumeSpecName "kube-api-access-jqzl7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 08:58:47 crc kubenswrapper[4786]: I1209 08:58:47.808527 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de7189e4-4dca-44a2-95d6-520828fc914f-util" (OuterVolumeSpecName: "util") pod "de7189e4-4dca-44a2-95d6-520828fc914f" (UID: "de7189e4-4dca-44a2-95d6-520828fc914f"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 08:58:47 crc kubenswrapper[4786]: I1209 08:58:47.900669 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jqzl7\" (UniqueName: \"kubernetes.io/projected/de7189e4-4dca-44a2-95d6-520828fc914f-kube-api-access-jqzl7\") on node \"crc\" DevicePath \"\"" Dec 09 08:58:47 crc kubenswrapper[4786]: I1209 08:58:47.900705 4786 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/de7189e4-4dca-44a2-95d6-520828fc914f-util\") on node \"crc\" DevicePath \"\"" Dec 09 08:58:47 crc kubenswrapper[4786]: I1209 08:58:47.900715 4786 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/de7189e4-4dca-44a2-95d6-520828fc914f-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 08:58:48 crc kubenswrapper[4786]: I1209 08:58:48.267198 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83x54px" 
event={"ID":"de7189e4-4dca-44a2-95d6-520828fc914f","Type":"ContainerDied","Data":"160739bb12a9702342f8fdae3ff00dec4a33a5b92fc2415fe969c48398a09c11"} Dec 09 08:58:48 crc kubenswrapper[4786]: I1209 08:58:48.267277 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="160739bb12a9702342f8fdae3ff00dec4a33a5b92fc2415fe969c48398a09c11" Dec 09 08:58:48 crc kubenswrapper[4786]: I1209 08:58:48.267310 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83x54px" Dec 09 08:58:50 crc kubenswrapper[4786]: I1209 08:58:50.054523 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vk8bt" Dec 09 08:58:50 crc kubenswrapper[4786]: I1209 08:58:50.054578 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vk8bt" Dec 09 08:58:50 crc kubenswrapper[4786]: I1209 08:58:50.123198 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vk8bt" Dec 09 08:58:54 crc kubenswrapper[4786]: I1209 08:58:54.394360 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ggvl8" Dec 09 08:58:54 crc kubenswrapper[4786]: I1209 08:58:54.448222 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ggvl8" Dec 09 08:58:55 crc kubenswrapper[4786]: I1209 08:58:55.830256 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-7bd64dc485-knxdp"] Dec 09 08:58:55 crc kubenswrapper[4786]: E1209 08:58:55.831310 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de7189e4-4dca-44a2-95d6-520828fc914f" containerName="pull" Dec 09 08:58:55 crc kubenswrapper[4786]: I1209 
08:58:55.831399 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="de7189e4-4dca-44a2-95d6-520828fc914f" containerName="pull" Dec 09 08:58:55 crc kubenswrapper[4786]: E1209 08:58:55.831498 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a78962d2-88e3-48d2-ac18-9cdf6503a3a1" containerName="extract-utilities" Dec 09 08:58:55 crc kubenswrapper[4786]: I1209 08:58:55.831558 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="a78962d2-88e3-48d2-ac18-9cdf6503a3a1" containerName="extract-utilities" Dec 09 08:58:55 crc kubenswrapper[4786]: E1209 08:58:55.831628 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a78962d2-88e3-48d2-ac18-9cdf6503a3a1" containerName="registry-server" Dec 09 08:58:55 crc kubenswrapper[4786]: I1209 08:58:55.831691 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="a78962d2-88e3-48d2-ac18-9cdf6503a3a1" containerName="registry-server" Dec 09 08:58:55 crc kubenswrapper[4786]: E1209 08:58:55.831751 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a78962d2-88e3-48d2-ac18-9cdf6503a3a1" containerName="extract-content" Dec 09 08:58:55 crc kubenswrapper[4786]: I1209 08:58:55.831803 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="a78962d2-88e3-48d2-ac18-9cdf6503a3a1" containerName="extract-content" Dec 09 08:58:55 crc kubenswrapper[4786]: E1209 08:58:55.831862 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de7189e4-4dca-44a2-95d6-520828fc914f" containerName="extract" Dec 09 08:58:55 crc kubenswrapper[4786]: I1209 08:58:55.831914 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="de7189e4-4dca-44a2-95d6-520828fc914f" containerName="extract" Dec 09 08:58:55 crc kubenswrapper[4786]: E1209 08:58:55.831980 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de7189e4-4dca-44a2-95d6-520828fc914f" containerName="util" Dec 09 08:58:55 crc kubenswrapper[4786]: I1209 08:58:55.832049 4786 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="de7189e4-4dca-44a2-95d6-520828fc914f" containerName="util" Dec 09 08:58:55 crc kubenswrapper[4786]: I1209 08:58:55.832247 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="a78962d2-88e3-48d2-ac18-9cdf6503a3a1" containerName="registry-server" Dec 09 08:58:55 crc kubenswrapper[4786]: I1209 08:58:55.832314 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="de7189e4-4dca-44a2-95d6-520828fc914f" containerName="extract" Dec 09 08:58:55 crc kubenswrapper[4786]: I1209 08:58:55.832861 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7bd64dc485-knxdp" Dec 09 08:58:55 crc kubenswrapper[4786]: I1209 08:58:55.839543 4786 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Dec 09 08:58:55 crc kubenswrapper[4786]: I1209 08:58:55.839604 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Dec 09 08:58:55 crc kubenswrapper[4786]: I1209 08:58:55.839816 4786 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-mj8hq" Dec 09 08:58:55 crc kubenswrapper[4786]: I1209 08:58:55.839627 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Dec 09 08:58:55 crc kubenswrapper[4786]: I1209 08:58:55.839691 4786 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Dec 09 08:58:55 crc kubenswrapper[4786]: I1209 08:58:55.850532 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7bd64dc485-knxdp"] Dec 09 08:58:55 crc kubenswrapper[4786]: I1209 08:58:55.870569 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/2415f03a-5796-4063-aa38-791dc0a76fec-apiservice-cert\") pod \"metallb-operator-controller-manager-7bd64dc485-knxdp\" (UID: \"2415f03a-5796-4063-aa38-791dc0a76fec\") " pod="metallb-system/metallb-operator-controller-manager-7bd64dc485-knxdp" Dec 09 08:58:55 crc kubenswrapper[4786]: I1209 08:58:55.870909 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncsbd\" (UniqueName: \"kubernetes.io/projected/2415f03a-5796-4063-aa38-791dc0a76fec-kube-api-access-ncsbd\") pod \"metallb-operator-controller-manager-7bd64dc485-knxdp\" (UID: \"2415f03a-5796-4063-aa38-791dc0a76fec\") " pod="metallb-system/metallb-operator-controller-manager-7bd64dc485-knxdp" Dec 09 08:58:55 crc kubenswrapper[4786]: I1209 08:58:55.870995 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2415f03a-5796-4063-aa38-791dc0a76fec-webhook-cert\") pod \"metallb-operator-controller-manager-7bd64dc485-knxdp\" (UID: \"2415f03a-5796-4063-aa38-791dc0a76fec\") " pod="metallb-system/metallb-operator-controller-manager-7bd64dc485-knxdp" Dec 09 08:58:55 crc kubenswrapper[4786]: I1209 08:58:55.972084 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2415f03a-5796-4063-aa38-791dc0a76fec-apiservice-cert\") pod \"metallb-operator-controller-manager-7bd64dc485-knxdp\" (UID: \"2415f03a-5796-4063-aa38-791dc0a76fec\") " pod="metallb-system/metallb-operator-controller-manager-7bd64dc485-knxdp" Dec 09 08:58:55 crc kubenswrapper[4786]: I1209 08:58:55.972199 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncsbd\" (UniqueName: \"kubernetes.io/projected/2415f03a-5796-4063-aa38-791dc0a76fec-kube-api-access-ncsbd\") pod \"metallb-operator-controller-manager-7bd64dc485-knxdp\" (UID: 
\"2415f03a-5796-4063-aa38-791dc0a76fec\") " pod="metallb-system/metallb-operator-controller-manager-7bd64dc485-knxdp" Dec 09 08:58:55 crc kubenswrapper[4786]: I1209 08:58:55.972237 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2415f03a-5796-4063-aa38-791dc0a76fec-webhook-cert\") pod \"metallb-operator-controller-manager-7bd64dc485-knxdp\" (UID: \"2415f03a-5796-4063-aa38-791dc0a76fec\") " pod="metallb-system/metallb-operator-controller-manager-7bd64dc485-knxdp" Dec 09 08:58:55 crc kubenswrapper[4786]: I1209 08:58:55.982480 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2415f03a-5796-4063-aa38-791dc0a76fec-apiservice-cert\") pod \"metallb-operator-controller-manager-7bd64dc485-knxdp\" (UID: \"2415f03a-5796-4063-aa38-791dc0a76fec\") " pod="metallb-system/metallb-operator-controller-manager-7bd64dc485-knxdp" Dec 09 08:58:55 crc kubenswrapper[4786]: I1209 08:58:55.982491 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2415f03a-5796-4063-aa38-791dc0a76fec-webhook-cert\") pod \"metallb-operator-controller-manager-7bd64dc485-knxdp\" (UID: \"2415f03a-5796-4063-aa38-791dc0a76fec\") " pod="metallb-system/metallb-operator-controller-manager-7bd64dc485-knxdp" Dec 09 08:58:55 crc kubenswrapper[4786]: I1209 08:58:55.994956 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ncsbd\" (UniqueName: \"kubernetes.io/projected/2415f03a-5796-4063-aa38-791dc0a76fec-kube-api-access-ncsbd\") pod \"metallb-operator-controller-manager-7bd64dc485-knxdp\" (UID: \"2415f03a-5796-4063-aa38-791dc0a76fec\") " pod="metallb-system/metallb-operator-controller-manager-7bd64dc485-knxdp" Dec 09 08:58:56 crc kubenswrapper[4786]: I1209 08:58:56.155087 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7bd64dc485-knxdp" Dec 09 08:58:56 crc kubenswrapper[4786]: I1209 08:58:56.183853 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-5786b6d7bd-6ntsj"] Dec 09 08:58:56 crc kubenswrapper[4786]: I1209 08:58:56.184729 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5786b6d7bd-6ntsj" Dec 09 08:58:56 crc kubenswrapper[4786]: I1209 08:58:56.186782 4786 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-km85q" Dec 09 08:58:56 crc kubenswrapper[4786]: I1209 08:58:56.197876 4786 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 09 08:58:56 crc kubenswrapper[4786]: I1209 08:58:56.198360 4786 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Dec 09 08:58:56 crc kubenswrapper[4786]: I1209 08:58:56.211115 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5786b6d7bd-6ntsj"] Dec 09 08:58:56 crc kubenswrapper[4786]: I1209 08:58:56.275752 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b90cfb1a-dd9c-4877-81a6-c5fd3eb60c21-webhook-cert\") pod \"metallb-operator-webhook-server-5786b6d7bd-6ntsj\" (UID: \"b90cfb1a-dd9c-4877-81a6-c5fd3eb60c21\") " pod="metallb-system/metallb-operator-webhook-server-5786b6d7bd-6ntsj" Dec 09 08:58:56 crc kubenswrapper[4786]: I1209 08:58:56.275819 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b90cfb1a-dd9c-4877-81a6-c5fd3eb60c21-apiservice-cert\") pod 
\"metallb-operator-webhook-server-5786b6d7bd-6ntsj\" (UID: \"b90cfb1a-dd9c-4877-81a6-c5fd3eb60c21\") " pod="metallb-system/metallb-operator-webhook-server-5786b6d7bd-6ntsj" Dec 09 08:58:56 crc kubenswrapper[4786]: I1209 08:58:56.275869 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klhgj\" (UniqueName: \"kubernetes.io/projected/b90cfb1a-dd9c-4877-81a6-c5fd3eb60c21-kube-api-access-klhgj\") pod \"metallb-operator-webhook-server-5786b6d7bd-6ntsj\" (UID: \"b90cfb1a-dd9c-4877-81a6-c5fd3eb60c21\") " pod="metallb-system/metallb-operator-webhook-server-5786b6d7bd-6ntsj" Dec 09 08:58:56 crc kubenswrapper[4786]: I1209 08:58:56.378076 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-klhgj\" (UniqueName: \"kubernetes.io/projected/b90cfb1a-dd9c-4877-81a6-c5fd3eb60c21-kube-api-access-klhgj\") pod \"metallb-operator-webhook-server-5786b6d7bd-6ntsj\" (UID: \"b90cfb1a-dd9c-4877-81a6-c5fd3eb60c21\") " pod="metallb-system/metallb-operator-webhook-server-5786b6d7bd-6ntsj" Dec 09 08:58:56 crc kubenswrapper[4786]: I1209 08:58:56.378536 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b90cfb1a-dd9c-4877-81a6-c5fd3eb60c21-webhook-cert\") pod \"metallb-operator-webhook-server-5786b6d7bd-6ntsj\" (UID: \"b90cfb1a-dd9c-4877-81a6-c5fd3eb60c21\") " pod="metallb-system/metallb-operator-webhook-server-5786b6d7bd-6ntsj" Dec 09 08:58:56 crc kubenswrapper[4786]: I1209 08:58:56.378578 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b90cfb1a-dd9c-4877-81a6-c5fd3eb60c21-apiservice-cert\") pod \"metallb-operator-webhook-server-5786b6d7bd-6ntsj\" (UID: \"b90cfb1a-dd9c-4877-81a6-c5fd3eb60c21\") " pod="metallb-system/metallb-operator-webhook-server-5786b6d7bd-6ntsj" Dec 09 08:58:56 crc kubenswrapper[4786]: I1209 
08:58:56.387975 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/b90cfb1a-dd9c-4877-81a6-c5fd3eb60c21-apiservice-cert\") pod \"metallb-operator-webhook-server-5786b6d7bd-6ntsj\" (UID: \"b90cfb1a-dd9c-4877-81a6-c5fd3eb60c21\") " pod="metallb-system/metallb-operator-webhook-server-5786b6d7bd-6ntsj" Dec 09 08:58:56 crc kubenswrapper[4786]: I1209 08:58:56.400459 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/b90cfb1a-dd9c-4877-81a6-c5fd3eb60c21-webhook-cert\") pod \"metallb-operator-webhook-server-5786b6d7bd-6ntsj\" (UID: \"b90cfb1a-dd9c-4877-81a6-c5fd3eb60c21\") " pod="metallb-system/metallb-operator-webhook-server-5786b6d7bd-6ntsj" Dec 09 08:58:56 crc kubenswrapper[4786]: I1209 08:58:56.412153 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-klhgj\" (UniqueName: \"kubernetes.io/projected/b90cfb1a-dd9c-4877-81a6-c5fd3eb60c21-kube-api-access-klhgj\") pod \"metallb-operator-webhook-server-5786b6d7bd-6ntsj\" (UID: \"b90cfb1a-dd9c-4877-81a6-c5fd3eb60c21\") " pod="metallb-system/metallb-operator-webhook-server-5786b6d7bd-6ntsj" Dec 09 08:58:56 crc kubenswrapper[4786]: I1209 08:58:56.577951 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5786b6d7bd-6ntsj" Dec 09 08:58:56 crc kubenswrapper[4786]: W1209 08:58:56.612680 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2415f03a_5796_4063_aa38_791dc0a76fec.slice/crio-598e0d1ece25d65a6a99e3ecbf181121f1c1cec7d5a29a2a3e22dfc780cab52e WatchSource:0}: Error finding container 598e0d1ece25d65a6a99e3ecbf181121f1c1cec7d5a29a2a3e22dfc780cab52e: Status 404 returned error can't find the container with id 598e0d1ece25d65a6a99e3ecbf181121f1c1cec7d5a29a2a3e22dfc780cab52e Dec 09 08:58:56 crc kubenswrapper[4786]: I1209 08:58:56.617114 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7bd64dc485-knxdp"] Dec 09 08:58:57 crc kubenswrapper[4786]: I1209 08:58:57.101964 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5786b6d7bd-6ntsj"] Dec 09 08:58:57 crc kubenswrapper[4786]: I1209 08:58:57.383961 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7bd64dc485-knxdp" event={"ID":"2415f03a-5796-4063-aa38-791dc0a76fec","Type":"ContainerStarted","Data":"598e0d1ece25d65a6a99e3ecbf181121f1c1cec7d5a29a2a3e22dfc780cab52e"} Dec 09 08:58:57 crc kubenswrapper[4786]: I1209 08:58:57.385114 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5786b6d7bd-6ntsj" event={"ID":"b90cfb1a-dd9c-4877-81a6-c5fd3eb60c21","Type":"ContainerStarted","Data":"9bafc78eaf444abe96b39d309497555b45b608914f60f720f98deacffea5050b"} Dec 09 08:58:57 crc kubenswrapper[4786]: I1209 08:58:57.715882 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ggvl8"] Dec 09 08:58:57 crc kubenswrapper[4786]: I1209 08:58:57.716634 4786 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-marketplace/community-operators-ggvl8" podUID="e28f084a-8957-4e50-87bc-1da44c04cf90" containerName="registry-server" containerID="cri-o://d4bc1ce97b8a16b9f69a80aa6b74677616533b4417a1e720e755227bc1e84da9" gracePeriod=2 Dec 09 08:58:58 crc kubenswrapper[4786]: I1209 08:58:58.407526 4786 generic.go:334] "Generic (PLEG): container finished" podID="e28f084a-8957-4e50-87bc-1da44c04cf90" containerID="d4bc1ce97b8a16b9f69a80aa6b74677616533b4417a1e720e755227bc1e84da9" exitCode=0 Dec 09 08:58:58 crc kubenswrapper[4786]: I1209 08:58:58.407571 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ggvl8" event={"ID":"e28f084a-8957-4e50-87bc-1da44c04cf90","Type":"ContainerDied","Data":"d4bc1ce97b8a16b9f69a80aa6b74677616533b4417a1e720e755227bc1e84da9"} Dec 09 08:58:58 crc kubenswrapper[4786]: I1209 08:58:58.720055 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ggvl8" Dec 09 08:58:58 crc kubenswrapper[4786]: I1209 08:58:58.824228 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e28f084a-8957-4e50-87bc-1da44c04cf90-catalog-content\") pod \"e28f084a-8957-4e50-87bc-1da44c04cf90\" (UID: \"e28f084a-8957-4e50-87bc-1da44c04cf90\") " Dec 09 08:58:58 crc kubenswrapper[4786]: I1209 08:58:58.824292 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8w89\" (UniqueName: \"kubernetes.io/projected/e28f084a-8957-4e50-87bc-1da44c04cf90-kube-api-access-l8w89\") pod \"e28f084a-8957-4e50-87bc-1da44c04cf90\" (UID: \"e28f084a-8957-4e50-87bc-1da44c04cf90\") " Dec 09 08:58:58 crc kubenswrapper[4786]: I1209 08:58:58.824471 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e28f084a-8957-4e50-87bc-1da44c04cf90-utilities\") pod 
\"e28f084a-8957-4e50-87bc-1da44c04cf90\" (UID: \"e28f084a-8957-4e50-87bc-1da44c04cf90\") " Dec 09 08:58:58 crc kubenswrapper[4786]: I1209 08:58:58.825940 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e28f084a-8957-4e50-87bc-1da44c04cf90-utilities" (OuterVolumeSpecName: "utilities") pod "e28f084a-8957-4e50-87bc-1da44c04cf90" (UID: "e28f084a-8957-4e50-87bc-1da44c04cf90"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 08:58:58 crc kubenswrapper[4786]: I1209 08:58:58.845636 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e28f084a-8957-4e50-87bc-1da44c04cf90-kube-api-access-l8w89" (OuterVolumeSpecName: "kube-api-access-l8w89") pod "e28f084a-8957-4e50-87bc-1da44c04cf90" (UID: "e28f084a-8957-4e50-87bc-1da44c04cf90"). InnerVolumeSpecName "kube-api-access-l8w89". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 08:58:58 crc kubenswrapper[4786]: I1209 08:58:58.896100 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e28f084a-8957-4e50-87bc-1da44c04cf90-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e28f084a-8957-4e50-87bc-1da44c04cf90" (UID: "e28f084a-8957-4e50-87bc-1da44c04cf90"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 08:58:59 crc kubenswrapper[4786]: I1209 08:58:59.001803 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e28f084a-8957-4e50-87bc-1da44c04cf90-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 08:58:59 crc kubenswrapper[4786]: I1209 08:58:59.001849 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e28f084a-8957-4e50-87bc-1da44c04cf90-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 08:58:59 crc kubenswrapper[4786]: I1209 08:58:59.001976 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8w89\" (UniqueName: \"kubernetes.io/projected/e28f084a-8957-4e50-87bc-1da44c04cf90-kube-api-access-l8w89\") on node \"crc\" DevicePath \"\"" Dec 09 08:58:59 crc kubenswrapper[4786]: I1209 08:58:59.426217 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ggvl8" event={"ID":"e28f084a-8957-4e50-87bc-1da44c04cf90","Type":"ContainerDied","Data":"33981dbefb08d701331f59069242fcbcc85c8ece875b7b232ce1504a854bf814"} Dec 09 08:58:59 crc kubenswrapper[4786]: I1209 08:58:59.426683 4786 scope.go:117] "RemoveContainer" containerID="d4bc1ce97b8a16b9f69a80aa6b74677616533b4417a1e720e755227bc1e84da9" Dec 09 08:58:59 crc kubenswrapper[4786]: I1209 08:58:59.426486 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ggvl8" Dec 09 08:58:59 crc kubenswrapper[4786]: I1209 08:58:59.451505 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ggvl8"] Dec 09 08:58:59 crc kubenswrapper[4786]: I1209 08:58:59.453299 4786 scope.go:117] "RemoveContainer" containerID="bee605f695badc50262a46439b94f8fdb6cd60de7f0fb30baf1612d77c4f784b" Dec 09 08:58:59 crc kubenswrapper[4786]: I1209 08:58:59.460157 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ggvl8"] Dec 09 08:58:59 crc kubenswrapper[4786]: I1209 08:58:59.484002 4786 scope.go:117] "RemoveContainer" containerID="7f93082021c9bae4f09714b41b703c89d4c9339b31c428b5853ccb5d6d4934d5" Dec 09 08:59:00 crc kubenswrapper[4786]: I1209 08:59:00.110788 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vk8bt" Dec 09 08:59:01 crc kubenswrapper[4786]: I1209 08:59:01.201730 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e28f084a-8957-4e50-87bc-1da44c04cf90" path="/var/lib/kubelet/pods/e28f084a-8957-4e50-87bc-1da44c04cf90/volumes" Dec 09 08:59:02 crc kubenswrapper[4786]: I1209 08:59:02.920322 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vk8bt"] Dec 09 08:59:02 crc kubenswrapper[4786]: I1209 08:59:02.920811 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vk8bt" podUID="e374f01d-8aae-4414-9aaf-aaaa5fa398f7" containerName="registry-server" containerID="cri-o://44f33131193064ebf63330c7a777b5df1e143d97f5aecc41e09ca1955249e9bc" gracePeriod=2 Dec 09 08:59:03 crc kubenswrapper[4786]: I1209 08:59:03.461358 4786 generic.go:334] "Generic (PLEG): container finished" podID="e374f01d-8aae-4414-9aaf-aaaa5fa398f7" 
containerID="44f33131193064ebf63330c7a777b5df1e143d97f5aecc41e09ca1955249e9bc" exitCode=0 Dec 09 08:59:03 crc kubenswrapper[4786]: I1209 08:59:03.461419 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vk8bt" event={"ID":"e374f01d-8aae-4414-9aaf-aaaa5fa398f7","Type":"ContainerDied","Data":"44f33131193064ebf63330c7a777b5df1e143d97f5aecc41e09ca1955249e9bc"} Dec 09 08:59:03 crc kubenswrapper[4786]: I1209 08:59:03.908535 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vk8bt" Dec 09 08:59:04 crc kubenswrapper[4786]: I1209 08:59:04.152396 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e374f01d-8aae-4414-9aaf-aaaa5fa398f7-utilities\") pod \"e374f01d-8aae-4414-9aaf-aaaa5fa398f7\" (UID: \"e374f01d-8aae-4414-9aaf-aaaa5fa398f7\") " Dec 09 08:59:04 crc kubenswrapper[4786]: I1209 08:59:04.152577 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e374f01d-8aae-4414-9aaf-aaaa5fa398f7-catalog-content\") pod \"e374f01d-8aae-4414-9aaf-aaaa5fa398f7\" (UID: \"e374f01d-8aae-4414-9aaf-aaaa5fa398f7\") " Dec 09 08:59:04 crc kubenswrapper[4786]: I1209 08:59:04.152659 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jrh8\" (UniqueName: \"kubernetes.io/projected/e374f01d-8aae-4414-9aaf-aaaa5fa398f7-kube-api-access-5jrh8\") pod \"e374f01d-8aae-4414-9aaf-aaaa5fa398f7\" (UID: \"e374f01d-8aae-4414-9aaf-aaaa5fa398f7\") " Dec 09 08:59:04 crc kubenswrapper[4786]: I1209 08:59:04.154419 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e374f01d-8aae-4414-9aaf-aaaa5fa398f7-utilities" (OuterVolumeSpecName: "utilities") pod "e374f01d-8aae-4414-9aaf-aaaa5fa398f7" (UID: 
"e374f01d-8aae-4414-9aaf-aaaa5fa398f7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 08:59:04 crc kubenswrapper[4786]: I1209 08:59:04.173087 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e374f01d-8aae-4414-9aaf-aaaa5fa398f7-kube-api-access-5jrh8" (OuterVolumeSpecName: "kube-api-access-5jrh8") pod "e374f01d-8aae-4414-9aaf-aaaa5fa398f7" (UID: "e374f01d-8aae-4414-9aaf-aaaa5fa398f7"). InnerVolumeSpecName "kube-api-access-5jrh8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 08:59:04 crc kubenswrapper[4786]: I1209 08:59:04.224637 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e374f01d-8aae-4414-9aaf-aaaa5fa398f7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e374f01d-8aae-4414-9aaf-aaaa5fa398f7" (UID: "e374f01d-8aae-4414-9aaf-aaaa5fa398f7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 08:59:04 crc kubenswrapper[4786]: I1209 08:59:04.254828 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e374f01d-8aae-4414-9aaf-aaaa5fa398f7-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 08:59:04 crc kubenswrapper[4786]: I1209 08:59:04.254955 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e374f01d-8aae-4414-9aaf-aaaa5fa398f7-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 08:59:04 crc kubenswrapper[4786]: I1209 08:59:04.255033 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jrh8\" (UniqueName: \"kubernetes.io/projected/e374f01d-8aae-4414-9aaf-aaaa5fa398f7-kube-api-access-5jrh8\") on node \"crc\" DevicePath \"\"" Dec 09 08:59:04 crc kubenswrapper[4786]: I1209 08:59:04.470825 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/metallb-operator-controller-manager-7bd64dc485-knxdp" event={"ID":"2415f03a-5796-4063-aa38-791dc0a76fec","Type":"ContainerStarted","Data":"d3427bfdc2369e8f1807509922d8a25a9697f5f256ec93365fbe0ef27d173e11"} Dec 09 08:59:04 crc kubenswrapper[4786]: I1209 08:59:04.470988 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7bd64dc485-knxdp" Dec 09 08:59:04 crc kubenswrapper[4786]: I1209 08:59:04.475074 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vk8bt" event={"ID":"e374f01d-8aae-4414-9aaf-aaaa5fa398f7","Type":"ContainerDied","Data":"52ebb6c5235a0e4a8b38d0d6fa0da96611460685a2d65dfa08a6cad4ebb77fdf"} Dec 09 08:59:04 crc kubenswrapper[4786]: I1209 08:59:04.475140 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vk8bt" Dec 09 08:59:04 crc kubenswrapper[4786]: I1209 08:59:04.475137 4786 scope.go:117] "RemoveContainer" containerID="44f33131193064ebf63330c7a777b5df1e143d97f5aecc41e09ca1955249e9bc" Dec 09 08:59:04 crc kubenswrapper[4786]: I1209 08:59:04.476631 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5786b6d7bd-6ntsj" event={"ID":"b90cfb1a-dd9c-4877-81a6-c5fd3eb60c21","Type":"ContainerStarted","Data":"1a3fed90ec8be68ec36bef8b3233876a2a4840ab73f1567dffa13115d30b55ff"} Dec 09 08:59:04 crc kubenswrapper[4786]: I1209 08:59:04.476867 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-5786b6d7bd-6ntsj" Dec 09 08:59:04 crc kubenswrapper[4786]: I1209 08:59:04.498585 4786 scope.go:117] "RemoveContainer" containerID="8d0713d6332c474c8907a52c51e964064ffd4fa1ab82afca86ab6bc91a9b1869" Dec 09 08:59:04 crc kubenswrapper[4786]: I1209 08:59:04.514326 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="metallb-system/metallb-operator-controller-manager-7bd64dc485-knxdp" podStartSLOduration=5.705619057 podStartE2EDuration="9.514302663s" podCreationTimestamp="2025-12-09 08:58:55 +0000 UTC" firstStartedPulling="2025-12-09 08:58:56.62075761 +0000 UTC m=+902.504378846" lastFinishedPulling="2025-12-09 08:59:00.429441226 +0000 UTC m=+906.313062452" observedRunningTime="2025-12-09 08:59:04.501515993 +0000 UTC m=+910.385137249" watchObservedRunningTime="2025-12-09 08:59:04.514302663 +0000 UTC m=+910.397923889" Dec 09 08:59:04 crc kubenswrapper[4786]: I1209 08:59:04.531703 4786 scope.go:117] "RemoveContainer" containerID="8edae8d3ca9f995e74be9011985432fefcea298c7140d39bd60823cf4edace01" Dec 09 08:59:04 crc kubenswrapper[4786]: I1209 08:59:04.549575 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-5786b6d7bd-6ntsj" podStartSLOduration=2.1472541019999998 podStartE2EDuration="8.549553467s" podCreationTimestamp="2025-12-09 08:58:56 +0000 UTC" firstStartedPulling="2025-12-09 08:58:57.135351649 +0000 UTC m=+903.018972885" lastFinishedPulling="2025-12-09 08:59:03.537651014 +0000 UTC m=+909.421272250" observedRunningTime="2025-12-09 08:59:04.535255029 +0000 UTC m=+910.418876285" watchObservedRunningTime="2025-12-09 08:59:04.549553467 +0000 UTC m=+910.433174703" Dec 09 08:59:04 crc kubenswrapper[4786]: I1209 08:59:04.550960 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vk8bt"] Dec 09 08:59:04 crc kubenswrapper[4786]: I1209 08:59:04.555679 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vk8bt"] Dec 09 08:59:05 crc kubenswrapper[4786]: I1209 08:59:05.197512 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e374f01d-8aae-4414-9aaf-aaaa5fa398f7" path="/var/lib/kubelet/pods/e374f01d-8aae-4414-9aaf-aaaa5fa398f7/volumes" Dec 09 08:59:16 crc kubenswrapper[4786]: I1209 
08:59:16.595985 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-5786b6d7bd-6ntsj" Dec 09 08:59:36 crc kubenswrapper[4786]: I1209 08:59:36.160164 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7bd64dc485-knxdp" Dec 09 08:59:36 crc kubenswrapper[4786]: I1209 08:59:36.928656 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-hq5kf"] Dec 09 08:59:36 crc kubenswrapper[4786]: E1209 08:59:36.928967 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e28f084a-8957-4e50-87bc-1da44c04cf90" containerName="extract-content" Dec 09 08:59:36 crc kubenswrapper[4786]: I1209 08:59:36.928987 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="e28f084a-8957-4e50-87bc-1da44c04cf90" containerName="extract-content" Dec 09 08:59:36 crc kubenswrapper[4786]: E1209 08:59:36.929018 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e374f01d-8aae-4414-9aaf-aaaa5fa398f7" containerName="registry-server" Dec 09 08:59:36 crc kubenswrapper[4786]: I1209 08:59:36.929027 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="e374f01d-8aae-4414-9aaf-aaaa5fa398f7" containerName="registry-server" Dec 09 08:59:36 crc kubenswrapper[4786]: E1209 08:59:36.929044 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e374f01d-8aae-4414-9aaf-aaaa5fa398f7" containerName="extract-utilities" Dec 09 08:59:36 crc kubenswrapper[4786]: I1209 08:59:36.929052 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="e374f01d-8aae-4414-9aaf-aaaa5fa398f7" containerName="extract-utilities" Dec 09 08:59:36 crc kubenswrapper[4786]: E1209 08:59:36.929067 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e374f01d-8aae-4414-9aaf-aaaa5fa398f7" containerName="extract-content" Dec 09 08:59:36 crc kubenswrapper[4786]: I1209 08:59:36.929074 4786 
state_mem.go:107] "Deleted CPUSet assignment" podUID="e374f01d-8aae-4414-9aaf-aaaa5fa398f7" containerName="extract-content" Dec 09 08:59:36 crc kubenswrapper[4786]: E1209 08:59:36.929083 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e28f084a-8957-4e50-87bc-1da44c04cf90" containerName="registry-server" Dec 09 08:59:36 crc kubenswrapper[4786]: I1209 08:59:36.929091 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="e28f084a-8957-4e50-87bc-1da44c04cf90" containerName="registry-server" Dec 09 08:59:36 crc kubenswrapper[4786]: E1209 08:59:36.929107 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e28f084a-8957-4e50-87bc-1da44c04cf90" containerName="extract-utilities" Dec 09 08:59:36 crc kubenswrapper[4786]: I1209 08:59:36.929115 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="e28f084a-8957-4e50-87bc-1da44c04cf90" containerName="extract-utilities" Dec 09 08:59:36 crc kubenswrapper[4786]: I1209 08:59:36.929243 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="e28f084a-8957-4e50-87bc-1da44c04cf90" containerName="registry-server" Dec 09 08:59:36 crc kubenswrapper[4786]: I1209 08:59:36.929263 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="e374f01d-8aae-4414-9aaf-aaaa5fa398f7" containerName="registry-server" Dec 09 08:59:36 crc kubenswrapper[4786]: I1209 08:59:36.931692 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-hq5kf" Dec 09 08:59:36 crc kubenswrapper[4786]: I1209 08:59:36.934704 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Dec 09 08:59:36 crc kubenswrapper[4786]: I1209 08:59:36.935189 4786 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Dec 09 08:59:36 crc kubenswrapper[4786]: I1209 08:59:36.935782 4786 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-j4q44" Dec 09 08:59:36 crc kubenswrapper[4786]: I1209 08:59:36.941982 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-kcwjn"] Dec 09 08:59:36 crc kubenswrapper[4786]: I1209 08:59:36.943107 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kcwjn" Dec 09 08:59:36 crc kubenswrapper[4786]: I1209 08:59:36.946935 4786 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Dec 09 08:59:36 crc kubenswrapper[4786]: I1209 08:59:36.951400 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-kcwjn"] Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.038668 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-fpvm7"] Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.039723 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-fpvm7" Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.044778 4786 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.044923 4786 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.045077 4786 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-grj8m" Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.045679 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.049542 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-f8648f98b-dtbr4"] Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.051044 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-f8648f98b-dtbr4" Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.053844 4786 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.055155 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsqkq\" (UniqueName: \"kubernetes.io/projected/e7177936-18a2-4469-bf7b-cd9db745d93f-kube-api-access-bsqkq\") pod \"frr-k8s-webhook-server-7fcb986d4-kcwjn\" (UID: \"e7177936-18a2-4469-bf7b-cd9db745d93f\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kcwjn" Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.055188 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/6e123ec9-00ea-466d-b5f6-79cad587a2cc-frr-startup\") pod \"frr-k8s-hq5kf\" (UID: \"6e123ec9-00ea-466d-b5f6-79cad587a2cc\") " pod="metallb-system/frr-k8s-hq5kf" Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.055207 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8a0660c9-5ef5-4ed7-a304-3690e32fb830-memberlist\") pod \"speaker-fpvm7\" (UID: \"8a0660c9-5ef5-4ed7-a304-3690e32fb830\") " pod="metallb-system/speaker-fpvm7" Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.055224 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/6e123ec9-00ea-466d-b5f6-79cad587a2cc-metrics\") pod \"frr-k8s-hq5kf\" (UID: \"6e123ec9-00ea-466d-b5f6-79cad587a2cc\") " pod="metallb-system/frr-k8s-hq5kf" Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.055254 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: 
\"kubernetes.io/empty-dir/6e123ec9-00ea-466d-b5f6-79cad587a2cc-reloader\") pod \"frr-k8s-hq5kf\" (UID: \"6e123ec9-00ea-466d-b5f6-79cad587a2cc\") " pod="metallb-system/frr-k8s-hq5kf" Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.055272 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqsz6\" (UniqueName: \"kubernetes.io/projected/8a0660c9-5ef5-4ed7-a304-3690e32fb830-kube-api-access-cqsz6\") pod \"speaker-fpvm7\" (UID: \"8a0660c9-5ef5-4ed7-a304-3690e32fb830\") " pod="metallb-system/speaker-fpvm7" Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.055290 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6e123ec9-00ea-466d-b5f6-79cad587a2cc-metrics-certs\") pod \"frr-k8s-hq5kf\" (UID: \"6e123ec9-00ea-466d-b5f6-79cad587a2cc\") " pod="metallb-system/frr-k8s-hq5kf" Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.055308 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8a0660c9-5ef5-4ed7-a304-3690e32fb830-metrics-certs\") pod \"speaker-fpvm7\" (UID: \"8a0660c9-5ef5-4ed7-a304-3690e32fb830\") " pod="metallb-system/speaker-fpvm7" Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.055333 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtjt7\" (UniqueName: \"kubernetes.io/projected/2ba38124-9926-44fd-b5c5-2adb47fd814a-kube-api-access-xtjt7\") pod \"controller-f8648f98b-dtbr4\" (UID: \"2ba38124-9926-44fd-b5c5-2adb47fd814a\") " pod="metallb-system/controller-f8648f98b-dtbr4" Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.055348 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: 
\"kubernetes.io/empty-dir/6e123ec9-00ea-466d-b5f6-79cad587a2cc-frr-conf\") pod \"frr-k8s-hq5kf\" (UID: \"6e123ec9-00ea-466d-b5f6-79cad587a2cc\") " pod="metallb-system/frr-k8s-hq5kf" Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.055365 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ba38124-9926-44fd-b5c5-2adb47fd814a-cert\") pod \"controller-f8648f98b-dtbr4\" (UID: \"2ba38124-9926-44fd-b5c5-2adb47fd814a\") " pod="metallb-system/controller-f8648f98b-dtbr4" Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.055409 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/6e123ec9-00ea-466d-b5f6-79cad587a2cc-frr-sockets\") pod \"frr-k8s-hq5kf\" (UID: \"6e123ec9-00ea-466d-b5f6-79cad587a2cc\") " pod="metallb-system/frr-k8s-hq5kf" Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.055453 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8a0660c9-5ef5-4ed7-a304-3690e32fb830-metallb-excludel2\") pod \"speaker-fpvm7\" (UID: \"8a0660c9-5ef5-4ed7-a304-3690e32fb830\") " pod="metallb-system/speaker-fpvm7" Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.055473 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5fqn\" (UniqueName: \"kubernetes.io/projected/6e123ec9-00ea-466d-b5f6-79cad587a2cc-kube-api-access-p5fqn\") pod \"frr-k8s-hq5kf\" (UID: \"6e123ec9-00ea-466d-b5f6-79cad587a2cc\") " pod="metallb-system/frr-k8s-hq5kf" Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.055488 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/2ba38124-9926-44fd-b5c5-2adb47fd814a-metrics-certs\") pod \"controller-f8648f98b-dtbr4\" (UID: \"2ba38124-9926-44fd-b5c5-2adb47fd814a\") " pod="metallb-system/controller-f8648f98b-dtbr4" Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.055503 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e7177936-18a2-4469-bf7b-cd9db745d93f-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-kcwjn\" (UID: \"e7177936-18a2-4469-bf7b-cd9db745d93f\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kcwjn" Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.069311 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-dtbr4"] Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.156329 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtjt7\" (UniqueName: \"kubernetes.io/projected/2ba38124-9926-44fd-b5c5-2adb47fd814a-kube-api-access-xtjt7\") pod \"controller-f8648f98b-dtbr4\" (UID: \"2ba38124-9926-44fd-b5c5-2adb47fd814a\") " pod="metallb-system/controller-f8648f98b-dtbr4" Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.156372 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/6e123ec9-00ea-466d-b5f6-79cad587a2cc-frr-conf\") pod \"frr-k8s-hq5kf\" (UID: \"6e123ec9-00ea-466d-b5f6-79cad587a2cc\") " pod="metallb-system/frr-k8s-hq5kf" Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.156390 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/6e123ec9-00ea-466d-b5f6-79cad587a2cc-frr-sockets\") pod \"frr-k8s-hq5kf\" (UID: \"6e123ec9-00ea-466d-b5f6-79cad587a2cc\") " pod="metallb-system/frr-k8s-hq5kf" Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.156407 4786 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ba38124-9926-44fd-b5c5-2adb47fd814a-cert\") pod \"controller-f8648f98b-dtbr4\" (UID: \"2ba38124-9926-44fd-b5c5-2adb47fd814a\") " pod="metallb-system/controller-f8648f98b-dtbr4" Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.156449 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8a0660c9-5ef5-4ed7-a304-3690e32fb830-metallb-excludel2\") pod \"speaker-fpvm7\" (UID: \"8a0660c9-5ef5-4ed7-a304-3690e32fb830\") " pod="metallb-system/speaker-fpvm7" Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.156466 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5fqn\" (UniqueName: \"kubernetes.io/projected/6e123ec9-00ea-466d-b5f6-79cad587a2cc-kube-api-access-p5fqn\") pod \"frr-k8s-hq5kf\" (UID: \"6e123ec9-00ea-466d-b5f6-79cad587a2cc\") " pod="metallb-system/frr-k8s-hq5kf" Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.156480 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2ba38124-9926-44fd-b5c5-2adb47fd814a-metrics-certs\") pod \"controller-f8648f98b-dtbr4\" (UID: \"2ba38124-9926-44fd-b5c5-2adb47fd814a\") " pod="metallb-system/controller-f8648f98b-dtbr4" Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.156496 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e7177936-18a2-4469-bf7b-cd9db745d93f-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-kcwjn\" (UID: \"e7177936-18a2-4469-bf7b-cd9db745d93f\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kcwjn" Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.156535 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsqkq\" (UniqueName: 
\"kubernetes.io/projected/e7177936-18a2-4469-bf7b-cd9db745d93f-kube-api-access-bsqkq\") pod \"frr-k8s-webhook-server-7fcb986d4-kcwjn\" (UID: \"e7177936-18a2-4469-bf7b-cd9db745d93f\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kcwjn" Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.156553 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/6e123ec9-00ea-466d-b5f6-79cad587a2cc-frr-startup\") pod \"frr-k8s-hq5kf\" (UID: \"6e123ec9-00ea-466d-b5f6-79cad587a2cc\") " pod="metallb-system/frr-k8s-hq5kf" Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.156567 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8a0660c9-5ef5-4ed7-a304-3690e32fb830-memberlist\") pod \"speaker-fpvm7\" (UID: \"8a0660c9-5ef5-4ed7-a304-3690e32fb830\") " pod="metallb-system/speaker-fpvm7" Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.156581 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/6e123ec9-00ea-466d-b5f6-79cad587a2cc-metrics\") pod \"frr-k8s-hq5kf\" (UID: \"6e123ec9-00ea-466d-b5f6-79cad587a2cc\") " pod="metallb-system/frr-k8s-hq5kf" Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.156610 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/6e123ec9-00ea-466d-b5f6-79cad587a2cc-reloader\") pod \"frr-k8s-hq5kf\" (UID: \"6e123ec9-00ea-466d-b5f6-79cad587a2cc\") " pod="metallb-system/frr-k8s-hq5kf" Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.156626 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqsz6\" (UniqueName: \"kubernetes.io/projected/8a0660c9-5ef5-4ed7-a304-3690e32fb830-kube-api-access-cqsz6\") pod \"speaker-fpvm7\" (UID: \"8a0660c9-5ef5-4ed7-a304-3690e32fb830\") 
" pod="metallb-system/speaker-fpvm7" Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.156645 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6e123ec9-00ea-466d-b5f6-79cad587a2cc-metrics-certs\") pod \"frr-k8s-hq5kf\" (UID: \"6e123ec9-00ea-466d-b5f6-79cad587a2cc\") " pod="metallb-system/frr-k8s-hq5kf" Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.156665 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8a0660c9-5ef5-4ed7-a304-3690e32fb830-metrics-certs\") pod \"speaker-fpvm7\" (UID: \"8a0660c9-5ef5-4ed7-a304-3690e32fb830\") " pod="metallb-system/speaker-fpvm7" Dec 09 08:59:37 crc kubenswrapper[4786]: E1209 08:59:37.156782 4786 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Dec 09 08:59:37 crc kubenswrapper[4786]: E1209 08:59:37.156834 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a0660c9-5ef5-4ed7-a304-3690e32fb830-metrics-certs podName:8a0660c9-5ef5-4ed7-a304-3690e32fb830 nodeName:}" failed. No retries permitted until 2025-12-09 08:59:37.656816204 +0000 UTC m=+943.540437430 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8a0660c9-5ef5-4ed7-a304-3690e32fb830-metrics-certs") pod "speaker-fpvm7" (UID: "8a0660c9-5ef5-4ed7-a304-3690e32fb830") : secret "speaker-certs-secret" not found Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.157652 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/6e123ec9-00ea-466d-b5f6-79cad587a2cc-frr-conf\") pod \"frr-k8s-hq5kf\" (UID: \"6e123ec9-00ea-466d-b5f6-79cad587a2cc\") " pod="metallb-system/frr-k8s-hq5kf" Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.157834 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/6e123ec9-00ea-466d-b5f6-79cad587a2cc-frr-sockets\") pod \"frr-k8s-hq5kf\" (UID: \"6e123ec9-00ea-466d-b5f6-79cad587a2cc\") " pod="metallb-system/frr-k8s-hq5kf" Dec 09 08:59:37 crc kubenswrapper[4786]: E1209 08:59:37.158918 4786 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 09 08:59:37 crc kubenswrapper[4786]: E1209 08:59:37.159005 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a0660c9-5ef5-4ed7-a304-3690e32fb830-memberlist podName:8a0660c9-5ef5-4ed7-a304-3690e32fb830 nodeName:}" failed. No retries permitted until 2025-12-09 08:59:37.658983929 +0000 UTC m=+943.542605155 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/8a0660c9-5ef5-4ed7-a304-3690e32fb830-memberlist") pod "speaker-fpvm7" (UID: "8a0660c9-5ef5-4ed7-a304-3690e32fb830") : secret "metallb-memberlist" not found Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.159305 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/6e123ec9-00ea-466d-b5f6-79cad587a2cc-metrics\") pod \"frr-k8s-hq5kf\" (UID: \"6e123ec9-00ea-466d-b5f6-79cad587a2cc\") " pod="metallb-system/frr-k8s-hq5kf" Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.159482 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/8a0660c9-5ef5-4ed7-a304-3690e32fb830-metallb-excludel2\") pod \"speaker-fpvm7\" (UID: \"8a0660c9-5ef5-4ed7-a304-3690e32fb830\") " pod="metallb-system/speaker-fpvm7" Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.159578 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/6e123ec9-00ea-466d-b5f6-79cad587a2cc-reloader\") pod \"frr-k8s-hq5kf\" (UID: \"6e123ec9-00ea-466d-b5f6-79cad587a2cc\") " pod="metallb-system/frr-k8s-hq5kf" Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.160141 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/6e123ec9-00ea-466d-b5f6-79cad587a2cc-frr-startup\") pod \"frr-k8s-hq5kf\" (UID: \"6e123ec9-00ea-466d-b5f6-79cad587a2cc\") " pod="metallb-system/frr-k8s-hq5kf" Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.165730 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2ba38124-9926-44fd-b5c5-2adb47fd814a-metrics-certs\") pod \"controller-f8648f98b-dtbr4\" (UID: \"2ba38124-9926-44fd-b5c5-2adb47fd814a\") " 
pod="metallb-system/controller-f8648f98b-dtbr4" Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.166717 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2ba38124-9926-44fd-b5c5-2adb47fd814a-cert\") pod \"controller-f8648f98b-dtbr4\" (UID: \"2ba38124-9926-44fd-b5c5-2adb47fd814a\") " pod="metallb-system/controller-f8648f98b-dtbr4" Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.168169 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e7177936-18a2-4469-bf7b-cd9db745d93f-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-kcwjn\" (UID: \"e7177936-18a2-4469-bf7b-cd9db745d93f\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kcwjn" Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.168329 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6e123ec9-00ea-466d-b5f6-79cad587a2cc-metrics-certs\") pod \"frr-k8s-hq5kf\" (UID: \"6e123ec9-00ea-466d-b5f6-79cad587a2cc\") " pod="metallb-system/frr-k8s-hq5kf" Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.178609 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5fqn\" (UniqueName: \"kubernetes.io/projected/6e123ec9-00ea-466d-b5f6-79cad587a2cc-kube-api-access-p5fqn\") pod \"frr-k8s-hq5kf\" (UID: \"6e123ec9-00ea-466d-b5f6-79cad587a2cc\") " pod="metallb-system/frr-k8s-hq5kf" Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.180038 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsqkq\" (UniqueName: \"kubernetes.io/projected/e7177936-18a2-4469-bf7b-cd9db745d93f-kube-api-access-bsqkq\") pod \"frr-k8s-webhook-server-7fcb986d4-kcwjn\" (UID: \"e7177936-18a2-4469-bf7b-cd9db745d93f\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kcwjn" Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.181157 4786 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtjt7\" (UniqueName: \"kubernetes.io/projected/2ba38124-9926-44fd-b5c5-2adb47fd814a-kube-api-access-xtjt7\") pod \"controller-f8648f98b-dtbr4\" (UID: \"2ba38124-9926-44fd-b5c5-2adb47fd814a\") " pod="metallb-system/controller-f8648f98b-dtbr4" Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.196216 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqsz6\" (UniqueName: \"kubernetes.io/projected/8a0660c9-5ef5-4ed7-a304-3690e32fb830-kube-api-access-cqsz6\") pod \"speaker-fpvm7\" (UID: \"8a0660c9-5ef5-4ed7-a304-3690e32fb830\") " pod="metallb-system/speaker-fpvm7" Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.252873 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-hq5kf" Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.261655 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kcwjn" Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.826455 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-f8648f98b-dtbr4" Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.828635 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8a0660c9-5ef5-4ed7-a304-3690e32fb830-memberlist\") pod \"speaker-fpvm7\" (UID: \"8a0660c9-5ef5-4ed7-a304-3690e32fb830\") " pod="metallb-system/speaker-fpvm7" Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.828714 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8a0660c9-5ef5-4ed7-a304-3690e32fb830-metrics-certs\") pod \"speaker-fpvm7\" (UID: \"8a0660c9-5ef5-4ed7-a304-3690e32fb830\") " pod="metallb-system/speaker-fpvm7" Dec 09 08:59:37 crc kubenswrapper[4786]: E1209 08:59:37.829558 4786 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 09 08:59:37 crc kubenswrapper[4786]: E1209 08:59:37.830208 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a0660c9-5ef5-4ed7-a304-3690e32fb830-memberlist podName:8a0660c9-5ef5-4ed7-a304-3690e32fb830 nodeName:}" failed. No retries permitted until 2025-12-09 08:59:38.830183608 +0000 UTC m=+944.713804834 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/8a0660c9-5ef5-4ed7-a304-3690e32fb830-memberlist") pod "speaker-fpvm7" (UID: "8a0660c9-5ef5-4ed7-a304-3690e32fb830") : secret "metallb-memberlist" not found Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.837179 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8a0660c9-5ef5-4ed7-a304-3690e32fb830-metrics-certs\") pod \"speaker-fpvm7\" (UID: \"8a0660c9-5ef5-4ed7-a304-3690e32fb830\") " pod="metallb-system/speaker-fpvm7" Dec 09 08:59:37 crc kubenswrapper[4786]: I1209 08:59:37.944434 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hq5kf" event={"ID":"6e123ec9-00ea-466d-b5f6-79cad587a2cc","Type":"ContainerStarted","Data":"825e00348da2a23e71563bdd42e534340d76491a20f6e6783df2467872a982b9"} Dec 09 08:59:38 crc kubenswrapper[4786]: I1209 08:59:38.037262 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-kcwjn"] Dec 09 08:59:38 crc kubenswrapper[4786]: I1209 08:59:38.117167 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-dtbr4"] Dec 09 08:59:38 crc kubenswrapper[4786]: I1209 08:59:38.882551 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8a0660c9-5ef5-4ed7-a304-3690e32fb830-memberlist\") pod \"speaker-fpvm7\" (UID: \"8a0660c9-5ef5-4ed7-a304-3690e32fb830\") " pod="metallb-system/speaker-fpvm7" Dec 09 08:59:38 crc kubenswrapper[4786]: I1209 08:59:38.894239 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/8a0660c9-5ef5-4ed7-a304-3690e32fb830-memberlist\") pod \"speaker-fpvm7\" (UID: \"8a0660c9-5ef5-4ed7-a304-3690e32fb830\") " pod="metallb-system/speaker-fpvm7" Dec 09 08:59:38 crc kubenswrapper[4786]: I1209 
08:59:38.951830 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kcwjn" event={"ID":"e7177936-18a2-4469-bf7b-cd9db745d93f","Type":"ContainerStarted","Data":"ad28a4a43907b0508debcefe8b3a2482f90efde1ea32fc696709a32824997f8f"} Dec 09 08:59:38 crc kubenswrapper[4786]: I1209 08:59:38.953391 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-dtbr4" event={"ID":"2ba38124-9926-44fd-b5c5-2adb47fd814a","Type":"ContainerStarted","Data":"0bc694f441ac5c41e1d81ddd57eeb77bf604228529685176d859d05cb6719f42"} Dec 09 08:59:38 crc kubenswrapper[4786]: I1209 08:59:38.953433 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-dtbr4" event={"ID":"2ba38124-9926-44fd-b5c5-2adb47fd814a","Type":"ContainerStarted","Data":"8abcbbf611a6bea771c189aaa520d2f97abc50be9fa61000700e3869d3f1194e"} Dec 09 08:59:38 crc kubenswrapper[4786]: I1209 08:59:38.953444 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-dtbr4" event={"ID":"2ba38124-9926-44fd-b5c5-2adb47fd814a","Type":"ContainerStarted","Data":"a095076f0df9dd8b34224a72e86d7142c6b2bb9354107b6871171407f7274546"} Dec 09 08:59:38 crc kubenswrapper[4786]: I1209 08:59:38.954299 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-f8648f98b-dtbr4" Dec 09 08:59:38 crc kubenswrapper[4786]: I1209 08:59:38.974192 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-f8648f98b-dtbr4" podStartSLOduration=1.974175293 podStartE2EDuration="1.974175293s" podCreationTimestamp="2025-12-09 08:59:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:59:38.972992024 +0000 UTC m=+944.856613250" watchObservedRunningTime="2025-12-09 08:59:38.974175293 +0000 UTC m=+944.857796519" Dec 09 
08:59:39 crc kubenswrapper[4786]: I1209 08:59:39.157541 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-fpvm7" Dec 09 08:59:39 crc kubenswrapper[4786]: W1209 08:59:39.185291 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a0660c9_5ef5_4ed7_a304_3690e32fb830.slice/crio-9751800effa4dfb4b35286e2d5dcaae1518e116845dd0d349beea99cdfd31be0 WatchSource:0}: Error finding container 9751800effa4dfb4b35286e2d5dcaae1518e116845dd0d349beea99cdfd31be0: Status 404 returned error can't find the container with id 9751800effa4dfb4b35286e2d5dcaae1518e116845dd0d349beea99cdfd31be0 Dec 09 08:59:40 crc kubenswrapper[4786]: I1209 08:59:40.061294 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-fpvm7" event={"ID":"8a0660c9-5ef5-4ed7-a304-3690e32fb830","Type":"ContainerStarted","Data":"453d5179761e43b8446a604858d58f79d921631e5408f876496f9745ac547896"} Dec 09 08:59:40 crc kubenswrapper[4786]: I1209 08:59:40.061357 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-fpvm7" event={"ID":"8a0660c9-5ef5-4ed7-a304-3690e32fb830","Type":"ContainerStarted","Data":"148e84c280dc170c486e8c8cafc3a0428077b229b6faa91b1c3f28dd5192a06f"} Dec 09 08:59:40 crc kubenswrapper[4786]: I1209 08:59:40.061371 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-fpvm7" event={"ID":"8a0660c9-5ef5-4ed7-a304-3690e32fb830","Type":"ContainerStarted","Data":"9751800effa4dfb4b35286e2d5dcaae1518e116845dd0d349beea99cdfd31be0"} Dec 09 08:59:40 crc kubenswrapper[4786]: I1209 08:59:40.106804 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-fpvm7" podStartSLOduration=3.106761881 podStartE2EDuration="3.106761881s" podCreationTimestamp="2025-12-09 08:59:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-12-09 08:59:40.101821858 +0000 UTC m=+945.985443084" watchObservedRunningTime="2025-12-09 08:59:40.106761881 +0000 UTC m=+945.990383117" Dec 09 08:59:49 crc kubenswrapper[4786]: I1209 08:59:49.159437 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-fpvm7" Dec 09 08:59:49 crc kubenswrapper[4786]: I1209 08:59:49.165280 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-fpvm7" Dec 09 08:59:50 crc kubenswrapper[4786]: I1209 08:59:50.155548 4786 generic.go:334] "Generic (PLEG): container finished" podID="6e123ec9-00ea-466d-b5f6-79cad587a2cc" containerID="d10e9b4b13791843f753a20d4b9a2a1e3c36748686a5accd405c7f259f1350a4" exitCode=0 Dec 09 08:59:50 crc kubenswrapper[4786]: I1209 08:59:50.155666 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hq5kf" event={"ID":"6e123ec9-00ea-466d-b5f6-79cad587a2cc","Type":"ContainerDied","Data":"d10e9b4b13791843f753a20d4b9a2a1e3c36748686a5accd405c7f259f1350a4"} Dec 09 08:59:50 crc kubenswrapper[4786]: I1209 08:59:50.159350 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kcwjn" event={"ID":"e7177936-18a2-4469-bf7b-cd9db745d93f","Type":"ContainerStarted","Data":"d3f61f76917717dd3e1ee774c6a5c8fded95cc9d51e4e6c36b9ed9125baa674e"} Dec 09 08:59:50 crc kubenswrapper[4786]: I1209 08:59:50.206722 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kcwjn" podStartSLOduration=3.264305918 podStartE2EDuration="14.206679438s" podCreationTimestamp="2025-12-09 08:59:36 +0000 UTC" firstStartedPulling="2025-12-09 08:59:38.05356445 +0000 UTC m=+943.937185676" lastFinishedPulling="2025-12-09 08:59:48.99593797 +0000 UTC m=+954.879559196" observedRunningTime="2025-12-09 08:59:50.198887412 +0000 UTC m=+956.082508638" watchObservedRunningTime="2025-12-09 
08:59:50.206679438 +0000 UTC m=+956.090300664" Dec 09 08:59:51 crc kubenswrapper[4786]: I1209 08:59:51.169104 4786 generic.go:334] "Generic (PLEG): container finished" podID="6e123ec9-00ea-466d-b5f6-79cad587a2cc" containerID="e5885e14ac46dc5efa5c4f1f150143c2c5bd5312541c0235d95cdc992f16644e" exitCode=0 Dec 09 08:59:51 crc kubenswrapper[4786]: I1209 08:59:51.169178 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hq5kf" event={"ID":"6e123ec9-00ea-466d-b5f6-79cad587a2cc","Type":"ContainerDied","Data":"e5885e14ac46dc5efa5c4f1f150143c2c5bd5312541c0235d95cdc992f16644e"} Dec 09 08:59:51 crc kubenswrapper[4786]: I1209 08:59:51.169362 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kcwjn" Dec 09 08:59:52 crc kubenswrapper[4786]: I1209 08:59:52.184513 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-mvpqm"] Dec 09 08:59:52 crc kubenswrapper[4786]: I1209 08:59:52.185867 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-mvpqm" Dec 09 08:59:52 crc kubenswrapper[4786]: I1209 08:59:52.189919 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Dec 09 08:59:52 crc kubenswrapper[4786]: I1209 08:59:52.190181 4786 generic.go:334] "Generic (PLEG): container finished" podID="6e123ec9-00ea-466d-b5f6-79cad587a2cc" containerID="50d20a661fb13255e022d38c1200660edfe19922ea0be784c9c5675ded559891" exitCode=0 Dec 09 08:59:52 crc kubenswrapper[4786]: I1209 08:59:52.190284 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hq5kf" event={"ID":"6e123ec9-00ea-466d-b5f6-79cad587a2cc","Type":"ContainerDied","Data":"50d20a661fb13255e022d38c1200660edfe19922ea0be784c9c5675ded559891"} Dec 09 08:59:52 crc kubenswrapper[4786]: I1209 08:59:52.194461 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-lwfjf" Dec 09 08:59:52 crc kubenswrapper[4786]: I1209 08:59:52.194711 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Dec 09 08:59:52 crc kubenswrapper[4786]: I1209 08:59:52.198472 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-mvpqm"] Dec 09 08:59:52 crc kubenswrapper[4786]: I1209 08:59:52.300210 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22l2g\" (UniqueName: \"kubernetes.io/projected/87d4e6d6-35c4-4fc5-985f-82ec89c3bb4d-kube-api-access-22l2g\") pod \"openstack-operator-index-mvpqm\" (UID: \"87d4e6d6-35c4-4fc5-985f-82ec89c3bb4d\") " pod="openstack-operators/openstack-operator-index-mvpqm" Dec 09 08:59:52 crc kubenswrapper[4786]: I1209 08:59:52.402053 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22l2g\" (UniqueName: 
\"kubernetes.io/projected/87d4e6d6-35c4-4fc5-985f-82ec89c3bb4d-kube-api-access-22l2g\") pod \"openstack-operator-index-mvpqm\" (UID: \"87d4e6d6-35c4-4fc5-985f-82ec89c3bb4d\") " pod="openstack-operators/openstack-operator-index-mvpqm" Dec 09 08:59:52 crc kubenswrapper[4786]: I1209 08:59:52.421230 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22l2g\" (UniqueName: \"kubernetes.io/projected/87d4e6d6-35c4-4fc5-985f-82ec89c3bb4d-kube-api-access-22l2g\") pod \"openstack-operator-index-mvpqm\" (UID: \"87d4e6d6-35c4-4fc5-985f-82ec89c3bb4d\") " pod="openstack-operators/openstack-operator-index-mvpqm" Dec 09 08:59:52 crc kubenswrapper[4786]: I1209 08:59:52.503898 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-mvpqm" Dec 09 08:59:52 crc kubenswrapper[4786]: I1209 08:59:52.786177 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-mvpqm"] Dec 09 08:59:52 crc kubenswrapper[4786]: W1209 08:59:52.798635 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87d4e6d6_35c4_4fc5_985f_82ec89c3bb4d.slice/crio-83b2e4d4fc9011ec1d9b08481eac7d39b7368393b8ee31314bd25b6f9d31db24 WatchSource:0}: Error finding container 83b2e4d4fc9011ec1d9b08481eac7d39b7368393b8ee31314bd25b6f9d31db24: Status 404 returned error can't find the container with id 83b2e4d4fc9011ec1d9b08481eac7d39b7368393b8ee31314bd25b6f9d31db24 Dec 09 08:59:53 crc kubenswrapper[4786]: I1209 08:59:53.198029 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-mvpqm" event={"ID":"87d4e6d6-35c4-4fc5-985f-82ec89c3bb4d","Type":"ContainerStarted","Data":"83b2e4d4fc9011ec1d9b08481eac7d39b7368393b8ee31314bd25b6f9d31db24"} Dec 09 08:59:53 crc kubenswrapper[4786]: I1209 08:59:53.202409 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/frr-k8s-hq5kf" event={"ID":"6e123ec9-00ea-466d-b5f6-79cad587a2cc","Type":"ContainerStarted","Data":"d3fa1af67f4e155d8ceaa523be304a42c39127c0f638f89fb9ab967bfe1b89cc"} Dec 09 08:59:53 crc kubenswrapper[4786]: I1209 08:59:53.202492 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hq5kf" event={"ID":"6e123ec9-00ea-466d-b5f6-79cad587a2cc","Type":"ContainerStarted","Data":"5b0e7ce8da76ac3ed50496b0d49c6fa9e53a692e71f0d0853135f9ca168733ef"} Dec 09 08:59:55 crc kubenswrapper[4786]: I1209 08:59:55.246071 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hq5kf" event={"ID":"6e123ec9-00ea-466d-b5f6-79cad587a2cc","Type":"ContainerStarted","Data":"39ed79943c7764d0b8c6e62dcb68effdf588e48089999d904168c0e50d1889f1"} Dec 09 08:59:55 crc kubenswrapper[4786]: I1209 08:59:55.247495 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hq5kf" event={"ID":"6e123ec9-00ea-466d-b5f6-79cad587a2cc","Type":"ContainerStarted","Data":"f1b689f464c99904b6d226f715483ca389dc27d3f1561955b9981ad1528fddbe"} Dec 09 08:59:55 crc kubenswrapper[4786]: I1209 08:59:55.247632 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hq5kf" event={"ID":"6e123ec9-00ea-466d-b5f6-79cad587a2cc","Type":"ContainerStarted","Data":"ec095c18923179705cad1f63d38757f5a554de9b9ea8fa81e745d7cb0ff6a860"} Dec 09 08:59:55 crc kubenswrapper[4786]: I1209 08:59:55.554039 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-mvpqm"] Dec 09 08:59:56 crc kubenswrapper[4786]: I1209 08:59:56.158571 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-p2plv"] Dec 09 08:59:56 crc kubenswrapper[4786]: I1209 08:59:56.160788 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-p2plv" Dec 09 08:59:56 crc kubenswrapper[4786]: I1209 08:59:56.166859 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-p2plv"] Dec 09 08:59:56 crc kubenswrapper[4786]: I1209 08:59:56.262071 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-hq5kf" event={"ID":"6e123ec9-00ea-466d-b5f6-79cad587a2cc","Type":"ContainerStarted","Data":"5072199a1ec3a9ccaa4e2fdbbc926a7d539742fffc48528fd7582c7798acb739"} Dec 09 08:59:56 crc kubenswrapper[4786]: I1209 08:59:56.263495 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-hq5kf" Dec 09 08:59:56 crc kubenswrapper[4786]: I1209 08:59:56.288885 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kztd\" (UniqueName: \"kubernetes.io/projected/66100669-25e6-457a-a856-d7f6ee39b124-kube-api-access-9kztd\") pod \"openstack-operator-index-p2plv\" (UID: \"66100669-25e6-457a-a856-d7f6ee39b124\") " pod="openstack-operators/openstack-operator-index-p2plv" Dec 09 08:59:56 crc kubenswrapper[4786]: I1209 08:59:56.292066 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-hq5kf" podStartSLOduration=9.239790418 podStartE2EDuration="20.292047823s" podCreationTimestamp="2025-12-09 08:59:36 +0000 UTC" firstStartedPulling="2025-12-09 08:59:37.932131705 +0000 UTC m=+943.815752931" lastFinishedPulling="2025-12-09 08:59:48.98438911 +0000 UTC m=+954.868010336" observedRunningTime="2025-12-09 08:59:56.28714098 +0000 UTC m=+962.170762216" watchObservedRunningTime="2025-12-09 08:59:56.292047823 +0000 UTC m=+962.175669049" Dec 09 08:59:56 crc kubenswrapper[4786]: I1209 08:59:56.392132 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kztd\" (UniqueName: 
\"kubernetes.io/projected/66100669-25e6-457a-a856-d7f6ee39b124-kube-api-access-9kztd\") pod \"openstack-operator-index-p2plv\" (UID: \"66100669-25e6-457a-a856-d7f6ee39b124\") " pod="openstack-operators/openstack-operator-index-p2plv" Dec 09 08:59:56 crc kubenswrapper[4786]: I1209 08:59:56.412273 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kztd\" (UniqueName: \"kubernetes.io/projected/66100669-25e6-457a-a856-d7f6ee39b124-kube-api-access-9kztd\") pod \"openstack-operator-index-p2plv\" (UID: \"66100669-25e6-457a-a856-d7f6ee39b124\") " pod="openstack-operators/openstack-operator-index-p2plv" Dec 09 08:59:56 crc kubenswrapper[4786]: I1209 08:59:56.485834 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-p2plv" Dec 09 08:59:57 crc kubenswrapper[4786]: I1209 08:59:57.253412 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-hq5kf" Dec 09 08:59:57 crc kubenswrapper[4786]: I1209 08:59:57.313330 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-hq5kf" Dec 09 08:59:57 crc kubenswrapper[4786]: I1209 08:59:57.833769 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-f8648f98b-dtbr4" Dec 09 08:59:58 crc kubenswrapper[4786]: I1209 08:59:58.293020 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-p2plv"] Dec 09 08:59:58 crc kubenswrapper[4786]: W1209 08:59:58.982735 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66100669_25e6_457a_a856_d7f6ee39b124.slice/crio-95b1ece9032bf9830e3a6b894194f301ea6af51b06905f1eaad90e2bdf19d52f WatchSource:0}: Error finding container 95b1ece9032bf9830e3a6b894194f301ea6af51b06905f1eaad90e2bdf19d52f: Status 404 returned error can't find 
the container with id 95b1ece9032bf9830e3a6b894194f301ea6af51b06905f1eaad90e2bdf19d52f Dec 09 08:59:59 crc kubenswrapper[4786]: I1209 08:59:59.286324 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-mvpqm" event={"ID":"87d4e6d6-35c4-4fc5-985f-82ec89c3bb4d","Type":"ContainerStarted","Data":"e476aeec21d0f870f1eabb6189184b5c0882775c03174211bf6a299e58ff00eb"} Dec 09 08:59:59 crc kubenswrapper[4786]: I1209 08:59:59.286526 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-mvpqm" podUID="87d4e6d6-35c4-4fc5-985f-82ec89c3bb4d" containerName="registry-server" containerID="cri-o://e476aeec21d0f870f1eabb6189184b5c0882775c03174211bf6a299e58ff00eb" gracePeriod=2 Dec 09 08:59:59 crc kubenswrapper[4786]: I1209 08:59:59.290693 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-p2plv" event={"ID":"66100669-25e6-457a-a856-d7f6ee39b124","Type":"ContainerStarted","Data":"03398af679e03416f5907ef7dcaf5200c0167f55c2038f2d1a78eb747a05e288"} Dec 09 08:59:59 crc kubenswrapper[4786]: I1209 08:59:59.290722 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-p2plv" event={"ID":"66100669-25e6-457a-a856-d7f6ee39b124","Type":"ContainerStarted","Data":"95b1ece9032bf9830e3a6b894194f301ea6af51b06905f1eaad90e2bdf19d52f"} Dec 09 08:59:59 crc kubenswrapper[4786]: I1209 08:59:59.309637 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-mvpqm" podStartSLOduration=1.084892276 podStartE2EDuration="7.309615645s" podCreationTimestamp="2025-12-09 08:59:52 +0000 UTC" firstStartedPulling="2025-12-09 08:59:52.801046519 +0000 UTC m=+958.684667745" lastFinishedPulling="2025-12-09 08:59:59.025769888 +0000 UTC m=+964.909391114" observedRunningTime="2025-12-09 08:59:59.307640096 +0000 UTC m=+965.191261332" 
watchObservedRunningTime="2025-12-09 08:59:59.309615645 +0000 UTC m=+965.193236871" Dec 09 08:59:59 crc kubenswrapper[4786]: I1209 08:59:59.323489 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-p2plv" podStartSLOduration=3.248984856 podStartE2EDuration="3.323471233s" podCreationTimestamp="2025-12-09 08:59:56 +0000 UTC" firstStartedPulling="2025-12-09 08:59:59.006436984 +0000 UTC m=+964.890058220" lastFinishedPulling="2025-12-09 08:59:59.080923351 +0000 UTC m=+964.964544597" observedRunningTime="2025-12-09 08:59:59.321050052 +0000 UTC m=+965.204671278" watchObservedRunningTime="2025-12-09 08:59:59.323471233 +0000 UTC m=+965.207092459" Dec 09 08:59:59 crc kubenswrapper[4786]: I1209 08:59:59.737879 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-mvpqm_87d4e6d6-35c4-4fc5-985f-82ec89c3bb4d/registry-server/0.log" Dec 09 08:59:59 crc kubenswrapper[4786]: I1209 08:59:59.738366 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-mvpqm" Dec 09 08:59:59 crc kubenswrapper[4786]: I1209 08:59:59.853109 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22l2g\" (UniqueName: \"kubernetes.io/projected/87d4e6d6-35c4-4fc5-985f-82ec89c3bb4d-kube-api-access-22l2g\") pod \"87d4e6d6-35c4-4fc5-985f-82ec89c3bb4d\" (UID: \"87d4e6d6-35c4-4fc5-985f-82ec89c3bb4d\") " Dec 09 08:59:59 crc kubenswrapper[4786]: I1209 08:59:59.861710 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87d4e6d6-35c4-4fc5-985f-82ec89c3bb4d-kube-api-access-22l2g" (OuterVolumeSpecName: "kube-api-access-22l2g") pod "87d4e6d6-35c4-4fc5-985f-82ec89c3bb4d" (UID: "87d4e6d6-35c4-4fc5-985f-82ec89c3bb4d"). InnerVolumeSpecName "kube-api-access-22l2g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 08:59:59 crc kubenswrapper[4786]: I1209 08:59:59.954815 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22l2g\" (UniqueName: \"kubernetes.io/projected/87d4e6d6-35c4-4fc5-985f-82ec89c3bb4d-kube-api-access-22l2g\") on node \"crc\" DevicePath \"\"" Dec 09 09:00:00 crc kubenswrapper[4786]: I1209 09:00:00.164202 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421180-f7qgw"] Dec 09 09:00:00 crc kubenswrapper[4786]: E1209 09:00:00.164566 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87d4e6d6-35c4-4fc5-985f-82ec89c3bb4d" containerName="registry-server" Dec 09 09:00:00 crc kubenswrapper[4786]: I1209 09:00:00.164623 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="87d4e6d6-35c4-4fc5-985f-82ec89c3bb4d" containerName="registry-server" Dec 09 09:00:00 crc kubenswrapper[4786]: I1209 09:00:00.164803 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="87d4e6d6-35c4-4fc5-985f-82ec89c3bb4d" containerName="registry-server" Dec 09 09:00:00 crc kubenswrapper[4786]: I1209 09:00:00.165375 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421180-f7qgw" Dec 09 09:00:00 crc kubenswrapper[4786]: I1209 09:00:00.172364 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 09 09:00:00 crc kubenswrapper[4786]: I1209 09:00:00.173242 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 09 09:00:00 crc kubenswrapper[4786]: I1209 09:00:00.181160 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421180-f7qgw"] Dec 09 09:00:00 crc kubenswrapper[4786]: I1209 09:00:00.259479 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2c3a02ac-5823-4ccc-a9e2-47485a972d77-config-volume\") pod \"collect-profiles-29421180-f7qgw\" (UID: \"2c3a02ac-5823-4ccc-a9e2-47485a972d77\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421180-f7qgw" Dec 09 09:00:00 crc kubenswrapper[4786]: I1209 09:00:00.259627 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsnbn\" (UniqueName: \"kubernetes.io/projected/2c3a02ac-5823-4ccc-a9e2-47485a972d77-kube-api-access-jsnbn\") pod \"collect-profiles-29421180-f7qgw\" (UID: \"2c3a02ac-5823-4ccc-a9e2-47485a972d77\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421180-f7qgw" Dec 09 09:00:00 crc kubenswrapper[4786]: I1209 09:00:00.259664 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2c3a02ac-5823-4ccc-a9e2-47485a972d77-secret-volume\") pod \"collect-profiles-29421180-f7qgw\" (UID: \"2c3a02ac-5823-4ccc-a9e2-47485a972d77\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29421180-f7qgw" Dec 09 09:00:00 crc kubenswrapper[4786]: I1209 09:00:00.297921 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-mvpqm_87d4e6d6-35c4-4fc5-985f-82ec89c3bb4d/registry-server/0.log" Dec 09 09:00:00 crc kubenswrapper[4786]: I1209 09:00:00.297981 4786 generic.go:334] "Generic (PLEG): container finished" podID="87d4e6d6-35c4-4fc5-985f-82ec89c3bb4d" containerID="e476aeec21d0f870f1eabb6189184b5c0882775c03174211bf6a299e58ff00eb" exitCode=2 Dec 09 09:00:00 crc kubenswrapper[4786]: I1209 09:00:00.298066 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-mvpqm" Dec 09 09:00:00 crc kubenswrapper[4786]: I1209 09:00:00.298058 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-mvpqm" event={"ID":"87d4e6d6-35c4-4fc5-985f-82ec89c3bb4d","Type":"ContainerDied","Data":"e476aeec21d0f870f1eabb6189184b5c0882775c03174211bf6a299e58ff00eb"} Dec 09 09:00:00 crc kubenswrapper[4786]: I1209 09:00:00.298149 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-mvpqm" event={"ID":"87d4e6d6-35c4-4fc5-985f-82ec89c3bb4d","Type":"ContainerDied","Data":"83b2e4d4fc9011ec1d9b08481eac7d39b7368393b8ee31314bd25b6f9d31db24"} Dec 09 09:00:00 crc kubenswrapper[4786]: I1209 09:00:00.298172 4786 scope.go:117] "RemoveContainer" containerID="e476aeec21d0f870f1eabb6189184b5c0882775c03174211bf6a299e58ff00eb" Dec 09 09:00:00 crc kubenswrapper[4786]: I1209 09:00:00.316816 4786 scope.go:117] "RemoveContainer" containerID="e476aeec21d0f870f1eabb6189184b5c0882775c03174211bf6a299e58ff00eb" Dec 09 09:00:00 crc kubenswrapper[4786]: E1209 09:00:00.317413 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e476aeec21d0f870f1eabb6189184b5c0882775c03174211bf6a299e58ff00eb\": container with ID starting with e476aeec21d0f870f1eabb6189184b5c0882775c03174211bf6a299e58ff00eb not found: ID does not exist" containerID="e476aeec21d0f870f1eabb6189184b5c0882775c03174211bf6a299e58ff00eb" Dec 09 09:00:00 crc kubenswrapper[4786]: I1209 09:00:00.317475 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e476aeec21d0f870f1eabb6189184b5c0882775c03174211bf6a299e58ff00eb"} err="failed to get container status \"e476aeec21d0f870f1eabb6189184b5c0882775c03174211bf6a299e58ff00eb\": rpc error: code = NotFound desc = could not find container \"e476aeec21d0f870f1eabb6189184b5c0882775c03174211bf6a299e58ff00eb\": container with ID starting with e476aeec21d0f870f1eabb6189184b5c0882775c03174211bf6a299e58ff00eb not found: ID does not exist" Dec 09 09:00:00 crc kubenswrapper[4786]: I1209 09:00:00.329276 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-mvpqm"] Dec 09 09:00:00 crc kubenswrapper[4786]: I1209 09:00:00.333348 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-mvpqm"] Dec 09 09:00:00 crc kubenswrapper[4786]: I1209 09:00:00.361025 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsnbn\" (UniqueName: \"kubernetes.io/projected/2c3a02ac-5823-4ccc-a9e2-47485a972d77-kube-api-access-jsnbn\") pod \"collect-profiles-29421180-f7qgw\" (UID: \"2c3a02ac-5823-4ccc-a9e2-47485a972d77\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421180-f7qgw" Dec 09 09:00:00 crc kubenswrapper[4786]: I1209 09:00:00.361084 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2c3a02ac-5823-4ccc-a9e2-47485a972d77-secret-volume\") pod \"collect-profiles-29421180-f7qgw\" (UID: \"2c3a02ac-5823-4ccc-a9e2-47485a972d77\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29421180-f7qgw" Dec 09 09:00:00 crc kubenswrapper[4786]: I1209 09:00:00.361163 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2c3a02ac-5823-4ccc-a9e2-47485a972d77-config-volume\") pod \"collect-profiles-29421180-f7qgw\" (UID: \"2c3a02ac-5823-4ccc-a9e2-47485a972d77\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421180-f7qgw" Dec 09 09:00:00 crc kubenswrapper[4786]: I1209 09:00:00.362957 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2c3a02ac-5823-4ccc-a9e2-47485a972d77-config-volume\") pod \"collect-profiles-29421180-f7qgw\" (UID: \"2c3a02ac-5823-4ccc-a9e2-47485a972d77\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421180-f7qgw" Dec 09 09:00:00 crc kubenswrapper[4786]: I1209 09:00:00.367586 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2c3a02ac-5823-4ccc-a9e2-47485a972d77-secret-volume\") pod \"collect-profiles-29421180-f7qgw\" (UID: \"2c3a02ac-5823-4ccc-a9e2-47485a972d77\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421180-f7qgw" Dec 09 09:00:00 crc kubenswrapper[4786]: I1209 09:00:00.380605 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsnbn\" (UniqueName: \"kubernetes.io/projected/2c3a02ac-5823-4ccc-a9e2-47485a972d77-kube-api-access-jsnbn\") pod \"collect-profiles-29421180-f7qgw\" (UID: \"2c3a02ac-5823-4ccc-a9e2-47485a972d77\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421180-f7qgw" Dec 09 09:00:00 crc kubenswrapper[4786]: I1209 09:00:00.485506 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421180-f7qgw" Dec 09 09:00:00 crc kubenswrapper[4786]: I1209 09:00:00.722626 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421180-f7qgw"] Dec 09 09:00:01 crc kubenswrapper[4786]: I1209 09:00:01.196508 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87d4e6d6-35c4-4fc5-985f-82ec89c3bb4d" path="/var/lib/kubelet/pods/87d4e6d6-35c4-4fc5-985f-82ec89c3bb4d/volumes" Dec 09 09:00:01 crc kubenswrapper[4786]: I1209 09:00:01.312075 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421180-f7qgw" event={"ID":"2c3a02ac-5823-4ccc-a9e2-47485a972d77","Type":"ContainerStarted","Data":"1392f76059e3d1028df92301bee2095fcb686153c145011c253c13882c529417"} Dec 09 09:00:01 crc kubenswrapper[4786]: I1209 09:00:01.312138 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421180-f7qgw" event={"ID":"2c3a02ac-5823-4ccc-a9e2-47485a972d77","Type":"ContainerStarted","Data":"b6a5b413b90b09c0aaa324f8cd4cf2bebad125379fd31608e680d1e1dafd3ba2"} Dec 09 09:00:01 crc kubenswrapper[4786]: I1209 09:00:01.336771 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29421180-f7qgw" podStartSLOduration=1.3367513039999999 podStartE2EDuration="1.336751304s" podCreationTimestamp="2025-12-09 09:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:00:01.331758048 +0000 UTC m=+967.215379294" watchObservedRunningTime="2025-12-09 09:00:01.336751304 +0000 UTC m=+967.220372530" Dec 09 09:00:02 crc kubenswrapper[4786]: I1209 09:00:02.325507 4786 generic.go:334] "Generic (PLEG): container finished" 
podID="2c3a02ac-5823-4ccc-a9e2-47485a972d77" containerID="1392f76059e3d1028df92301bee2095fcb686153c145011c253c13882c529417" exitCode=0 Dec 09 09:00:02 crc kubenswrapper[4786]: I1209 09:00:02.325634 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421180-f7qgw" event={"ID":"2c3a02ac-5823-4ccc-a9e2-47485a972d77","Type":"ContainerDied","Data":"1392f76059e3d1028df92301bee2095fcb686153c145011c253c13882c529417"} Dec 09 09:00:03 crc kubenswrapper[4786]: I1209 09:00:03.785348 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421180-f7qgw" Dec 09 09:00:03 crc kubenswrapper[4786]: I1209 09:00:03.834470 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2c3a02ac-5823-4ccc-a9e2-47485a972d77-config-volume\") pod \"2c3a02ac-5823-4ccc-a9e2-47485a972d77\" (UID: \"2c3a02ac-5823-4ccc-a9e2-47485a972d77\") " Dec 09 09:00:03 crc kubenswrapper[4786]: I1209 09:00:03.834563 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsnbn\" (UniqueName: \"kubernetes.io/projected/2c3a02ac-5823-4ccc-a9e2-47485a972d77-kube-api-access-jsnbn\") pod \"2c3a02ac-5823-4ccc-a9e2-47485a972d77\" (UID: \"2c3a02ac-5823-4ccc-a9e2-47485a972d77\") " Dec 09 09:00:03 crc kubenswrapper[4786]: I1209 09:00:03.834637 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2c3a02ac-5823-4ccc-a9e2-47485a972d77-secret-volume\") pod \"2c3a02ac-5823-4ccc-a9e2-47485a972d77\" (UID: \"2c3a02ac-5823-4ccc-a9e2-47485a972d77\") " Dec 09 09:00:03 crc kubenswrapper[4786]: I1209 09:00:03.835264 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c3a02ac-5823-4ccc-a9e2-47485a972d77-config-volume" 
(OuterVolumeSpecName: "config-volume") pod "2c3a02ac-5823-4ccc-a9e2-47485a972d77" (UID: "2c3a02ac-5823-4ccc-a9e2-47485a972d77"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:00:03 crc kubenswrapper[4786]: I1209 09:00:03.840056 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c3a02ac-5823-4ccc-a9e2-47485a972d77-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2c3a02ac-5823-4ccc-a9e2-47485a972d77" (UID: "2c3a02ac-5823-4ccc-a9e2-47485a972d77"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:00:03 crc kubenswrapper[4786]: I1209 09:00:03.840121 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c3a02ac-5823-4ccc-a9e2-47485a972d77-kube-api-access-jsnbn" (OuterVolumeSpecName: "kube-api-access-jsnbn") pod "2c3a02ac-5823-4ccc-a9e2-47485a972d77" (UID: "2c3a02ac-5823-4ccc-a9e2-47485a972d77"). InnerVolumeSpecName "kube-api-access-jsnbn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:00:03 crc kubenswrapper[4786]: I1209 09:00:03.935781 4786 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2c3a02ac-5823-4ccc-a9e2-47485a972d77-config-volume\") on node \"crc\" DevicePath \"\"" Dec 09 09:00:03 crc kubenswrapper[4786]: I1209 09:00:03.935820 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jsnbn\" (UniqueName: \"kubernetes.io/projected/2c3a02ac-5823-4ccc-a9e2-47485a972d77-kube-api-access-jsnbn\") on node \"crc\" DevicePath \"\"" Dec 09 09:00:03 crc kubenswrapper[4786]: I1209 09:00:03.935831 4786 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2c3a02ac-5823-4ccc-a9e2-47485a972d77-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 09 09:00:04 crc kubenswrapper[4786]: I1209 09:00:04.340812 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421180-f7qgw" event={"ID":"2c3a02ac-5823-4ccc-a9e2-47485a972d77","Type":"ContainerDied","Data":"b6a5b413b90b09c0aaa324f8cd4cf2bebad125379fd31608e680d1e1dafd3ba2"} Dec 09 09:00:04 crc kubenswrapper[4786]: I1209 09:00:04.340857 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6a5b413b90b09c0aaa324f8cd4cf2bebad125379fd31608e680d1e1dafd3ba2" Dec 09 09:00:04 crc kubenswrapper[4786]: I1209 09:00:04.340881 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421180-f7qgw" Dec 09 09:00:06 crc kubenswrapper[4786]: I1209 09:00:06.486650 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-p2plv" Dec 09 09:00:06 crc kubenswrapper[4786]: I1209 09:00:06.487691 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-p2plv" Dec 09 09:00:06 crc kubenswrapper[4786]: I1209 09:00:06.522831 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-p2plv" Dec 09 09:00:07 crc kubenswrapper[4786]: I1209 09:00:07.257307 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-hq5kf" Dec 09 09:00:07 crc kubenswrapper[4786]: I1209 09:00:07.265952 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-kcwjn" Dec 09 09:00:07 crc kubenswrapper[4786]: I1209 09:00:07.387603 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-p2plv" Dec 09 09:00:12 crc kubenswrapper[4786]: I1209 09:00:12.802661 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/563e5c6436268da7c348fcfe39963b8cd787fa11692ddc9f2134935157pxwcp"] Dec 09 09:00:12 crc kubenswrapper[4786]: E1209 09:00:12.804620 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c3a02ac-5823-4ccc-a9e2-47485a972d77" containerName="collect-profiles" Dec 09 09:00:12 crc kubenswrapper[4786]: I1209 09:00:12.804744 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c3a02ac-5823-4ccc-a9e2-47485a972d77" containerName="collect-profiles" Dec 09 09:00:12 crc kubenswrapper[4786]: I1209 09:00:12.805021 4786 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2c3a02ac-5823-4ccc-a9e2-47485a972d77" containerName="collect-profiles" Dec 09 09:00:12 crc kubenswrapper[4786]: I1209 09:00:12.806212 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/563e5c6436268da7c348fcfe39963b8cd787fa11692ddc9f2134935157pxwcp" Dec 09 09:00:12 crc kubenswrapper[4786]: I1209 09:00:12.817189 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/563e5c6436268da7c348fcfe39963b8cd787fa11692ddc9f2134935157pxwcp"] Dec 09 09:00:12 crc kubenswrapper[4786]: I1209 09:00:12.820005 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-spnnt" Dec 09 09:00:12 crc kubenswrapper[4786]: I1209 09:00:12.880906 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7x4h\" (UniqueName: \"kubernetes.io/projected/41a3cd42-650d-45c1-9664-5c5de59883ed-kube-api-access-v7x4h\") pod \"563e5c6436268da7c348fcfe39963b8cd787fa11692ddc9f2134935157pxwcp\" (UID: \"41a3cd42-650d-45c1-9664-5c5de59883ed\") " pod="openstack-operators/563e5c6436268da7c348fcfe39963b8cd787fa11692ddc9f2134935157pxwcp" Dec 09 09:00:12 crc kubenswrapper[4786]: I1209 09:00:12.881011 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/41a3cd42-650d-45c1-9664-5c5de59883ed-util\") pod \"563e5c6436268da7c348fcfe39963b8cd787fa11692ddc9f2134935157pxwcp\" (UID: \"41a3cd42-650d-45c1-9664-5c5de59883ed\") " pod="openstack-operators/563e5c6436268da7c348fcfe39963b8cd787fa11692ddc9f2134935157pxwcp" Dec 09 09:00:12 crc kubenswrapper[4786]: I1209 09:00:12.881061 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/41a3cd42-650d-45c1-9664-5c5de59883ed-bundle\") pod \"563e5c6436268da7c348fcfe39963b8cd787fa11692ddc9f2134935157pxwcp\" 
(UID: \"41a3cd42-650d-45c1-9664-5c5de59883ed\") " pod="openstack-operators/563e5c6436268da7c348fcfe39963b8cd787fa11692ddc9f2134935157pxwcp" Dec 09 09:00:12 crc kubenswrapper[4786]: I1209 09:00:12.982583 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/41a3cd42-650d-45c1-9664-5c5de59883ed-util\") pod \"563e5c6436268da7c348fcfe39963b8cd787fa11692ddc9f2134935157pxwcp\" (UID: \"41a3cd42-650d-45c1-9664-5c5de59883ed\") " pod="openstack-operators/563e5c6436268da7c348fcfe39963b8cd787fa11692ddc9f2134935157pxwcp" Dec 09 09:00:12 crc kubenswrapper[4786]: I1209 09:00:12.982668 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/41a3cd42-650d-45c1-9664-5c5de59883ed-bundle\") pod \"563e5c6436268da7c348fcfe39963b8cd787fa11692ddc9f2134935157pxwcp\" (UID: \"41a3cd42-650d-45c1-9664-5c5de59883ed\") " pod="openstack-operators/563e5c6436268da7c348fcfe39963b8cd787fa11692ddc9f2134935157pxwcp" Dec 09 09:00:12 crc kubenswrapper[4786]: I1209 09:00:12.982743 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7x4h\" (UniqueName: \"kubernetes.io/projected/41a3cd42-650d-45c1-9664-5c5de59883ed-kube-api-access-v7x4h\") pod \"563e5c6436268da7c348fcfe39963b8cd787fa11692ddc9f2134935157pxwcp\" (UID: \"41a3cd42-650d-45c1-9664-5c5de59883ed\") " pod="openstack-operators/563e5c6436268da7c348fcfe39963b8cd787fa11692ddc9f2134935157pxwcp" Dec 09 09:00:12 crc kubenswrapper[4786]: I1209 09:00:12.984039 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/41a3cd42-650d-45c1-9664-5c5de59883ed-util\") pod \"563e5c6436268da7c348fcfe39963b8cd787fa11692ddc9f2134935157pxwcp\" (UID: \"41a3cd42-650d-45c1-9664-5c5de59883ed\") " pod="openstack-operators/563e5c6436268da7c348fcfe39963b8cd787fa11692ddc9f2134935157pxwcp" Dec 09 09:00:12 crc 
kubenswrapper[4786]: I1209 09:00:12.984351 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/41a3cd42-650d-45c1-9664-5c5de59883ed-bundle\") pod \"563e5c6436268da7c348fcfe39963b8cd787fa11692ddc9f2134935157pxwcp\" (UID: \"41a3cd42-650d-45c1-9664-5c5de59883ed\") " pod="openstack-operators/563e5c6436268da7c348fcfe39963b8cd787fa11692ddc9f2134935157pxwcp" Dec 09 09:00:13 crc kubenswrapper[4786]: I1209 09:00:13.006157 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7x4h\" (UniqueName: \"kubernetes.io/projected/41a3cd42-650d-45c1-9664-5c5de59883ed-kube-api-access-v7x4h\") pod \"563e5c6436268da7c348fcfe39963b8cd787fa11692ddc9f2134935157pxwcp\" (UID: \"41a3cd42-650d-45c1-9664-5c5de59883ed\") " pod="openstack-operators/563e5c6436268da7c348fcfe39963b8cd787fa11692ddc9f2134935157pxwcp" Dec 09 09:00:13 crc kubenswrapper[4786]: I1209 09:00:13.124521 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/563e5c6436268da7c348fcfe39963b8cd787fa11692ddc9f2134935157pxwcp" Dec 09 09:00:13 crc kubenswrapper[4786]: I1209 09:00:13.737072 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/563e5c6436268da7c348fcfe39963b8cd787fa11692ddc9f2134935157pxwcp"] Dec 09 09:00:14 crc kubenswrapper[4786]: I1209 09:00:14.409320 4786 generic.go:334] "Generic (PLEG): container finished" podID="41a3cd42-650d-45c1-9664-5c5de59883ed" containerID="738287fdcd74ab16059435ca251f42a77423177444ddb423f1624fabe8c38487" exitCode=0 Dec 09 09:00:14 crc kubenswrapper[4786]: I1209 09:00:14.409497 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/563e5c6436268da7c348fcfe39963b8cd787fa11692ddc9f2134935157pxwcp" event={"ID":"41a3cd42-650d-45c1-9664-5c5de59883ed","Type":"ContainerDied","Data":"738287fdcd74ab16059435ca251f42a77423177444ddb423f1624fabe8c38487"} Dec 09 09:00:14 crc kubenswrapper[4786]: I1209 09:00:14.409881 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/563e5c6436268da7c348fcfe39963b8cd787fa11692ddc9f2134935157pxwcp" event={"ID":"41a3cd42-650d-45c1-9664-5c5de59883ed","Type":"ContainerStarted","Data":"8458df37943160abdb22770743d1934a7703a7f1610ca0556116c146026cf78c"} Dec 09 09:00:15 crc kubenswrapper[4786]: I1209 09:00:15.418955 4786 generic.go:334] "Generic (PLEG): container finished" podID="41a3cd42-650d-45c1-9664-5c5de59883ed" containerID="3daaec0a1d78e4bddf46e510e323c72c3cef6428b991c3c78001a15f9f687145" exitCode=0 Dec 09 09:00:15 crc kubenswrapper[4786]: I1209 09:00:15.419004 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/563e5c6436268da7c348fcfe39963b8cd787fa11692ddc9f2134935157pxwcp" event={"ID":"41a3cd42-650d-45c1-9664-5c5de59883ed","Type":"ContainerDied","Data":"3daaec0a1d78e4bddf46e510e323c72c3cef6428b991c3c78001a15f9f687145"} Dec 09 09:00:16 crc kubenswrapper[4786]: I1209 09:00:16.430367 4786 generic.go:334] 
"Generic (PLEG): container finished" podID="41a3cd42-650d-45c1-9664-5c5de59883ed" containerID="5f7b7ba6e9789ce65acc462ce54ce16d763eff66f77d074d4980ad0737abae8a" exitCode=0 Dec 09 09:00:16 crc kubenswrapper[4786]: I1209 09:00:16.430519 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/563e5c6436268da7c348fcfe39963b8cd787fa11692ddc9f2134935157pxwcp" event={"ID":"41a3cd42-650d-45c1-9664-5c5de59883ed","Type":"ContainerDied","Data":"5f7b7ba6e9789ce65acc462ce54ce16d763eff66f77d074d4980ad0737abae8a"} Dec 09 09:00:17 crc kubenswrapper[4786]: I1209 09:00:17.735166 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/563e5c6436268da7c348fcfe39963b8cd787fa11692ddc9f2134935157pxwcp" Dec 09 09:00:17 crc kubenswrapper[4786]: I1209 09:00:17.855915 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/41a3cd42-650d-45c1-9664-5c5de59883ed-bundle\") pod \"41a3cd42-650d-45c1-9664-5c5de59883ed\" (UID: \"41a3cd42-650d-45c1-9664-5c5de59883ed\") " Dec 09 09:00:17 crc kubenswrapper[4786]: I1209 09:00:17.856051 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7x4h\" (UniqueName: \"kubernetes.io/projected/41a3cd42-650d-45c1-9664-5c5de59883ed-kube-api-access-v7x4h\") pod \"41a3cd42-650d-45c1-9664-5c5de59883ed\" (UID: \"41a3cd42-650d-45c1-9664-5c5de59883ed\") " Dec 09 09:00:17 crc kubenswrapper[4786]: I1209 09:00:17.856160 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/41a3cd42-650d-45c1-9664-5c5de59883ed-util\") pod \"41a3cd42-650d-45c1-9664-5c5de59883ed\" (UID: \"41a3cd42-650d-45c1-9664-5c5de59883ed\") " Dec 09 09:00:17 crc kubenswrapper[4786]: I1209 09:00:17.857330 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/41a3cd42-650d-45c1-9664-5c5de59883ed-bundle" (OuterVolumeSpecName: "bundle") pod "41a3cd42-650d-45c1-9664-5c5de59883ed" (UID: "41a3cd42-650d-45c1-9664-5c5de59883ed"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:00:17 crc kubenswrapper[4786]: I1209 09:00:17.866011 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41a3cd42-650d-45c1-9664-5c5de59883ed-kube-api-access-v7x4h" (OuterVolumeSpecName: "kube-api-access-v7x4h") pod "41a3cd42-650d-45c1-9664-5c5de59883ed" (UID: "41a3cd42-650d-45c1-9664-5c5de59883ed"). InnerVolumeSpecName "kube-api-access-v7x4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:00:17 crc kubenswrapper[4786]: I1209 09:00:17.869607 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41a3cd42-650d-45c1-9664-5c5de59883ed-util" (OuterVolumeSpecName: "util") pod "41a3cd42-650d-45c1-9664-5c5de59883ed" (UID: "41a3cd42-650d-45c1-9664-5c5de59883ed"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:00:17 crc kubenswrapper[4786]: I1209 09:00:17.957924 4786 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/41a3cd42-650d-45c1-9664-5c5de59883ed-util\") on node \"crc\" DevicePath \"\"" Dec 09 09:00:17 crc kubenswrapper[4786]: I1209 09:00:17.957958 4786 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/41a3cd42-650d-45c1-9664-5c5de59883ed-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 09:00:17 crc kubenswrapper[4786]: I1209 09:00:17.957967 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7x4h\" (UniqueName: \"kubernetes.io/projected/41a3cd42-650d-45c1-9664-5c5de59883ed-kube-api-access-v7x4h\") on node \"crc\" DevicePath \"\"" Dec 09 09:00:18 crc kubenswrapper[4786]: I1209 09:00:18.445369 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/563e5c6436268da7c348fcfe39963b8cd787fa11692ddc9f2134935157pxwcp" event={"ID":"41a3cd42-650d-45c1-9664-5c5de59883ed","Type":"ContainerDied","Data":"8458df37943160abdb22770743d1934a7703a7f1610ca0556116c146026cf78c"} Dec 09 09:00:18 crc kubenswrapper[4786]: I1209 09:00:18.446068 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8458df37943160abdb22770743d1934a7703a7f1610ca0556116c146026cf78c" Dec 09 09:00:18 crc kubenswrapper[4786]: I1209 09:00:18.445464 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/563e5c6436268da7c348fcfe39963b8cd787fa11692ddc9f2134935157pxwcp" Dec 09 09:00:24 crc kubenswrapper[4786]: I1209 09:00:24.989782 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 09:00:24 crc kubenswrapper[4786]: I1209 09:00:24.990882 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 09:00:26 crc kubenswrapper[4786]: I1209 09:00:26.384399 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-65cff6ddb4-mkfzv"] Dec 09 09:00:26 crc kubenswrapper[4786]: E1209 09:00:26.384798 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41a3cd42-650d-45c1-9664-5c5de59883ed" containerName="extract" Dec 09 09:00:26 crc kubenswrapper[4786]: I1209 09:00:26.384815 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="41a3cd42-650d-45c1-9664-5c5de59883ed" containerName="extract" Dec 09 09:00:26 crc kubenswrapper[4786]: E1209 09:00:26.384840 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41a3cd42-650d-45c1-9664-5c5de59883ed" containerName="util" Dec 09 09:00:26 crc kubenswrapper[4786]: I1209 09:00:26.384848 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="41a3cd42-650d-45c1-9664-5c5de59883ed" containerName="util" Dec 09 09:00:26 crc kubenswrapper[4786]: E1209 09:00:26.384863 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41a3cd42-650d-45c1-9664-5c5de59883ed" 
containerName="pull" Dec 09 09:00:26 crc kubenswrapper[4786]: I1209 09:00:26.384871 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="41a3cd42-650d-45c1-9664-5c5de59883ed" containerName="pull" Dec 09 09:00:26 crc kubenswrapper[4786]: I1209 09:00:26.385003 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="41a3cd42-650d-45c1-9664-5c5de59883ed" containerName="extract" Dec 09 09:00:26 crc kubenswrapper[4786]: I1209 09:00:26.385858 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-65cff6ddb4-mkfzv" Dec 09 09:00:26 crc kubenswrapper[4786]: I1209 09:00:26.389154 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-dvndc" Dec 09 09:00:26 crc kubenswrapper[4786]: I1209 09:00:26.410552 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-65cff6ddb4-mkfzv"] Dec 09 09:00:26 crc kubenswrapper[4786]: I1209 09:00:26.485248 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-676ll\" (UniqueName: \"kubernetes.io/projected/9415679e-cf70-4f02-aaf3-20aa363e9f86-kube-api-access-676ll\") pod \"openstack-operator-controller-operator-65cff6ddb4-mkfzv\" (UID: \"9415679e-cf70-4f02-aaf3-20aa363e9f86\") " pod="openstack-operators/openstack-operator-controller-operator-65cff6ddb4-mkfzv" Dec 09 09:00:26 crc kubenswrapper[4786]: I1209 09:00:26.586999 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-676ll\" (UniqueName: \"kubernetes.io/projected/9415679e-cf70-4f02-aaf3-20aa363e9f86-kube-api-access-676ll\") pod \"openstack-operator-controller-operator-65cff6ddb4-mkfzv\" (UID: \"9415679e-cf70-4f02-aaf3-20aa363e9f86\") " pod="openstack-operators/openstack-operator-controller-operator-65cff6ddb4-mkfzv" Dec 09 09:00:26 
crc kubenswrapper[4786]: I1209 09:00:26.606357 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-676ll\" (UniqueName: \"kubernetes.io/projected/9415679e-cf70-4f02-aaf3-20aa363e9f86-kube-api-access-676ll\") pod \"openstack-operator-controller-operator-65cff6ddb4-mkfzv\" (UID: \"9415679e-cf70-4f02-aaf3-20aa363e9f86\") " pod="openstack-operators/openstack-operator-controller-operator-65cff6ddb4-mkfzv" Dec 09 09:00:26 crc kubenswrapper[4786]: I1209 09:00:26.706416 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-65cff6ddb4-mkfzv" Dec 09 09:00:27 crc kubenswrapper[4786]: I1209 09:00:27.180945 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-65cff6ddb4-mkfzv"] Dec 09 09:00:27 crc kubenswrapper[4786]: I1209 09:00:27.506561 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-65cff6ddb4-mkfzv" event={"ID":"9415679e-cf70-4f02-aaf3-20aa363e9f86","Type":"ContainerStarted","Data":"eb6d272e7005215ea00bd04431aca7778a86081e6e2598e4d42d94c509b04c05"} Dec 09 09:00:34 crc kubenswrapper[4786]: I1209 09:00:34.231236 4786 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 09:00:34 crc kubenswrapper[4786]: I1209 09:00:34.630530 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-65cff6ddb4-mkfzv" event={"ID":"9415679e-cf70-4f02-aaf3-20aa363e9f86","Type":"ContainerStarted","Data":"d9460c0a8749ed3cf104c73364a8ffdd17a9353be6f2544881c166be5ec8c7f3"} Dec 09 09:00:37 crc kubenswrapper[4786]: I1209 09:00:37.702889 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-65cff6ddb4-mkfzv" 
event={"ID":"9415679e-cf70-4f02-aaf3-20aa363e9f86","Type":"ContainerStarted","Data":"cb2b96fe3fcdbeb5316e709da65bfb38535083cf47b301400d75f1d334604c8a"} Dec 09 09:00:37 crc kubenswrapper[4786]: I1209 09:00:37.704264 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-65cff6ddb4-mkfzv" Dec 09 09:00:39 crc kubenswrapper[4786]: I1209 09:00:39.719229 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-65cff6ddb4-mkfzv" Dec 09 09:00:39 crc kubenswrapper[4786]: I1209 09:00:39.751606 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-65cff6ddb4-mkfzv" podStartSLOduration=3.992590495 podStartE2EDuration="13.751582643s" podCreationTimestamp="2025-12-09 09:00:26 +0000 UTC" firstStartedPulling="2025-12-09 09:00:27.185611453 +0000 UTC m=+993.069232689" lastFinishedPulling="2025-12-09 09:00:36.944603611 +0000 UTC m=+1002.828224837" observedRunningTime="2025-12-09 09:00:37.752530729 +0000 UTC m=+1003.636151985" watchObservedRunningTime="2025-12-09 09:00:39.751582643 +0000 UTC m=+1005.635203889" Dec 09 09:00:54 crc kubenswrapper[4786]: I1209 09:00:54.989548 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 09:00:54 crc kubenswrapper[4786]: I1209 09:00:54.990269 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 
09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.042264 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-5bfbbb859d-pldv5"] Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.043801 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-pldv5" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.047403 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-7w8zm" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.048033 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-748967c98-w7gzc"] Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.049232 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-748967c98-w7gzc" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.051852 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-krfxs" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.066263 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-748967c98-w7gzc"] Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.071689 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-5bfbbb859d-pldv5"] Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.118579 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6788cc6d75-6dlzn"] Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.119900 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-6dlzn" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.122504 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-52msh" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.132539 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-85fbd69fcd-q9mt5"] Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.134574 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-85fbd69fcd-q9mt5" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.143219 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-lhz9k" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.143729 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h9sn\" (UniqueName: \"kubernetes.io/projected/405fe0da-3e24-42cd-b73d-9d0cfe700614-kube-api-access-2h9sn\") pod \"cinder-operator-controller-manager-748967c98-w7gzc\" (UID: \"405fe0da-3e24-42cd-b73d-9d0cfe700614\") " pod="openstack-operators/cinder-operator-controller-manager-748967c98-w7gzc" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.143847 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62jmk\" (UniqueName: \"kubernetes.io/projected/0ebdf904-eeaa-4d7b-8f51-10e721a91538-kube-api-access-62jmk\") pod \"barbican-operator-controller-manager-5bfbbb859d-pldv5\" (UID: \"0ebdf904-eeaa-4d7b-8f51-10e721a91538\") " pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-pldv5" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.147098 4786 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack-operators/heat-operator-controller-manager-698d6fd7d6-mft9w"] Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.148532 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-mft9w" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.150864 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-4qkdg" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.159622 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-85fbd69fcd-q9mt5"] Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.178514 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6788cc6d75-6dlzn"] Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.200605 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-rphwz"] Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.202087 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-rphwz" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.205207 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-698d6fd7d6-mft9w"] Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.211508 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-x5hk4" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.232491 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-rphwz"] Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.235766 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-6c55d8d69b-w52pz"] Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.236926 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-6c55d8d69b-w52pz" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.242563 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.242756 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-nsv9b" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.244806 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqncr\" (UniqueName: \"kubernetes.io/projected/2bd616d0-3367-48bb-94a5-a22302102b89-kube-api-access-jqncr\") pod \"designate-operator-controller-manager-6788cc6d75-6dlzn\" (UID: \"2bd616d0-3367-48bb-94a5-a22302102b89\") " pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-6dlzn" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.244872 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62jmk\" (UniqueName: \"kubernetes.io/projected/0ebdf904-eeaa-4d7b-8f51-10e721a91538-kube-api-access-62jmk\") pod \"barbican-operator-controller-manager-5bfbbb859d-pldv5\" (UID: \"0ebdf904-eeaa-4d7b-8f51-10e721a91538\") " pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-pldv5" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.244932 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4szfj\" (UniqueName: \"kubernetes.io/projected/ce1c8ba3-ded1-4b8e-81a6-fcbaa81de736-kube-api-access-4szfj\") pod \"glance-operator-controller-manager-85fbd69fcd-q9mt5\" (UID: \"ce1c8ba3-ded1-4b8e-81a6-fcbaa81de736\") " pod="openstack-operators/glance-operator-controller-manager-85fbd69fcd-q9mt5" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.244952 4786 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h9sn\" (UniqueName: \"kubernetes.io/projected/405fe0da-3e24-42cd-b73d-9d0cfe700614-kube-api-access-2h9sn\") pod \"cinder-operator-controller-manager-748967c98-w7gzc\" (UID: \"405fe0da-3e24-42cd-b73d-9d0cfe700614\") " pod="openstack-operators/cinder-operator-controller-manager-748967c98-w7gzc" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.244974 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhrlp\" (UniqueName: \"kubernetes.io/projected/f52a27b2-d045-4a4b-8fe5-0160004d9a5f-kube-api-access-qhrlp\") pod \"heat-operator-controller-manager-698d6fd7d6-mft9w\" (UID: \"f52a27b2-d045-4a4b-8fe5-0160004d9a5f\") " pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-mft9w" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.245002 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gf45\" (UniqueName: \"kubernetes.io/projected/2ebe7b51-643e-4700-bf2f-cbe9546ae563-kube-api-access-2gf45\") pod \"horizon-operator-controller-manager-7d5d9fd47f-rphwz\" (UID: \"2ebe7b51-643e-4700-bf2f-cbe9546ae563\") " pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-rphwz" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.265103 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-6c55d8d69b-w52pz"] Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.281624 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-54485f899-kk584"] Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.282897 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-54485f899-kk584" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.323024 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2h9sn\" (UniqueName: \"kubernetes.io/projected/405fe0da-3e24-42cd-b73d-9d0cfe700614-kube-api-access-2h9sn\") pod \"cinder-operator-controller-manager-748967c98-w7gzc\" (UID: \"405fe0da-3e24-42cd-b73d-9d0cfe700614\") " pod="openstack-operators/cinder-operator-controller-manager-748967c98-w7gzc" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.329551 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62jmk\" (UniqueName: \"kubernetes.io/projected/0ebdf904-eeaa-4d7b-8f51-10e721a91538-kube-api-access-62jmk\") pod \"barbican-operator-controller-manager-5bfbbb859d-pldv5\" (UID: \"0ebdf904-eeaa-4d7b-8f51-10e721a91538\") " pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-pldv5" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.330359 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-8gpqq" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.332286 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-54485f899-kk584"] Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.362252 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgx79\" (UniqueName: \"kubernetes.io/projected/d658b716-de31-47c0-a352-28f6260b0144-kube-api-access-mgx79\") pod \"infra-operator-controller-manager-6c55d8d69b-w52pz\" (UID: \"d658b716-de31-47c0-a352-28f6260b0144\") " pod="openstack-operators/infra-operator-controller-manager-6c55d8d69b-w52pz" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.362658 4786 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v68br\" (UniqueName: \"kubernetes.io/projected/fd1844c2-cd01-475a-b2fa-e49c9223b7b4-kube-api-access-v68br\") pod \"ironic-operator-controller-manager-54485f899-kk584\" (UID: \"fd1844c2-cd01-475a-b2fa-e49c9223b7b4\") " pod="openstack-operators/ironic-operator-controller-manager-54485f899-kk584" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.362711 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4szfj\" (UniqueName: \"kubernetes.io/projected/ce1c8ba3-ded1-4b8e-81a6-fcbaa81de736-kube-api-access-4szfj\") pod \"glance-operator-controller-manager-85fbd69fcd-q9mt5\" (UID: \"ce1c8ba3-ded1-4b8e-81a6-fcbaa81de736\") " pod="openstack-operators/glance-operator-controller-manager-85fbd69fcd-q9mt5" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.362745 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhrlp\" (UniqueName: \"kubernetes.io/projected/f52a27b2-d045-4a4b-8fe5-0160004d9a5f-kube-api-access-qhrlp\") pod \"heat-operator-controller-manager-698d6fd7d6-mft9w\" (UID: \"f52a27b2-d045-4a4b-8fe5-0160004d9a5f\") " pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-mft9w" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.362797 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d658b716-de31-47c0-a352-28f6260b0144-cert\") pod \"infra-operator-controller-manager-6c55d8d69b-w52pz\" (UID: \"d658b716-de31-47c0-a352-28f6260b0144\") " pod="openstack-operators/infra-operator-controller-manager-6c55d8d69b-w52pz" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.362830 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gf45\" (UniqueName: 
\"kubernetes.io/projected/2ebe7b51-643e-4700-bf2f-cbe9546ae563-kube-api-access-2gf45\") pod \"horizon-operator-controller-manager-7d5d9fd47f-rphwz\" (UID: \"2ebe7b51-643e-4700-bf2f-cbe9546ae563\") " pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-rphwz" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.362891 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-pldv5" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.362906 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqncr\" (UniqueName: \"kubernetes.io/projected/2bd616d0-3367-48bb-94a5-a22302102b89-kube-api-access-jqncr\") pod \"designate-operator-controller-manager-6788cc6d75-6dlzn\" (UID: \"2bd616d0-3367-48bb-94a5-a22302102b89\") " pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-6dlzn" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.377688 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-748967c98-w7gzc" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.410077 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqncr\" (UniqueName: \"kubernetes.io/projected/2bd616d0-3367-48bb-94a5-a22302102b89-kube-api-access-jqncr\") pod \"designate-operator-controller-manager-6788cc6d75-6dlzn\" (UID: \"2bd616d0-3367-48bb-94a5-a22302102b89\") " pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-6dlzn" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.415769 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gf45\" (UniqueName: \"kubernetes.io/projected/2ebe7b51-643e-4700-bf2f-cbe9546ae563-kube-api-access-2gf45\") pod \"horizon-operator-controller-manager-7d5d9fd47f-rphwz\" (UID: \"2ebe7b51-643e-4700-bf2f-cbe9546ae563\") " pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-rphwz" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.416003 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhrlp\" (UniqueName: \"kubernetes.io/projected/f52a27b2-d045-4a4b-8fe5-0160004d9a5f-kube-api-access-qhrlp\") pod \"heat-operator-controller-manager-698d6fd7d6-mft9w\" (UID: \"f52a27b2-d045-4a4b-8fe5-0160004d9a5f\") " pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-mft9w" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.417076 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-79cc9d59f5-lfmfw"] Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.420115 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-79cc9d59f5-lfmfw" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.426067 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-hwxwl" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.435400 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4szfj\" (UniqueName: \"kubernetes.io/projected/ce1c8ba3-ded1-4b8e-81a6-fcbaa81de736-kube-api-access-4szfj\") pod \"glance-operator-controller-manager-85fbd69fcd-q9mt5\" (UID: \"ce1c8ba3-ded1-4b8e-81a6-fcbaa81de736\") " pod="openstack-operators/glance-operator-controller-manager-85fbd69fcd-q9mt5" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.436024 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-6dlzn" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.469917 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-85fbd69fcd-q9mt5" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.470947 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d658b716-de31-47c0-a352-28f6260b0144-cert\") pod \"infra-operator-controller-manager-6c55d8d69b-w52pz\" (UID: \"d658b716-de31-47c0-a352-28f6260b0144\") " pod="openstack-operators/infra-operator-controller-manager-6c55d8d69b-w52pz" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.471005 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rm9sf\" (UniqueName: \"kubernetes.io/projected/9b7f6902-b444-48d4-b2d2-7342e62c8811-kube-api-access-rm9sf\") pod \"keystone-operator-controller-manager-79cc9d59f5-lfmfw\" (UID: \"9b7f6902-b444-48d4-b2d2-7342e62c8811\") " pod="openstack-operators/keystone-operator-controller-manager-79cc9d59f5-lfmfw" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.471049 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mgx79\" (UniqueName: \"kubernetes.io/projected/d658b716-de31-47c0-a352-28f6260b0144-kube-api-access-mgx79\") pod \"infra-operator-controller-manager-6c55d8d69b-w52pz\" (UID: \"d658b716-de31-47c0-a352-28f6260b0144\") " pod="openstack-operators/infra-operator-controller-manager-6c55d8d69b-w52pz" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.471091 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v68br\" (UniqueName: \"kubernetes.io/projected/fd1844c2-cd01-475a-b2fa-e49c9223b7b4-kube-api-access-v68br\") pod \"ironic-operator-controller-manager-54485f899-kk584\" (UID: \"fd1844c2-cd01-475a-b2fa-e49c9223b7b4\") " pod="openstack-operators/ironic-operator-controller-manager-54485f899-kk584" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.483846 4786 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d658b716-de31-47c0-a352-28f6260b0144-cert\") pod \"infra-operator-controller-manager-6c55d8d69b-w52pz\" (UID: \"d658b716-de31-47c0-a352-28f6260b0144\") " pod="openstack-operators/infra-operator-controller-manager-6c55d8d69b-w52pz" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.484226 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-mft9w" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.493476 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-79cc9d59f5-lfmfw"] Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.517541 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v68br\" (UniqueName: \"kubernetes.io/projected/fd1844c2-cd01-475a-b2fa-e49c9223b7b4-kube-api-access-v68br\") pod \"ironic-operator-controller-manager-54485f899-kk584\" (UID: \"fd1844c2-cd01-475a-b2fa-e49c9223b7b4\") " pod="openstack-operators/ironic-operator-controller-manager-54485f899-kk584" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.532082 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-rphwz" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.569565 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-5cbc8c7f96-9qxcn"] Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.572752 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rm9sf\" (UniqueName: \"kubernetes.io/projected/9b7f6902-b444-48d4-b2d2-7342e62c8811-kube-api-access-rm9sf\") pod \"keystone-operator-controller-manager-79cc9d59f5-lfmfw\" (UID: \"9b7f6902-b444-48d4-b2d2-7342e62c8811\") " pod="openstack-operators/keystone-operator-controller-manager-79cc9d59f5-lfmfw" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.589363 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5cbc8c7f96-9qxcn" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.665923 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-4f8mc" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.674627 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhj7j\" (UniqueName: \"kubernetes.io/projected/12b96437-95ee-4267-8eb2-569b9a93ef8d-kube-api-access-xhj7j\") pod \"manila-operator-controller-manager-5cbc8c7f96-9qxcn\" (UID: \"12b96437-95ee-4267-8eb2-569b9a93ef8d\") " pod="openstack-operators/manila-operator-controller-manager-5cbc8c7f96-9qxcn" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.691542 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mgx79\" (UniqueName: \"kubernetes.io/projected/d658b716-de31-47c0-a352-28f6260b0144-kube-api-access-mgx79\") pod \"infra-operator-controller-manager-6c55d8d69b-w52pz\" (UID: 
\"d658b716-de31-47c0-a352-28f6260b0144\") " pod="openstack-operators/infra-operator-controller-manager-6c55d8d69b-w52pz" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.692088 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-54485f899-kk584" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.736511 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5cbc8c7f96-9qxcn"] Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.737063 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rm9sf\" (UniqueName: \"kubernetes.io/projected/9b7f6902-b444-48d4-b2d2-7342e62c8811-kube-api-access-rm9sf\") pod \"keystone-operator-controller-manager-79cc9d59f5-lfmfw\" (UID: \"9b7f6902-b444-48d4-b2d2-7342e62c8811\") " pod="openstack-operators/keystone-operator-controller-manager-79cc9d59f5-lfmfw" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.765796 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-64d7c556cd-qqwdd"] Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.767515 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-qqwdd" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.769540 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-64d7c556cd-qqwdd"] Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.776153 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-rlll5" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.783720 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-58879495c-glr8m"] Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.784509 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhj7j\" (UniqueName: \"kubernetes.io/projected/12b96437-95ee-4267-8eb2-569b9a93ef8d-kube-api-access-xhj7j\") pod \"manila-operator-controller-manager-5cbc8c7f96-9qxcn\" (UID: \"12b96437-95ee-4267-8eb2-569b9a93ef8d\") " pod="openstack-operators/manila-operator-controller-manager-5cbc8c7f96-9qxcn" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.784644 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v55jw\" (UniqueName: \"kubernetes.io/projected/061bb0fd-451d-4d15-b979-a6ea9b833fb1-kube-api-access-v55jw\") pod \"mariadb-operator-controller-manager-64d7c556cd-qqwdd\" (UID: \"061bb0fd-451d-4d15-b979-a6ea9b833fb1\") " pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-qqwdd" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.785696 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-58879495c-glr8m" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.788033 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-r28mx" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.801919 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-58879495c-glr8m"] Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.830183 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-79d658b66d-rrlrt"] Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.831522 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79d658b66d-rrlrt" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.836737 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-79cc9d59f5-lfmfw" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.844504 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79d658b66d-rrlrt"] Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.863939 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-d5fb87cb8-2ql48"] Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.865236 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-d5fb87cb8-2ql48" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.871052 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-6c55d8d69b-w52pz" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.888954 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v55jw\" (UniqueName: \"kubernetes.io/projected/061bb0fd-451d-4d15-b979-a6ea9b833fb1-kube-api-access-v55jw\") pod \"mariadb-operator-controller-manager-64d7c556cd-qqwdd\" (UID: \"061bb0fd-451d-4d15-b979-a6ea9b833fb1\") " pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-qqwdd" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.900572 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-d5fb87cb8-2ql48"] Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.920808 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-qkq7f"] Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.922674 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-qkq7f" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.969412 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-pwfxf"] Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.971069 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-pwfxf" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.989719 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-qkq7f"] Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.990663 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mv7p\" (UniqueName: \"kubernetes.io/projected/e551e183-3965-40da-88e6-bbbcd6e3cbe5-kube-api-access-9mv7p\") pod \"nova-operator-controller-manager-79d658b66d-rrlrt\" (UID: \"e551e183-3965-40da-88e6-bbbcd6e3cbe5\") " pod="openstack-operators/nova-operator-controller-manager-79d658b66d-rrlrt" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.990762 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtrjc\" (UniqueName: \"kubernetes.io/projected/c117d831-6ff8-4e04-833a-242c22702cc3-kube-api-access-mtrjc\") pod \"octavia-operator-controller-manager-d5fb87cb8-2ql48\" (UID: \"c117d831-6ff8-4e04-833a-242c22702cc3\") " pod="openstack-operators/octavia-operator-controller-manager-d5fb87cb8-2ql48" Dec 09 09:00:58 crc kubenswrapper[4786]: I1209 09:00:58.990854 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l72r\" (UniqueName: \"kubernetes.io/projected/62fae6d8-3c6a-403c-9cc6-463e41a0bbe7-kube-api-access-8l72r\") pod \"neutron-operator-controller-manager-58879495c-glr8m\" (UID: \"62fae6d8-3c6a-403c-9cc6-463e41a0bbe7\") " pod="openstack-operators/neutron-operator-controller-manager-58879495c-glr8m" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.004629 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-kfswx" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 
09:00:59.004887 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-qtf6p" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.005372 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.005614 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-pwrjr" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.005752 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-frvhf" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.033148 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v55jw\" (UniqueName: \"kubernetes.io/projected/061bb0fd-451d-4d15-b979-a6ea9b833fb1-kube-api-access-v55jw\") pod \"mariadb-operator-controller-manager-64d7c556cd-qqwdd\" (UID: \"061bb0fd-451d-4d15-b979-a6ea9b833fb1\") " pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-qqwdd" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.037581 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-pwfxf"] Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.038408 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhj7j\" (UniqueName: \"kubernetes.io/projected/12b96437-95ee-4267-8eb2-569b9a93ef8d-kube-api-access-xhj7j\") pod \"manila-operator-controller-manager-5cbc8c7f96-9qxcn\" (UID: \"12b96437-95ee-4267-8eb2-569b9a93ef8d\") " pod="openstack-operators/manila-operator-controller-manager-5cbc8c7f96-9qxcn" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.085937 4786 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/placement-operator-controller-manager-867d87977b-4mg9x"] Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.091079 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-867d87977b-4mg9x" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.091864 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mv7p\" (UniqueName: \"kubernetes.io/projected/e551e183-3965-40da-88e6-bbbcd6e3cbe5-kube-api-access-9mv7p\") pod \"nova-operator-controller-manager-79d658b66d-rrlrt\" (UID: \"e551e183-3965-40da-88e6-bbbcd6e3cbe5\") " pod="openstack-operators/nova-operator-controller-manager-79d658b66d-rrlrt" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.092822 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jghz5\" (UniqueName: \"kubernetes.io/projected/2bc2193d-47f3-470a-a773-db2124fc8351-kube-api-access-jghz5\") pod \"ovn-operator-controller-manager-5b67cfc8fb-pwfxf\" (UID: \"2bc2193d-47f3-470a-a773-db2124fc8351\") " pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-pwfxf" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.092923 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtrjc\" (UniqueName: \"kubernetes.io/projected/c117d831-6ff8-4e04-833a-242c22702cc3-kube-api-access-mtrjc\") pod \"octavia-operator-controller-manager-d5fb87cb8-2ql48\" (UID: \"c117d831-6ff8-4e04-833a-242c22702cc3\") " pod="openstack-operators/octavia-operator-controller-manager-d5fb87cb8-2ql48" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.093032 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ac272f60-c2a5-41a3-a48b-e499e7717667-cert\") pod 
\"openstack-baremetal-operator-controller-manager-77868f484-qkq7f\" (UID: \"ac272f60-c2a5-41a3-a48b-e499e7717667\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-qkq7f" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.093146 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mwxg\" (UniqueName: \"kubernetes.io/projected/ac272f60-c2a5-41a3-a48b-e499e7717667-kube-api-access-6mwxg\") pod \"openstack-baremetal-operator-controller-manager-77868f484-qkq7f\" (UID: \"ac272f60-c2a5-41a3-a48b-e499e7717667\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-qkq7f" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.093276 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8l72r\" (UniqueName: \"kubernetes.io/projected/62fae6d8-3c6a-403c-9cc6-463e41a0bbe7-kube-api-access-8l72r\") pod \"neutron-operator-controller-manager-58879495c-glr8m\" (UID: \"62fae6d8-3c6a-403c-9cc6-463e41a0bbe7\") " pod="openstack-operators/neutron-operator-controller-manager-58879495c-glr8m" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.092207 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-8f6687c44-lfcn5"] Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.094825 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-8f6687c44-lfcn5" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.102180 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-867d87977b-4mg9x"] Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.108847 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-695797c565-h57cz"] Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.111025 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-695797c565-h57cz" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.123957 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-bb86466d8-m6pzg"] Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.134353 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-rwwbx" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.134636 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-689t7" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.148923 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-s9nsc" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.150373 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-bb86466d8-m6pzg" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.186517 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-8f6687c44-lfcn5"] Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.198309 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mwxg\" (UniqueName: \"kubernetes.io/projected/ac272f60-c2a5-41a3-a48b-e499e7717667-kube-api-access-6mwxg\") pod \"openstack-baremetal-operator-controller-manager-77868f484-qkq7f\" (UID: \"ac272f60-c2a5-41a3-a48b-e499e7717667\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-qkq7f" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.198705 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jghz5\" (UniqueName: \"kubernetes.io/projected/2bc2193d-47f3-470a-a773-db2124fc8351-kube-api-access-jghz5\") pod \"ovn-operator-controller-manager-5b67cfc8fb-pwfxf\" (UID: \"2bc2193d-47f3-470a-a773-db2124fc8351\") " pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-pwfxf" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.198862 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ac272f60-c2a5-41a3-a48b-e499e7717667-cert\") pod \"openstack-baremetal-operator-controller-manager-77868f484-qkq7f\" (UID: \"ac272f60-c2a5-41a3-a48b-e499e7717667\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-qkq7f" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.199451 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw7x8\" (UniqueName: \"kubernetes.io/projected/cbea13a0-662c-4a51-9cfa-a0904713fc0f-kube-api-access-zw7x8\") pod 
\"placement-operator-controller-manager-867d87977b-4mg9x\" (UID: \"cbea13a0-662c-4a51-9cfa-a0904713fc0f\") " pod="openstack-operators/placement-operator-controller-manager-867d87977b-4mg9x" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.199521 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv6lk\" (UniqueName: \"kubernetes.io/projected/c0263a18-de54-4c70-9ef7-508d86abed06-kube-api-access-xv6lk\") pod \"swift-operator-controller-manager-8f6687c44-lfcn5\" (UID: \"c0263a18-de54-4c70-9ef7-508d86abed06\") " pod="openstack-operators/swift-operator-controller-manager-8f6687c44-lfcn5" Dec 09 09:00:59 crc kubenswrapper[4786]: E1209 09:00:59.201416 4786 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 09:00:59 crc kubenswrapper[4786]: E1209 09:00:59.201558 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ac272f60-c2a5-41a3-a48b-e499e7717667-cert podName:ac272f60-c2a5-41a3-a48b-e499e7717667 nodeName:}" failed. No retries permitted until 2025-12-09 09:00:59.701496734 +0000 UTC m=+1025.585117960 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ac272f60-c2a5-41a3-a48b-e499e7717667-cert") pod "openstack-baremetal-operator-controller-manager-77868f484-qkq7f" (UID: "ac272f60-c2a5-41a3-a48b-e499e7717667") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.239957 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtrjc\" (UniqueName: \"kubernetes.io/projected/c117d831-6ff8-4e04-833a-242c22702cc3-kube-api-access-mtrjc\") pod \"octavia-operator-controller-manager-d5fb87cb8-2ql48\" (UID: \"c117d831-6ff8-4e04-833a-242c22702cc3\") " pod="openstack-operators/octavia-operator-controller-manager-d5fb87cb8-2ql48" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.239998 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mv7p\" (UniqueName: \"kubernetes.io/projected/e551e183-3965-40da-88e6-bbbcd6e3cbe5-kube-api-access-9mv7p\") pod \"nova-operator-controller-manager-79d658b66d-rrlrt\" (UID: \"e551e183-3965-40da-88e6-bbbcd6e3cbe5\") " pod="openstack-operators/nova-operator-controller-manager-79d658b66d-rrlrt" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.240392 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-pc424" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.240599 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8l72r\" (UniqueName: \"kubernetes.io/projected/62fae6d8-3c6a-403c-9cc6-463e41a0bbe7-kube-api-access-8l72r\") pod \"neutron-operator-controller-manager-58879495c-glr8m\" (UID: \"62fae6d8-3c6a-403c-9cc6-463e41a0bbe7\") " pod="openstack-operators/neutron-operator-controller-manager-58879495c-glr8m" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.244881 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-6mwxg\" (UniqueName: \"kubernetes.io/projected/ac272f60-c2a5-41a3-a48b-e499e7717667-kube-api-access-6mwxg\") pod \"openstack-baremetal-operator-controller-manager-77868f484-qkq7f\" (UID: \"ac272f60-c2a5-41a3-a48b-e499e7717667\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-qkq7f" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.280500 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-qqwdd" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.303211 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jvk4\" (UniqueName: \"kubernetes.io/projected/2db3dcee-6f5b-487e-b425-ec7be9530815-kube-api-access-4jvk4\") pod \"telemetry-operator-controller-manager-695797c565-h57cz\" (UID: \"2db3dcee-6f5b-487e-b425-ec7be9530815\") " pod="openstack-operators/telemetry-operator-controller-manager-695797c565-h57cz" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.303279 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2hbh\" (UniqueName: \"kubernetes.io/projected/7229805c-3f98-437c-a3fe-b4031a2b7fa6-kube-api-access-d2hbh\") pod \"test-operator-controller-manager-bb86466d8-m6pzg\" (UID: \"7229805c-3f98-437c-a3fe-b4031a2b7fa6\") " pod="openstack-operators/test-operator-controller-manager-bb86466d8-m6pzg" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.303381 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zw7x8\" (UniqueName: \"kubernetes.io/projected/cbea13a0-662c-4a51-9cfa-a0904713fc0f-kube-api-access-zw7x8\") pod \"placement-operator-controller-manager-867d87977b-4mg9x\" (UID: \"cbea13a0-662c-4a51-9cfa-a0904713fc0f\") " pod="openstack-operators/placement-operator-controller-manager-867d87977b-4mg9x" Dec 09 
09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.303458 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv6lk\" (UniqueName: \"kubernetes.io/projected/c0263a18-de54-4c70-9ef7-508d86abed06-kube-api-access-xv6lk\") pod \"swift-operator-controller-manager-8f6687c44-lfcn5\" (UID: \"c0263a18-de54-4c70-9ef7-508d86abed06\") " pod="openstack-operators/swift-operator-controller-manager-8f6687c44-lfcn5" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.325821 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5cbc8c7f96-9qxcn" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.339708 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jghz5\" (UniqueName: \"kubernetes.io/projected/2bc2193d-47f3-470a-a773-db2124fc8351-kube-api-access-jghz5\") pod \"ovn-operator-controller-manager-5b67cfc8fb-pwfxf\" (UID: \"2bc2193d-47f3-470a-a773-db2124fc8351\") " pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-pwfxf" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.363118 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zw7x8\" (UniqueName: \"kubernetes.io/projected/cbea13a0-662c-4a51-9cfa-a0904713fc0f-kube-api-access-zw7x8\") pod \"placement-operator-controller-manager-867d87977b-4mg9x\" (UID: \"cbea13a0-662c-4a51-9cfa-a0904713fc0f\") " pod="openstack-operators/placement-operator-controller-manager-867d87977b-4mg9x" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.363602 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv6lk\" (UniqueName: \"kubernetes.io/projected/c0263a18-de54-4c70-9ef7-508d86abed06-kube-api-access-xv6lk\") pod \"swift-operator-controller-manager-8f6687c44-lfcn5\" (UID: \"c0263a18-de54-4c70-9ef7-508d86abed06\") " 
pod="openstack-operators/swift-operator-controller-manager-8f6687c44-lfcn5" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.383655 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5d7f5df9d6-kwmgc"] Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.385126 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-bb86466d8-m6pzg"] Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.385161 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-695797c565-h57cz"] Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.385180 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5d7f5df9d6-kwmgc"] Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.385192 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-9969bcdf-xb28j"] Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.386400 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-9969bcdf-xb28j"] Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.386581 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-9969bcdf-xb28j" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.390358 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5d7f5df9d6-kwmgc" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.396858 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.396952 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-2jht9" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.405355 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jvk4\" (UniqueName: \"kubernetes.io/projected/2db3dcee-6f5b-487e-b425-ec7be9530815-kube-api-access-4jvk4\") pod \"telemetry-operator-controller-manager-695797c565-h57cz\" (UID: \"2db3dcee-6f5b-487e-b425-ec7be9530815\") " pod="openstack-operators/telemetry-operator-controller-manager-695797c565-h57cz" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.405419 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2hbh\" (UniqueName: \"kubernetes.io/projected/7229805c-3f98-437c-a3fe-b4031a2b7fa6-kube-api-access-d2hbh\") pod \"test-operator-controller-manager-bb86466d8-m6pzg\" (UID: \"7229805c-3f98-437c-a3fe-b4031a2b7fa6\") " pod="openstack-operators/test-operator-controller-manager-bb86466d8-m6pzg" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.408328 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-hwkqb" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.420518 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-744dg"] Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.421869 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-744dg" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.433041 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-nnckw" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.455603 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-744dg"] Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.471849 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jvk4\" (UniqueName: \"kubernetes.io/projected/2db3dcee-6f5b-487e-b425-ec7be9530815-kube-api-access-4jvk4\") pod \"telemetry-operator-controller-manager-695797c565-h57cz\" (UID: \"2db3dcee-6f5b-487e-b425-ec7be9530815\") " pod="openstack-operators/telemetry-operator-controller-manager-695797c565-h57cz" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.472057 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2hbh\" (UniqueName: \"kubernetes.io/projected/7229805c-3f98-437c-a3fe-b4031a2b7fa6-kube-api-access-d2hbh\") pod \"test-operator-controller-manager-bb86466d8-m6pzg\" (UID: \"7229805c-3f98-437c-a3fe-b4031a2b7fa6\") " pod="openstack-operators/test-operator-controller-manager-bb86466d8-m6pzg" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.507791 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/672e5a98-1fd6-4667-9a55-6a84ea13d77c-cert\") pod \"openstack-operator-controller-manager-9969bcdf-xb28j\" (UID: \"672e5a98-1fd6-4667-9a55-6a84ea13d77c\") " pod="openstack-operators/openstack-operator-controller-manager-9969bcdf-xb28j" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.507881 4786 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpvjz\" (UniqueName: \"kubernetes.io/projected/2d179ee0-ed61-44f8-80e8-622ee7ed3876-kube-api-access-xpvjz\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-744dg\" (UID: \"2d179ee0-ed61-44f8-80e8-622ee7ed3876\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-744dg" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.507940 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d86d\" (UniqueName: \"kubernetes.io/projected/c12be72a-ac87-4e8f-a061-b68b3f5cb115-kube-api-access-5d86d\") pod \"watcher-operator-controller-manager-5d7f5df9d6-kwmgc\" (UID: \"c12be72a-ac87-4e8f-a061-b68b3f5cb115\") " pod="openstack-operators/watcher-operator-controller-manager-5d7f5df9d6-kwmgc" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.507962 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjrnl\" (UniqueName: \"kubernetes.io/projected/672e5a98-1fd6-4667-9a55-6a84ea13d77c-kube-api-access-pjrnl\") pod \"openstack-operator-controller-manager-9969bcdf-xb28j\" (UID: \"672e5a98-1fd6-4667-9a55-6a84ea13d77c\") " pod="openstack-operators/openstack-operator-controller-manager-9969bcdf-xb28j" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.609924 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/672e5a98-1fd6-4667-9a55-6a84ea13d77c-cert\") pod \"openstack-operator-controller-manager-9969bcdf-xb28j\" (UID: \"672e5a98-1fd6-4667-9a55-6a84ea13d77c\") " pod="openstack-operators/openstack-operator-controller-manager-9969bcdf-xb28j" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.610550 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpvjz\" (UniqueName: 
\"kubernetes.io/projected/2d179ee0-ed61-44f8-80e8-622ee7ed3876-kube-api-access-xpvjz\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-744dg\" (UID: \"2d179ee0-ed61-44f8-80e8-622ee7ed3876\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-744dg" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.610627 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5d86d\" (UniqueName: \"kubernetes.io/projected/c12be72a-ac87-4e8f-a061-b68b3f5cb115-kube-api-access-5d86d\") pod \"watcher-operator-controller-manager-5d7f5df9d6-kwmgc\" (UID: \"c12be72a-ac87-4e8f-a061-b68b3f5cb115\") " pod="openstack-operators/watcher-operator-controller-manager-5d7f5df9d6-kwmgc" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.610662 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjrnl\" (UniqueName: \"kubernetes.io/projected/672e5a98-1fd6-4667-9a55-6a84ea13d77c-kube-api-access-pjrnl\") pod \"openstack-operator-controller-manager-9969bcdf-xb28j\" (UID: \"672e5a98-1fd6-4667-9a55-6a84ea13d77c\") " pod="openstack-operators/openstack-operator-controller-manager-9969bcdf-xb28j" Dec 09 09:00:59 crc kubenswrapper[4786]: E1209 09:00:59.621799 4786 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 09 09:00:59 crc kubenswrapper[4786]: E1209 09:00:59.621904 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/672e5a98-1fd6-4667-9a55-6a84ea13d77c-cert podName:672e5a98-1fd6-4667-9a55-6a84ea13d77c nodeName:}" failed. No retries permitted until 2025-12-09 09:01:00.121878994 +0000 UTC m=+1026.005500220 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/672e5a98-1fd6-4667-9a55-6a84ea13d77c-cert") pod "openstack-operator-controller-manager-9969bcdf-xb28j" (UID: "672e5a98-1fd6-4667-9a55-6a84ea13d77c") : secret "webhook-server-cert" not found Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.697605 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5d86d\" (UniqueName: \"kubernetes.io/projected/c12be72a-ac87-4e8f-a061-b68b3f5cb115-kube-api-access-5d86d\") pod \"watcher-operator-controller-manager-5d7f5df9d6-kwmgc\" (UID: \"c12be72a-ac87-4e8f-a061-b68b3f5cb115\") " pod="openstack-operators/watcher-operator-controller-manager-5d7f5df9d6-kwmgc" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.703209 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjrnl\" (UniqueName: \"kubernetes.io/projected/672e5a98-1fd6-4667-9a55-6a84ea13d77c-kube-api-access-pjrnl\") pod \"openstack-operator-controller-manager-9969bcdf-xb28j\" (UID: \"672e5a98-1fd6-4667-9a55-6a84ea13d77c\") " pod="openstack-operators/openstack-operator-controller-manager-9969bcdf-xb28j" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.712282 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ac272f60-c2a5-41a3-a48b-e499e7717667-cert\") pod \"openstack-baremetal-operator-controller-manager-77868f484-qkq7f\" (UID: \"ac272f60-c2a5-41a3-a48b-e499e7717667\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-qkq7f" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.715876 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ac272f60-c2a5-41a3-a48b-e499e7717667-cert\") pod \"openstack-baremetal-operator-controller-manager-77868f484-qkq7f\" (UID: \"ac272f60-c2a5-41a3-a48b-e499e7717667\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-qkq7f" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.758855 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpvjz\" (UniqueName: \"kubernetes.io/projected/2d179ee0-ed61-44f8-80e8-622ee7ed3876-kube-api-access-xpvjz\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-744dg\" (UID: \"2d179ee0-ed61-44f8-80e8-622ee7ed3876\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-744dg" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.773502 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-58879495c-glr8m" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.870259 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-79d658b66d-rrlrt" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.936277 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-d5fb87cb8-2ql48" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.953563 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-qkq7f" Dec 09 09:00:59 crc kubenswrapper[4786]: I1209 09:00:59.985153 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-pwfxf" Dec 09 09:01:00 crc kubenswrapper[4786]: I1209 09:01:00.084856 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-867d87977b-4mg9x" Dec 09 09:01:00 crc kubenswrapper[4786]: I1209 09:01:00.101097 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-8f6687c44-lfcn5" Dec 09 09:01:00 crc kubenswrapper[4786]: I1209 09:01:00.111858 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-695797c565-h57cz" Dec 09 09:01:00 crc kubenswrapper[4786]: I1209 09:01:00.126278 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-bb86466d8-m6pzg" Dec 09 09:01:00 crc kubenswrapper[4786]: I1209 09:01:00.146516 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/672e5a98-1fd6-4667-9a55-6a84ea13d77c-cert\") pod \"openstack-operator-controller-manager-9969bcdf-xb28j\" (UID: \"672e5a98-1fd6-4667-9a55-6a84ea13d77c\") " pod="openstack-operators/openstack-operator-controller-manager-9969bcdf-xb28j" Dec 09 09:01:00 crc kubenswrapper[4786]: I1209 09:01:00.149251 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-5d7f5df9d6-kwmgc" Dec 09 09:01:00 crc kubenswrapper[4786]: I1209 09:01:00.155946 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/672e5a98-1fd6-4667-9a55-6a84ea13d77c-cert\") pod \"openstack-operator-controller-manager-9969bcdf-xb28j\" (UID: \"672e5a98-1fd6-4667-9a55-6a84ea13d77c\") " pod="openstack-operators/openstack-operator-controller-manager-9969bcdf-xb28j" Dec 09 09:01:00 crc kubenswrapper[4786]: I1209 09:01:00.160932 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-744dg" Dec 09 09:01:00 crc kubenswrapper[4786]: I1209 09:01:00.438464 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-9969bcdf-xb28j" Dec 09 09:01:01 crc kubenswrapper[4786]: I1209 09:01:01.283739 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-748967c98-w7gzc"] Dec 09 09:01:01 crc kubenswrapper[4786]: I1209 09:01:01.353091 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-6c55d8d69b-w52pz"] Dec 09 09:01:01 crc kubenswrapper[4786]: I1209 09:01:01.376281 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-5bfbbb859d-pldv5"] Dec 09 09:01:02 crc kubenswrapper[4786]: I1209 09:01:02.100406 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-85fbd69fcd-q9mt5"] Dec 09 09:01:02 crc kubenswrapper[4786]: I1209 09:01:02.128799 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-85fbd69fcd-q9mt5" event={"ID":"ce1c8ba3-ded1-4b8e-81a6-fcbaa81de736","Type":"ContainerStarted","Data":"31708602db5f0631e20fe820175de155d2b5a57eafe4b85f05f21b702a374ab0"} Dec 09 09:01:02 crc kubenswrapper[4786]: I1209 09:01:02.129321 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-79cc9d59f5-lfmfw"] Dec 09 09:01:02 crc kubenswrapper[4786]: I1209 09:01:02.132802 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6c55d8d69b-w52pz" event={"ID":"d658b716-de31-47c0-a352-28f6260b0144","Type":"ContainerStarted","Data":"75965870db12e6b9cd85ce3e623de78b1e51e58b54c8a42eabb3a2e4be306eea"} Dec 09 09:01:02 crc kubenswrapper[4786]: I1209 09:01:02.138528 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-748967c98-w7gzc" 
event={"ID":"405fe0da-3e24-42cd-b73d-9d0cfe700614","Type":"ContainerStarted","Data":"64bea3ab2335f33b5241e024e19deef44495780be8663983d4d87ea5b8c65c35"} Dec 09 09:01:02 crc kubenswrapper[4786]: I1209 09:01:02.139342 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-698d6fd7d6-mft9w"] Dec 09 09:01:02 crc kubenswrapper[4786]: I1209 09:01:02.147826 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-pldv5" event={"ID":"0ebdf904-eeaa-4d7b-8f51-10e721a91538","Type":"ContainerStarted","Data":"b56ea93673a3a0f26ad760c74578de3ad53d8636a4850a814696c5a9b4d667d9"} Dec 09 09:01:02 crc kubenswrapper[4786]: I1209 09:01:02.148793 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-54485f899-kk584"] Dec 09 09:01:02 crc kubenswrapper[4786]: I1209 09:01:02.154943 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-rphwz"] Dec 09 09:01:02 crc kubenswrapper[4786]: I1209 09:01:02.162269 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5cbc8c7f96-9qxcn"] Dec 09 09:01:02 crc kubenswrapper[4786]: I1209 09:01:02.168235 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6788cc6d75-6dlzn"] Dec 09 09:01:02 crc kubenswrapper[4786]: I1209 09:01:02.378041 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-bb86466d8-m6pzg"] Dec 09 09:01:02 crc kubenswrapper[4786]: I1209 09:01:02.388214 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-695797c565-h57cz"] Dec 09 09:01:02 crc kubenswrapper[4786]: I1209 09:01:02.391586 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/placement-operator-controller-manager-867d87977b-4mg9x"] Dec 09 09:01:02 crc kubenswrapper[4786]: I1209 09:01:02.399974 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-d5fb87cb8-2ql48"] Dec 09 09:01:02 crc kubenswrapper[4786]: I1209 09:01:02.413274 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-79d658b66d-rrlrt"] Dec 09 09:01:02 crc kubenswrapper[4786]: I1209 09:01:02.419663 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-pwfxf"] Dec 09 09:01:02 crc kubenswrapper[4786]: I1209 09:01:02.430712 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-9969bcdf-xb28j"] Dec 09 09:01:02 crc kubenswrapper[4786]: I1209 09:01:02.435370 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-8f6687c44-lfcn5"] Dec 09 09:01:02 crc kubenswrapper[4786]: I1209 09:01:02.444329 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-5d7f5df9d6-kwmgc"] Dec 09 09:01:02 crc kubenswrapper[4786]: I1209 09:01:02.448990 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-64d7c556cd-qqwdd"] Dec 09 09:01:02 crc kubenswrapper[4786]: I1209 09:01:02.454580 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-58879495c-glr8m"] Dec 09 09:01:02 crc kubenswrapper[4786]: I1209 09:01:02.459208 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-744dg"] Dec 09 09:01:02 crc kubenswrapper[4786]: I1209 09:01:02.463125 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-qkq7f"] Dec 09 09:01:02 crc kubenswrapper[4786]: W1209 09:01:02.653403 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf52a27b2_d045_4a4b_8fe5_0160004d9a5f.slice/crio-c861fa51fbed6ffd3432ce2e2c0779e290519bd840284a6c0bbc15462bdebf9f WatchSource:0}: Error finding container c861fa51fbed6ffd3432ce2e2c0779e290519bd840284a6c0bbc15462bdebf9f: Status 404 returned error can't find the container with id c861fa51fbed6ffd3432ce2e2c0779e290519bd840284a6c0bbc15462bdebf9f Dec 09 09:01:02 crc kubenswrapper[4786]: W1209 09:01:02.654799 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9b7f6902_b444_48d4_b2d2_7342e62c8811.slice/crio-780146b54c7851094274c3582e6e69bad3b77a419fdf4bb80df900371fc165ed WatchSource:0}: Error finding container 780146b54c7851094274c3582e6e69bad3b77a419fdf4bb80df900371fc165ed: Status 404 returned error can't find the container with id 780146b54c7851094274c3582e6e69bad3b77a419fdf4bb80df900371fc165ed Dec 09 09:01:02 crc kubenswrapper[4786]: W1209 09:01:02.659079 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ebe7b51_643e_4700_bf2f_cbe9546ae563.slice/crio-aea624f5f6e0c5bda3a20bba6199b65d259e8c07859d436a60e30e827c95ba57 WatchSource:0}: Error finding container aea624f5f6e0c5bda3a20bba6199b65d259e8c07859d436a60e30e827c95ba57: Status 404 returned error can't find the container with id aea624f5f6e0c5bda3a20bba6199b65d259e8c07859d436a60e30e827c95ba57 Dec 09 09:01:02 crc kubenswrapper[4786]: W1209 09:01:02.665058 4786 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2bd616d0_3367_48bb_94a5_a22302102b89.slice/crio-6d7f4c30a2273107c532988a728c7cd5dac2ec47bb694ca753eaf62110026a50 WatchSource:0}: Error finding container 6d7f4c30a2273107c532988a728c7cd5dac2ec47bb694ca753eaf62110026a50: Status 404 returned error can't find the container with id 6d7f4c30a2273107c532988a728c7cd5dac2ec47bb694ca753eaf62110026a50 Dec 09 09:01:02 crc kubenswrapper[4786]: E1209 09:01:02.688340 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:debe5d6d29a007374b270b0e114e69b2136eee61dabab8576baf4010c951edb9,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9mv7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-79d658b66d-rrlrt_openstack-operators(e551e183-3965-40da-88e6-bbbcd6e3cbe5): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 09 09:01:02 crc kubenswrapper[4786]: E1209 09:01:02.689908 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:f076b8d9e85881d9c3cb5272b13db7f5e05d2e9da884c17b677a844112831907,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xv6lk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-8f6687c44-lfcn5_openstack-operators(c0263a18-de54-4c70-9ef7-508d86abed06): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 09 09:01:02 crc kubenswrapper[4786]: E1209 09:01:02.690931 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:2c4fe20e044dd8ea1f60f2f3f5e3844d932b4b79439835bd8771c73f16b38312,Command:[/manager],Args:[--health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v55jw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
mariadb-operator-controller-manager-64d7c556cd-qqwdd_openstack-operators(061bb0fd-451d-4d15-b979-a6ea9b833fb1): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 09 09:01:02 crc kubenswrapper[4786]: E1209 09:01:02.700786 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xpvjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-5f97d8c699-744dg_openstack-operators(2d179ee0-ed61-44f8-80e8-622ee7ed3876): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 09 09:01:02 crc kubenswrapper[4786]: E1209 09:01:02.704382 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-744dg" podUID="2d179ee0-ed61-44f8-80e8-622ee7ed3876" Dec 09 09:01:02 crc kubenswrapper[4786]: W1209 09:01:02.712696 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc12be72a_ac87_4e8f_a061_b68b3f5cb115.slice/crio-38ef6f70fa7dc46ef48b69050aaa5038671baaa069904f60e647e895f628b9e7 WatchSource:0}: Error finding container 38ef6f70fa7dc46ef48b69050aaa5038671baaa069904f60e647e895f628b9e7: Status 404 returned error can't find the container with id 38ef6f70fa7dc46ef48b69050aaa5038671baaa069904f60e647e895f628b9e7 Dec 09 09:01:02 crc kubenswrapper[4786]: E1209 09:01:02.715153 
4786 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:1463c43243c75f56609cbae6bee2f86d411107181775721cb097cbd22fcae1d1,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8l72r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-58879495c-glr8m_openstack-operators(62fae6d8-3c6a-403c-9cc6-463e41a0bbe7): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 09 09:01:02 crc kubenswrapper[4786]: E1209 09:01:02.715719 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:2c837009de6475bc22534827c03df6d8649277b71f1c30de2087b6c52aafb326,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jghz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-5b67cfc8fb-pwfxf_openstack-operators(2bc2193d-47f3-470a-a773-db2124fc8351): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 09 09:01:02 crc kubenswrapper[4786]: E1209 09:01:02.715754 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:66928f0eae5206f671ac7b21f79953e37009c54187d768dc6e03fe0a3d202b3b,Command:[/manager],Args:[--health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-evaluator:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-notifier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT,Value:registry.redhat.io/ubi9/httpd-24:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATE
D_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/mysqld-exporter:v0.15.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/sg-core:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-backup:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-volume:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_API_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CLOUDKITTY_PROC_IMAGE_URL_DEFAULT,Value:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-processor:current,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-backend-bind9:current-pod
ified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-mdns:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-producer:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-unbound:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-frr:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-iscsid:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT,Value:quay.io/sustainable_computing_io/kepler:release-0.7.12,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cron:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-multipathd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-ce
ntos9/openstack-neutron-ovn-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/node-exporter:v1.5.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/navidys/prometheus-podman-exporter:v1.10.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api-cfn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-redis:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-conductor:current-podified,ValueF
rom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PXE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ironic-python-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT,Value:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-share:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MARIADB_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-netutils:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IM
AGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-novncproxy:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-health-manager:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-housekeeping:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rsyslog:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-must-gather:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/edpm-hardened-uefi:current-podified,
ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_OVS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-account:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-container:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-object:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-api:current-
podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-applier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_DECISION_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-decision-engine:current-podified,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6mwxg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-baremetal-operator-controller-manager-77868f484-qkq7f_openstack-operators(ac272f60-c2a5-41a3-a48b-e499e7717667): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 09 09:01:02 crc kubenswrapper[4786]: E1209 09:01:02.753332 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.200:5001/openstack-k8s-operators/watcher-operator:8bc1d666785d4af8d6dd7e98e7f4704a89f18bd4,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5d86d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5d7f5df9d6-kwmgc_openstack-operators(c12be72a-ac87-4e8f-a061-b68b3f5cb115): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 09 09:01:03 crc kubenswrapper[4786]: E1209 09:01:03.090327 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-qqwdd" podUID="061bb0fd-451d-4d15-b979-a6ea9b833fb1" Dec 09 09:01:03 crc 
kubenswrapper[4786]: E1209 09:01:03.090605 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/nova-operator-controller-manager-79d658b66d-rrlrt" podUID="e551e183-3965-40da-88e6-bbbcd6e3cbe5" Dec 09 09:01:03 crc kubenswrapper[4786]: E1209 09:01:03.094310 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-8f6687c44-lfcn5" podUID="c0263a18-de54-4c70-9ef7-508d86abed06" Dec 09 09:01:03 crc kubenswrapper[4786]: E1209 09:01:03.136716 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-pwfxf" podUID="2bc2193d-47f3-470a-a773-db2124fc8351" Dec 09 09:01:03 crc kubenswrapper[4786]: I1209 09:01:03.156196 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-bb86466d8-m6pzg" event={"ID":"7229805c-3f98-437c-a3fe-b4031a2b7fa6","Type":"ContainerStarted","Data":"629b6b0d413d30b196c644091ef2e6b57e4643f1ea87493440e0f4933bbb9000"} Dec 09 09:01:03 crc kubenswrapper[4786]: I1209 09:01:03.158316 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-8f6687c44-lfcn5" event={"ID":"c0263a18-de54-4c70-9ef7-508d86abed06","Type":"ContainerStarted","Data":"f96971ac1e3f5daf8dbc8a2a42bea7d53931adbfbcf6ce1b06a80b3af7585fc9"} Dec 09 09:01:03 crc kubenswrapper[4786]: I1209 09:01:03.158347 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-8f6687c44-lfcn5" 
event={"ID":"c0263a18-de54-4c70-9ef7-508d86abed06","Type":"ContainerStarted","Data":"26d8184e4bf4252d9de16b3d43a2d6c6e56534fe0be2ed0f0d0629ae33c39a31"} Dec 09 09:01:03 crc kubenswrapper[4786]: E1209 09:01:03.160532 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:f076b8d9e85881d9c3cb5272b13db7f5e05d2e9da884c17b677a844112831907\\\"\"" pod="openstack-operators/swift-operator-controller-manager-8f6687c44-lfcn5" podUID="c0263a18-de54-4c70-9ef7-508d86abed06" Dec 09 09:01:03 crc kubenswrapper[4786]: I1209 09:01:03.161234 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-6dlzn" event={"ID":"2bd616d0-3367-48bb-94a5-a22302102b89","Type":"ContainerStarted","Data":"6d7f4c30a2273107c532988a728c7cd5dac2ec47bb694ca753eaf62110026a50"} Dec 09 09:01:03 crc kubenswrapper[4786]: E1209 09:01:03.161344 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-5d7f5df9d6-kwmgc" podUID="c12be72a-ac87-4e8f-a061-b68b3f5cb115" Dec 09 09:01:03 crc kubenswrapper[4786]: E1209 09:01:03.162819 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-qkq7f" podUID="ac272f60-c2a5-41a3-a48b-e499e7717667" Dec 09 09:01:03 crc kubenswrapper[4786]: I1209 09:01:03.163296 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-rphwz" 
event={"ID":"2ebe7b51-643e-4700-bf2f-cbe9546ae563","Type":"ContainerStarted","Data":"aea624f5f6e0c5bda3a20bba6199b65d259e8c07859d436a60e30e827c95ba57"} Dec 09 09:01:03 crc kubenswrapper[4786]: I1209 09:01:03.174842 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-54485f899-kk584" event={"ID":"fd1844c2-cd01-475a-b2fa-e49c9223b7b4","Type":"ContainerStarted","Data":"c8804f5fc54b89552cd38fcd276e7327483429f6fd7c6cd675e8e94293654954"} Dec 09 09:01:03 crc kubenswrapper[4786]: I1209 09:01:03.186751 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5d7f5df9d6-kwmgc" event={"ID":"c12be72a-ac87-4e8f-a061-b68b3f5cb115","Type":"ContainerStarted","Data":"38ef6f70fa7dc46ef48b69050aaa5038671baaa069904f60e647e895f628b9e7"} Dec 09 09:01:03 crc kubenswrapper[4786]: E1209 09:01:03.192138 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.200:5001/openstack-k8s-operators/watcher-operator:8bc1d666785d4af8d6dd7e98e7f4704a89f18bd4\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5d7f5df9d6-kwmgc" podUID="c12be72a-ac87-4e8f-a061-b68b3f5cb115" Dec 09 09:01:03 crc kubenswrapper[4786]: E1209 09:01:03.215299 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-58879495c-glr8m" podUID="62fae6d8-3c6a-403c-9cc6-463e41a0bbe7" Dec 09 09:01:03 crc kubenswrapper[4786]: I1209 09:01:03.240121 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-mft9w" event={"ID":"f52a27b2-d045-4a4b-8fe5-0160004d9a5f","Type":"ContainerStarted","Data":"c861fa51fbed6ffd3432ce2e2c0779e290519bd840284a6c0bbc15462bdebf9f"} Dec 09 09:01:03 
crc kubenswrapper[4786]: I1209 09:01:03.240180 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-867d87977b-4mg9x" event={"ID":"cbea13a0-662c-4a51-9cfa-a0904713fc0f","Type":"ContainerStarted","Data":"887bedb626317735a1b99dc719f2c7fc055c9d6143e31ff37ff37d42ddaa1bde"} Dec 09 09:01:03 crc kubenswrapper[4786]: I1209 09:01:03.240198 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-79cc9d59f5-lfmfw" event={"ID":"9b7f6902-b444-48d4-b2d2-7342e62c8811","Type":"ContainerStarted","Data":"780146b54c7851094274c3582e6e69bad3b77a419fdf4bb80df900371fc165ed"} Dec 09 09:01:03 crc kubenswrapper[4786]: I1209 09:01:03.240221 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-pwfxf" event={"ID":"2bc2193d-47f3-470a-a773-db2124fc8351","Type":"ContainerStarted","Data":"b75d0beea0183718d414c6a7add95b7fa9d37f9c3cb665b5093d8e300263af28"} Dec 09 09:01:03 crc kubenswrapper[4786]: I1209 09:01:03.240233 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-pwfxf" event={"ID":"2bc2193d-47f3-470a-a773-db2124fc8351","Type":"ContainerStarted","Data":"bfd9519debe4ad33b3b1a35a33d92ab874d173c88a34603326f95182e36b5294"} Dec 09 09:01:03 crc kubenswrapper[4786]: I1209 09:01:03.254566 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-58879495c-glr8m" event={"ID":"62fae6d8-3c6a-403c-9cc6-463e41a0bbe7","Type":"ContainerStarted","Data":"b4a2b86bb28a359bfc719c0d93452d0c0c13c07cb2722564fcd3b43aecad879e"} Dec 09 09:01:03 crc kubenswrapper[4786]: I1209 09:01:03.263237 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-9969bcdf-xb28j" 
event={"ID":"672e5a98-1fd6-4667-9a55-6a84ea13d77c","Type":"ContainerStarted","Data":"fbf95e43641a5ac24d5d68c9dd4792c51e6c5b5c1efafc4265ca47e97d5ead4c"} Dec 09 09:01:03 crc kubenswrapper[4786]: I1209 09:01:03.263288 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-9969bcdf-xb28j" event={"ID":"672e5a98-1fd6-4667-9a55-6a84ea13d77c","Type":"ContainerStarted","Data":"36bdeef612dfc720abc941422d072a638e2a145931eccb3bc7e5a51c2ae06a4f"} Dec 09 09:01:03 crc kubenswrapper[4786]: I1209 09:01:03.267404 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-qqwdd" event={"ID":"061bb0fd-451d-4d15-b979-a6ea9b833fb1","Type":"ContainerStarted","Data":"d2e58237c132d799bd1f7c2ebac2856537fd6ced9fc44f52b3030a0a866cc22f"} Dec 09 09:01:03 crc kubenswrapper[4786]: I1209 09:01:03.267466 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-qqwdd" event={"ID":"061bb0fd-451d-4d15-b979-a6ea9b833fb1","Type":"ContainerStarted","Data":"be72488d35ee02abf82023c4661d63538605e4eed7f4b6672fc43a43a4fbfc36"} Dec 09 09:01:03 crc kubenswrapper[4786]: I1209 09:01:03.269214 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5cbc8c7f96-9qxcn" event={"ID":"12b96437-95ee-4267-8eb2-569b9a93ef8d","Type":"ContainerStarted","Data":"c579e9a8526bfdeb33a8fe78022d5a2dbb91688dbbe0d8ae74113ae05a76b8f1"} Dec 09 09:01:03 crc kubenswrapper[4786]: E1209 09:01:03.271506 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:2c837009de6475bc22534827c03df6d8649277b71f1c30de2087b6c52aafb326\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-pwfxf" 
podUID="2bc2193d-47f3-470a-a773-db2124fc8351" Dec 09 09:01:03 crc kubenswrapper[4786]: E1209 09:01:03.271522 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:1463c43243c75f56609cbae6bee2f86d411107181775721cb097cbd22fcae1d1\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-58879495c-glr8m" podUID="62fae6d8-3c6a-403c-9cc6-463e41a0bbe7" Dec 09 09:01:03 crc kubenswrapper[4786]: E1209 09:01:03.271575 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:2c4fe20e044dd8ea1f60f2f3f5e3844d932b4b79439835bd8771c73f16b38312\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-qqwdd" podUID="061bb0fd-451d-4d15-b979-a6ea9b833fb1" Dec 09 09:01:03 crc kubenswrapper[4786]: I1209 09:01:03.273798 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-d5fb87cb8-2ql48" event={"ID":"c117d831-6ff8-4e04-833a-242c22702cc3","Type":"ContainerStarted","Data":"cc134d578641e488294b65a241e171336dfebdffe4f87f5f60d87f75ea39456f"} Dec 09 09:01:03 crc kubenswrapper[4786]: I1209 09:01:03.291513 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-695797c565-h57cz" event={"ID":"2db3dcee-6f5b-487e-b425-ec7be9530815","Type":"ContainerStarted","Data":"76f6ef44c466eae1a0060cc160c01b87fb5bee4a17d3bbc050a83923790dd9b1"} Dec 09 09:01:03 crc kubenswrapper[4786]: I1209 09:01:03.301707 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79d658b66d-rrlrt" 
event={"ID":"e551e183-3965-40da-88e6-bbbcd6e3cbe5","Type":"ContainerStarted","Data":"7a57d8ca434338c1d4d9129cfc98e38087ec88b8a4d1d1268e851f4e799247c3"} Dec 09 09:01:03 crc kubenswrapper[4786]: I1209 09:01:03.301770 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-79d658b66d-rrlrt" event={"ID":"e551e183-3965-40da-88e6-bbbcd6e3cbe5","Type":"ContainerStarted","Data":"0c4b39052891b17e8f1df7c5106b5d2c1dc6dda4501ab1573e14d3da118145f5"} Dec 09 09:01:03 crc kubenswrapper[4786]: E1209 09:01:03.305858 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:debe5d6d29a007374b270b0e114e69b2136eee61dabab8576baf4010c951edb9\\\"\"" pod="openstack-operators/nova-operator-controller-manager-79d658b66d-rrlrt" podUID="e551e183-3965-40da-88e6-bbbcd6e3cbe5" Dec 09 09:01:03 crc kubenswrapper[4786]: I1209 09:01:03.350609 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-744dg" event={"ID":"2d179ee0-ed61-44f8-80e8-622ee7ed3876","Type":"ContainerStarted","Data":"566c97558209d789cc7f05e846a4ecae794f7173af255ca27189a5258b9cb9ea"} Dec 09 09:01:03 crc kubenswrapper[4786]: E1209 09:01:03.355321 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-744dg" podUID="2d179ee0-ed61-44f8-80e8-622ee7ed3876" Dec 09 09:01:03 crc kubenswrapper[4786]: I1209 09:01:03.364644 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-qkq7f" event={"ID":"ac272f60-c2a5-41a3-a48b-e499e7717667","Type":"ContainerStarted","Data":"6e495b38f02e77a0d4771e5fa1afeec3a8624e3f55a5977bf4d4e6bae62c0ed6"} Dec 09 09:01:03 crc kubenswrapper[4786]: E1209 09:01:03.367837 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:66928f0eae5206f671ac7b21f79953e37009c54187d768dc6e03fe0a3d202b3b\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-qkq7f" podUID="ac272f60-c2a5-41a3-a48b-e499e7717667" Dec 09 09:01:04 crc kubenswrapper[4786]: I1209 09:01:04.425686 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-9969bcdf-xb28j" event={"ID":"672e5a98-1fd6-4667-9a55-6a84ea13d77c","Type":"ContainerStarted","Data":"380407a25a98277c6e4327f523d2eefd328c5ffda77514ad53337927273d4d34"} Dec 09 09:01:04 crc kubenswrapper[4786]: I1209 09:01:04.425786 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-9969bcdf-xb28j" Dec 09 09:01:04 crc kubenswrapper[4786]: I1209 09:01:04.447175 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-qkq7f" event={"ID":"ac272f60-c2a5-41a3-a48b-e499e7717667","Type":"ContainerStarted","Data":"f2bdfeb502d51acce0c4cb67d5ba268d19da08a170562c502191a16852159352"} Dec 09 09:01:04 crc kubenswrapper[4786]: E1209 09:01:04.449610 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:66928f0eae5206f671ac7b21f79953e37009c54187d768dc6e03fe0a3d202b3b\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-qkq7f" podUID="ac272f60-c2a5-41a3-a48b-e499e7717667" Dec 09 09:01:04 crc kubenswrapper[4786]: I1209 09:01:04.470796 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-58879495c-glr8m" event={"ID":"62fae6d8-3c6a-403c-9cc6-463e41a0bbe7","Type":"ContainerStarted","Data":"262306a0f72bacb728ce33a221571c091fcc1a561969e90d5da40e3d12f96ff6"} Dec 09 09:01:04 crc kubenswrapper[4786]: E1209 09:01:04.472929 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:1463c43243c75f56609cbae6bee2f86d411107181775721cb097cbd22fcae1d1\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-58879495c-glr8m" podUID="62fae6d8-3c6a-403c-9cc6-463e41a0bbe7" Dec 09 09:01:04 crc kubenswrapper[4786]: I1209 09:01:04.486092 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-9969bcdf-xb28j" podStartSLOduration=6.486067148 podStartE2EDuration="6.486067148s" podCreationTimestamp="2025-12-09 09:00:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:01:04.477555465 +0000 UTC m=+1030.361176711" watchObservedRunningTime="2025-12-09 09:01:04.486067148 +0000 UTC m=+1030.369688374" Dec 09 09:01:04 crc kubenswrapper[4786]: I1209 09:01:04.496557 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5d7f5df9d6-kwmgc" 
event={"ID":"c12be72a-ac87-4e8f-a061-b68b3f5cb115","Type":"ContainerStarted","Data":"f175f97e8e2a8bca34ff79b83773dc133a172c080028336487f2d45126d1bb2e"} Dec 09 09:01:04 crc kubenswrapper[4786]: E1209 09:01:04.510402 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-744dg" podUID="2d179ee0-ed61-44f8-80e8-622ee7ed3876" Dec 09 09:01:04 crc kubenswrapper[4786]: E1209 09:01:04.510568 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.200:5001/openstack-k8s-operators/watcher-operator:8bc1d666785d4af8d6dd7e98e7f4704a89f18bd4\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5d7f5df9d6-kwmgc" podUID="c12be72a-ac87-4e8f-a061-b68b3f5cb115" Dec 09 09:01:04 crc kubenswrapper[4786]: E1209 09:01:04.510664 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:2c837009de6475bc22534827c03df6d8649277b71f1c30de2087b6c52aafb326\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-pwfxf" podUID="2bc2193d-47f3-470a-a773-db2124fc8351" Dec 09 09:01:04 crc kubenswrapper[4786]: E1209 09:01:04.510741 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:2c4fe20e044dd8ea1f60f2f3f5e3844d932b4b79439835bd8771c73f16b38312\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-qqwdd" 
podUID="061bb0fd-451d-4d15-b979-a6ea9b833fb1" Dec 09 09:01:04 crc kubenswrapper[4786]: E1209 09:01:04.510818 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:debe5d6d29a007374b270b0e114e69b2136eee61dabab8576baf4010c951edb9\\\"\"" pod="openstack-operators/nova-operator-controller-manager-79d658b66d-rrlrt" podUID="e551e183-3965-40da-88e6-bbbcd6e3cbe5" Dec 09 09:01:04 crc kubenswrapper[4786]: E1209 09:01:04.511395 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:f076b8d9e85881d9c3cb5272b13db7f5e05d2e9da884c17b677a844112831907\\\"\"" pod="openstack-operators/swift-operator-controller-manager-8f6687c44-lfcn5" podUID="c0263a18-de54-4c70-9ef7-508d86abed06" Dec 09 09:01:05 crc kubenswrapper[4786]: E1209 09:01:05.509529 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.200:5001/openstack-k8s-operators/watcher-operator:8bc1d666785d4af8d6dd7e98e7f4704a89f18bd4\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5d7f5df9d6-kwmgc" podUID="c12be72a-ac87-4e8f-a061-b68b3f5cb115" Dec 09 09:01:05 crc kubenswrapper[4786]: E1209 09:01:05.509598 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:66928f0eae5206f671ac7b21f79953e37009c54187d768dc6e03fe0a3d202b3b\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-qkq7f" podUID="ac272f60-c2a5-41a3-a48b-e499e7717667" Dec 09 09:01:05 crc kubenswrapper[4786]: E1209 09:01:05.509587 
4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:1463c43243c75f56609cbae6bee2f86d411107181775721cb097cbd22fcae1d1\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-58879495c-glr8m" podUID="62fae6d8-3c6a-403c-9cc6-463e41a0bbe7" Dec 09 09:01:10 crc kubenswrapper[4786]: I1209 09:01:10.550886 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-9969bcdf-xb28j" Dec 09 09:01:18 crc kubenswrapper[4786]: E1209 09:01:18.532449 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/infra-operator@sha256:e1a731922a2da70b224ce5396602a07cec2b4a79efe7bcdc17c5e4509d16b5e4" Dec 09 09:01:18 crc kubenswrapper[4786]: E1209 09:01:18.533142 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/infra-operator@sha256:e1a731922a2da70b224ce5396602a07cec2b4a79efe7bcdc17c5e4509d16b5e4,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{600 -3} {} 600m DecimalSI},memory: {{2147483648 0} {} 2Gi BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{536870912 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mgx79,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infra-operator-controller-manager-6c55d8d69b-w52pz_openstack-operators(d658b716-de31-47c0-a352-28f6260b0144): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 09:01:19 crc kubenswrapper[4786]: E1209 09:01:19.891788 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying 
config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:5245e851b4476baecd4173eca3e8669ac09ec69d36ad1ebc3a0f867713cbc14b" Dec 09 09:01:19 crc kubenswrapper[4786]: E1209 09:01:19.899777 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:5245e851b4476baecd4173eca3e8669ac09ec69d36ad1ebc3a0f867713cbc14b,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mtrjc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-d5fb87cb8-2ql48_openstack-operators(c117d831-6ff8-4e04-833a-242c22702cc3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 09:01:21 crc kubenswrapper[4786]: E1209 09:01:21.265005 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:4f799c74da2f1c864af24fcd5efd91ec64848972a95246eac6b5c6c4d71c1756" Dec 09 09:01:21 crc kubenswrapper[4786]: E1209 09:01:21.265236 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:4f799c74da2f1c864af24fcd5efd91ec64848972a95246eac6b5c6c4d71c1756,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 
--leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rm9sf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
keystone-operator-controller-manager-79cc9d59f5-lfmfw_openstack-operators(9b7f6902-b444-48d4-b2d2-7342e62c8811): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 09:01:22 crc kubenswrapper[4786]: E1209 09:01:22.766690 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:2811f492f5663ec8660767dcb699060691c10dd809b1bb5f3a1f6b803946a653" Dec 09 09:01:22 crc kubenswrapper[4786]: E1209 09:01:22.766911 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:2811f492f5663ec8660767dcb699060691c10dd809b1bb5f3a1f6b803946a653,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2gf45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-7d5d9fd47f-rphwz_openstack-operators(2ebe7b51-643e-4700-bf2f-cbe9546ae563): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 09:01:23 crc kubenswrapper[4786]: E1209 09:01:23.790377 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:8aaaf8bb0a81358ee196af922d534c9b3f6bb47b27f4283087f7e0254638a671" Dec 09 09:01:23 crc kubenswrapper[4786]: E1209 09:01:23.790701 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:8aaaf8bb0a81358ee196af922d534c9b3f6bb47b27f4283087f7e0254638a671,Command:[/manager],Args:[--health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jqncr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
designate-operator-controller-manager-6788cc6d75-6dlzn_openstack-operators(2bd616d0-3367-48bb-94a5-a22302102b89): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 09:01:24 crc kubenswrapper[4786]: I1209 09:01:24.989585 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 09:01:24 crc kubenswrapper[4786]: I1209 09:01:24.989677 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 09:01:24 crc kubenswrapper[4786]: I1209 09:01:24.989779 4786 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" Dec 09 09:01:24 crc kubenswrapper[4786]: I1209 09:01:24.990542 4786 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"23158cb54ec78bf37e1faebd995cbf384dcd1c26c5f2777f244e3c9e75de0774"} pod="openshift-machine-config-operator/machine-config-daemon-86k5n" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 09:01:24 crc kubenswrapper[4786]: I1209 09:01:24.990602 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" containerID="cri-o://23158cb54ec78bf37e1faebd995cbf384dcd1c26c5f2777f244e3c9e75de0774" 
gracePeriod=600 Dec 09 09:01:25 crc kubenswrapper[4786]: E1209 09:01:25.344410 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:1739eeeb2c05142ddf835739758ffd04ad06cad353125e2ceff687f237ecda57" Dec 09 09:01:25 crc kubenswrapper[4786]: E1209 09:01:25.344739 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:1739eeeb2c05142ddf835739758ffd04ad06cad353125e2ceff687f237ecda57,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qhrlp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-698d6fd7d6-mft9w_openstack-operators(f52a27b2-d045-4a4b-8fe5-0160004d9a5f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 09:01:25 crc kubenswrapper[4786]: E1209 09:01:25.941949 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:49180c7bd4f0071e43ae7044260a3a97c4aa34fcbcb2d0d4573df449765ed391" Dec 09 09:01:25 crc kubenswrapper[4786]: E1209 09:01:25.942178 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:49180c7bd4f0071e43ae7044260a3a97c4aa34fcbcb2d0d4573df449765ed391,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 
--leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-d2hbh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
test-operator-controller-manager-bb86466d8-m6pzg_openstack-operators(7229805c-3f98-437c-a3fe-b4031a2b7fa6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 09:01:26 crc kubenswrapper[4786]: I1209 09:01:26.141393 4786 generic.go:334] "Generic (PLEG): container finished" podID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerID="23158cb54ec78bf37e1faebd995cbf384dcd1c26c5f2777f244e3c9e75de0774" exitCode=0 Dec 09 09:01:26 crc kubenswrapper[4786]: I1209 09:01:26.141487 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" event={"ID":"60c502d4-7f9e-4d39-a197-fa70dc4a56d1","Type":"ContainerDied","Data":"23158cb54ec78bf37e1faebd995cbf384dcd1c26c5f2777f244e3c9e75de0774"} Dec 09 09:01:26 crc kubenswrapper[4786]: I1209 09:01:26.141541 4786 scope.go:117] "RemoveContainer" containerID="c1a37e915d2cb26d1e193f254aff1e4395db9084078cc5d3a08303be12c3b59b" Dec 09 09:01:29 crc kubenswrapper[4786]: E1209 09:01:29.760335 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:57d9cb0034a7d5c7a39410fcb619ade2010e6855344dc3a0bc2bfd98cdf345d8" Dec 09 09:01:29 crc kubenswrapper[4786]: E1209 09:01:29.761157 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:57d9cb0034a7d5c7a39410fcb619ade2010e6855344dc3a0bc2bfd98cdf345d8,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 
--leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xhj7j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
manila-operator-controller-manager-5cbc8c7f96-9qxcn_openstack-operators(12b96437-95ee-4267-8eb2-569b9a93ef8d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 09:01:34 crc kubenswrapper[4786]: E1209 09:01:34.354039 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:2c4fe20e044dd8ea1f60f2f3f5e3844d932b4b79439835bd8771c73f16b38312" Dec 09 09:01:34 crc kubenswrapper[4786]: E1209 09:01:34.355822 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:2c4fe20e044dd8ea1f60f2f3f5e3844d932b4b79439835bd8771c73f16b38312,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-v55jw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-64d7c556cd-qqwdd_openstack-operators(061bb0fd-451d-4d15-b979-a6ea9b833fb1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 09:01:34 crc kubenswrapper[4786]: E1209 09:01:34.357476 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-qqwdd" podUID="061bb0fd-451d-4d15-b979-a6ea9b833fb1" Dec 09 09:01:35 crc kubenswrapper[4786]: E1209 09:01:35.553035 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/swift-operator@sha256:f076b8d9e85881d9c3cb5272b13db7f5e05d2e9da884c17b677a844112831907" Dec 09 09:01:35 crc kubenswrapper[4786]: E1209 09:01:35.553567 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:f076b8d9e85881d9c3cb5272b13db7f5e05d2e9da884c17b677a844112831907,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xv6lk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-8f6687c44-lfcn5_openstack-operators(c0263a18-de54-4c70-9ef7-508d86abed06): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 09:01:35 crc kubenswrapper[4786]: E1209 09:01:35.555359 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-8f6687c44-lfcn5" podUID="c0263a18-de54-4c70-9ef7-508d86abed06" Dec 09 09:01:46 crc kubenswrapper[4786]: E1209 09:01:46.215408 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:2c4fe20e044dd8ea1f60f2f3f5e3844d932b4b79439835bd8771c73f16b38312\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-qqwdd" podUID="061bb0fd-451d-4d15-b979-a6ea9b833fb1" Dec 09 09:01:48 crc kubenswrapper[4786]: E1209 09:01:48.387917 4786 log.go:32] "PullImage from image service 
failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:f4b6baa2b8a661351cfc24fff5aacee5aa4198106618700cfa47ec3a75f88b31" Dec 09 09:01:48 crc kubenswrapper[4786]: E1209 09:01:48.388705 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:f4b6baa2b8a661351cfc24fff5aacee5aa4198106618700cfa47ec3a75f88b31,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4szfj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-85fbd69fcd-q9mt5_openstack-operators(ce1c8ba3-ded1-4b8e-81a6-fcbaa81de736): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 09:01:49 crc kubenswrapper[4786]: E1209 09:01:49.190123 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:f076b8d9e85881d9c3cb5272b13db7f5e05d2e9da884c17b677a844112831907\\\"\"" pod="openstack-operators/swift-operator-controller-manager-8f6687c44-lfcn5" podUID="c0263a18-de54-4c70-9ef7-508d86abed06" Dec 09 09:01:51 crc kubenswrapper[4786]: E1209 09:01:51.144618 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Dec 09 09:01:51 crc kubenswrapper[4786]: E1209 09:01:51.145105 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xpvjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-5f97d8c699-744dg_openstack-operators(2d179ee0-ed61-44f8-80e8-622ee7ed3876): ErrImagePull: rpc error: code = Canceled desc = copying config: 
context canceled" logger="UnhandledError" Dec 09 09:01:51 crc kubenswrapper[4786]: E1209 09:01:51.146510 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-744dg" podUID="2d179ee0-ed61-44f8-80e8-622ee7ed3876" Dec 09 09:01:51 crc kubenswrapper[4786]: E1209 09:01:51.815299 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.200:5001/openstack-k8s-operators/watcher-operator:8bc1d666785d4af8d6dd7e98e7f4704a89f18bd4" Dec 09 09:01:51 crc kubenswrapper[4786]: E1209 09:01:51.815403 4786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.200:5001/openstack-k8s-operators/watcher-operator:8bc1d666785d4af8d6dd7e98e7f4704a89f18bd4" Dec 09 09:01:51 crc kubenswrapper[4786]: E1209 09:01:51.815782 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.200:5001/openstack-k8s-operators/watcher-operator:8bc1d666785d4af8d6dd7e98e7f4704a89f18bd4,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5d86d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-5d7f5df9d6-kwmgc_openstack-operators(c12be72a-ac87-4e8f-a061-b68b3f5cb115): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 09:01:51 crc kubenswrapper[4786]: E1209 09:01:51.817000 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/watcher-operator-controller-manager-5d7f5df9d6-kwmgc" podUID="c12be72a-ac87-4e8f-a061-b68b3f5cb115" Dec 09 09:01:52 crc kubenswrapper[4786]: E1209 09:01:52.224917 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-6dlzn" podUID="2bd616d0-3367-48bb-94a5-a22302102b89" Dec 09 09:01:52 crc kubenswrapper[4786]: E1209 09:01:52.227786 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-79cc9d59f5-lfmfw" podUID="9b7f6902-b444-48d4-b2d2-7342e62c8811" Dec 09 09:01:52 crc kubenswrapper[4786]: E1209 09:01:52.231781 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-rphwz" podUID="2ebe7b51-643e-4700-bf2f-cbe9546ae563" Dec 09 09:01:52 crc kubenswrapper[4786]: E1209 09:01:52.231959 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/infra-operator-controller-manager-6c55d8d69b-w52pz" podUID="d658b716-de31-47c0-a352-28f6260b0144" Dec 09 09:01:52 crc kubenswrapper[4786]: E1209 09:01:52.341359 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/glance-operator-controller-manager-85fbd69fcd-q9mt5" podUID="ce1c8ba3-ded1-4b8e-81a6-fcbaa81de736" Dec 09 09:01:52 crc kubenswrapper[4786]: I1209 09:01:52.552962 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6c55d8d69b-w52pz" event={"ID":"d658b716-de31-47c0-a352-28f6260b0144","Type":"ContainerStarted","Data":"ac1fbefd19d55eb43d46883dbecb4b02bea0d9954b705ba72bb0ae4671570498"} Dec 09 09:01:52 crc kubenswrapper[4786]: I1209 09:01:52.560588 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-6dlzn" event={"ID":"2bd616d0-3367-48bb-94a5-a22302102b89","Type":"ContainerStarted","Data":"e23fc47701befbce41e5ece4c3456cc6ba815cf340b06a8ce8b1614caecae667"} Dec 09 09:01:52 crc kubenswrapper[4786]: I1209 09:01:52.569944 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-695797c565-h57cz" event={"ID":"2db3dcee-6f5b-487e-b425-ec7be9530815","Type":"ContainerStarted","Data":"dc0c9e532e492f2479ce47a2c0e82a9e4090b0034054cbd2132c1d881f3098c3"} Dec 09 09:01:52 crc kubenswrapper[4786]: I1209 09:01:52.587579 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" event={"ID":"60c502d4-7f9e-4d39-a197-fa70dc4a56d1","Type":"ContainerStarted","Data":"4cca8363f81421e3823d923dacff94c757cc32e0b6c3dae834013b3d2653acca"} Dec 09 09:01:52 crc kubenswrapper[4786]: I1209 09:01:52.594243 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-79cc9d59f5-lfmfw" event={"ID":"9b7f6902-b444-48d4-b2d2-7342e62c8811","Type":"ContainerStarted","Data":"7caf02a7b176e4b55cb31530beee990c4e76c184cfd97d095ed0516e167dd609"} Dec 09 09:01:52 crc kubenswrapper[4786]: I1209 09:01:52.603904 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/glance-operator-controller-manager-85fbd69fcd-q9mt5" event={"ID":"ce1c8ba3-ded1-4b8e-81a6-fcbaa81de736","Type":"ContainerStarted","Data":"2d6344b806594c4f82952415d123c68df5e634838ce1669920c2c28e036aac73"} Dec 09 09:01:52 crc kubenswrapper[4786]: E1209 09:01:52.604839 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:f4b6baa2b8a661351cfc24fff5aacee5aa4198106618700cfa47ec3a75f88b31\\\"\"" pod="openstack-operators/glance-operator-controller-manager-85fbd69fcd-q9mt5" podUID="ce1c8ba3-ded1-4b8e-81a6-fcbaa81de736" Dec 09 09:01:52 crc kubenswrapper[4786]: I1209 09:01:52.612828 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-rphwz" event={"ID":"2ebe7b51-643e-4700-bf2f-cbe9546ae563","Type":"ContainerStarted","Data":"c6564a63f2aa5809752265fec49a42242d61c2d66e0bbcbae7a1a8c01d30151f"} Dec 09 09:01:52 crc kubenswrapper[4786]: E1209 09:01:52.646150 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-mft9w" podUID="f52a27b2-d045-4a4b-8fe5-0160004d9a5f" Dec 09 09:01:52 crc kubenswrapper[4786]: I1209 09:01:52.652056 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-54485f899-kk584" event={"ID":"fd1844c2-cd01-475a-b2fa-e49c9223b7b4","Type":"ContainerStarted","Data":"ef8ea6059af9d8e1b7e08f67bfe8e7666db62fe88bc8e8297b930b793119c7ff"} Dec 09 09:01:52 crc kubenswrapper[4786]: I1209 09:01:52.678695 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-867d87977b-4mg9x" 
event={"ID":"cbea13a0-662c-4a51-9cfa-a0904713fc0f","Type":"ContainerStarted","Data":"afd4c18baee142ab165c683c9dca051fde24a756625bca52a1e33b574e10bb74"} Dec 09 09:01:52 crc kubenswrapper[4786]: E1209 09:01:52.907789 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-bb86466d8-m6pzg" podUID="7229805c-3f98-437c-a3fe-b4031a2b7fa6" Dec 09 09:01:52 crc kubenswrapper[4786]: E1209 09:01:52.911125 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-d5fb87cb8-2ql48" podUID="c117d831-6ff8-4e04-833a-242c22702cc3" Dec 09 09:01:52 crc kubenswrapper[4786]: E1209 09:01:52.917391 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-5cbc8c7f96-9qxcn" podUID="12b96437-95ee-4267-8eb2-569b9a93ef8d" Dec 09 09:01:53 crc kubenswrapper[4786]: I1209 09:01:53.798883 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-867d87977b-4mg9x" event={"ID":"cbea13a0-662c-4a51-9cfa-a0904713fc0f","Type":"ContainerStarted","Data":"58b4f04222852a10414f0bd27a7d7c36ae8673ea1ed33642b6321f7b97733c0a"} Dec 09 09:01:53 crc kubenswrapper[4786]: I1209 09:01:53.799438 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-867d87977b-4mg9x" Dec 09 09:01:54 crc kubenswrapper[4786]: I1209 09:01:54.121122 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/nova-operator-controller-manager-79d658b66d-rrlrt" event={"ID":"e551e183-3965-40da-88e6-bbbcd6e3cbe5","Type":"ContainerStarted","Data":"e7b7dda13b4f96b9815810aac59b53723927c694df52a75c92c548559872d2fc"} Dec 09 09:01:54 crc kubenswrapper[4786]: I1209 09:01:54.121673 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-79d658b66d-rrlrt" Dec 09 09:01:54 crc kubenswrapper[4786]: I1209 09:01:54.126499 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-58879495c-glr8m" event={"ID":"62fae6d8-3c6a-403c-9cc6-463e41a0bbe7","Type":"ContainerStarted","Data":"fba27386ef76a30de788d1cd91b09eecba53e0f19856b94a540a3f4f9f73b415"} Dec 09 09:01:54 crc kubenswrapper[4786]: I1209 09:01:54.126813 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-58879495c-glr8m" Dec 09 09:01:54 crc kubenswrapper[4786]: I1209 09:01:54.137148 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-qkq7f" event={"ID":"ac272f60-c2a5-41a3-a48b-e499e7717667","Type":"ContainerStarted","Data":"d5c2ec8074056a44f1f1528c4af431f10b1624e3afce9f8a26d6dc294304ab61"} Dec 09 09:01:54 crc kubenswrapper[4786]: I1209 09:01:54.137621 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-qkq7f" Dec 09 09:01:54 crc kubenswrapper[4786]: I1209 09:01:54.152226 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-867d87977b-4mg9x" podStartSLOduration=31.376012875 podStartE2EDuration="56.15220232s" podCreationTimestamp="2025-12-09 09:00:58 +0000 UTC" firstStartedPulling="2025-12-09 09:01:02.675833688 +0000 UTC m=+1028.559454914" 
lastFinishedPulling="2025-12-09 09:01:27.452023133 +0000 UTC m=+1053.335644359" observedRunningTime="2025-12-09 09:01:54.148311753 +0000 UTC m=+1080.031932979" watchObservedRunningTime="2025-12-09 09:01:54.15220232 +0000 UTC m=+1080.035823546" Dec 09 09:01:54 crc kubenswrapper[4786]: I1209 09:01:54.160108 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-748967c98-w7gzc" event={"ID":"405fe0da-3e24-42cd-b73d-9d0cfe700614","Type":"ContainerStarted","Data":"9afda1ac527544c22379e04805c3651df17bc5722558f4d7faecf5c412c08c14"} Dec 09 09:01:54 crc kubenswrapper[4786]: I1209 09:01:54.168846 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-695797c565-h57cz" event={"ID":"2db3dcee-6f5b-487e-b425-ec7be9530815","Type":"ContainerStarted","Data":"a910375fec267c2c3d30bef981d4f0ec128b4e83b4068ba8a9c380fab86af0da"} Dec 09 09:01:54 crc kubenswrapper[4786]: I1209 09:01:54.169644 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-695797c565-h57cz" Dec 09 09:01:54 crc kubenswrapper[4786]: I1209 09:01:54.177456 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5cbc8c7f96-9qxcn" event={"ID":"12b96437-95ee-4267-8eb2-569b9a93ef8d","Type":"ContainerStarted","Data":"b64927d9836ecdc19fc0e5c145d80d08c00ce8a587f5ec9f51cd5b1d973aa2b9"} Dec 09 09:01:54 crc kubenswrapper[4786]: I1209 09:01:54.193678 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-pldv5" event={"ID":"0ebdf904-eeaa-4d7b-8f51-10e721a91538","Type":"ContainerStarted","Data":"ae51aaeb90f72c1634c09c950bf208b3e70a73ba27eaa05b599b78ebb3260507"} Dec 09 09:01:54 crc kubenswrapper[4786]: I1209 09:01:54.193743 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-pldv5" event={"ID":"0ebdf904-eeaa-4d7b-8f51-10e721a91538","Type":"ContainerStarted","Data":"092b924f0299d31ae2825ae933711996604b074c442389b51527c3290ac565be"} Dec 09 09:01:54 crc kubenswrapper[4786]: I1209 09:01:54.193964 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-pldv5" Dec 09 09:01:54 crc kubenswrapper[4786]: I1209 09:01:54.206989 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-mft9w" event={"ID":"f52a27b2-d045-4a4b-8fe5-0160004d9a5f","Type":"ContainerStarted","Data":"c5b7e47811618eaed1d44d5b86899b21022537daef9ccf67fa49d6884f92d753"} Dec 09 09:01:54 crc kubenswrapper[4786]: I1209 09:01:54.216378 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-qkq7f" podStartSLOduration=7.286997723 podStartE2EDuration="56.216347928s" podCreationTimestamp="2025-12-09 09:00:58 +0000 UTC" firstStartedPulling="2025-12-09 09:01:02.715242796 +0000 UTC m=+1028.598864022" lastFinishedPulling="2025-12-09 09:01:51.644593001 +0000 UTC m=+1077.528214227" observedRunningTime="2025-12-09 09:01:54.199249022 +0000 UTC m=+1080.082870248" watchObservedRunningTime="2025-12-09 09:01:54.216347928 +0000 UTC m=+1080.099969154" Dec 09 09:01:54 crc kubenswrapper[4786]: I1209 09:01:54.219872 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-bb86466d8-m6pzg" event={"ID":"7229805c-3f98-437c-a3fe-b4031a2b7fa6","Type":"ContainerStarted","Data":"585cdc565953bae893519352a578a2a49446715866e1093b1b86d5cd785ced10"} Dec 09 09:01:54 crc kubenswrapper[4786]: I1209 09:01:54.227759 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/octavia-operator-controller-manager-d5fb87cb8-2ql48" event={"ID":"c117d831-6ff8-4e04-833a-242c22702cc3","Type":"ContainerStarted","Data":"54365351d4b1399747c68ec92704a22015376079f376c3fab4c73fc84d718a50"} Dec 09 09:01:54 crc kubenswrapper[4786]: I1209 09:01:54.240150 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-pwfxf" event={"ID":"2bc2193d-47f3-470a-a773-db2124fc8351","Type":"ContainerStarted","Data":"af04f1dc71a7340771c5915d58fed87e1b40493cc1c1cb773183f7e209855d03"} Dec 09 09:01:54 crc kubenswrapper[4786]: I1209 09:01:54.240475 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-pwfxf" Dec 09 09:01:54 crc kubenswrapper[4786]: I1209 09:01:54.245270 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-79d658b66d-rrlrt" podStartSLOduration=7.288759405 podStartE2EDuration="56.245246509s" podCreationTimestamp="2025-12-09 09:00:58 +0000 UTC" firstStartedPulling="2025-12-09 09:01:02.688164708 +0000 UTC m=+1028.571785934" lastFinishedPulling="2025-12-09 09:01:51.644651812 +0000 UTC m=+1077.528273038" observedRunningTime="2025-12-09 09:01:54.243354341 +0000 UTC m=+1080.126975567" watchObservedRunningTime="2025-12-09 09:01:54.245246509 +0000 UTC m=+1080.128867735" Dec 09 09:01:54 crc kubenswrapper[4786]: I1209 09:01:54.250828 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-54485f899-kk584" event={"ID":"fd1844c2-cd01-475a-b2fa-e49c9223b7b4","Type":"ContainerStarted","Data":"5851b1991adb7c8ebba96233bf73c1fd3e6b4702f325efede16e7714cdf8623e"} Dec 09 09:01:54 crc kubenswrapper[4786]: I1209 09:01:54.251270 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-54485f899-kk584" Dec 
09 09:01:54 crc kubenswrapper[4786]: E1209 09:01:54.255774 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:f4b6baa2b8a661351cfc24fff5aacee5aa4198106618700cfa47ec3a75f88b31\\\"\"" pod="openstack-operators/glance-operator-controller-manager-85fbd69fcd-q9mt5" podUID="ce1c8ba3-ded1-4b8e-81a6-fcbaa81de736" Dec 09 09:01:54 crc kubenswrapper[4786]: I1209 09:01:54.314002 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-58879495c-glr8m" podStartSLOduration=7.837086531 podStartE2EDuration="56.313969411s" podCreationTimestamp="2025-12-09 09:00:58 +0000 UTC" firstStartedPulling="2025-12-09 09:01:02.715022421 +0000 UTC m=+1028.598643647" lastFinishedPulling="2025-12-09 09:01:51.191905301 +0000 UTC m=+1077.075526527" observedRunningTime="2025-12-09 09:01:54.2726236 +0000 UTC m=+1080.156244826" watchObservedRunningTime="2025-12-09 09:01:54.313969411 +0000 UTC m=+1080.197590647" Dec 09 09:01:54 crc kubenswrapper[4786]: I1209 09:01:54.438129 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-pldv5" podStartSLOduration=29.133056823 podStartE2EDuration="56.438096805s" podCreationTimestamp="2025-12-09 09:00:58 +0000 UTC" firstStartedPulling="2025-12-09 09:01:01.438612367 +0000 UTC m=+1027.322233593" lastFinishedPulling="2025-12-09 09:01:28.743652349 +0000 UTC m=+1054.627273575" observedRunningTime="2025-12-09 09:01:54.375121686 +0000 UTC m=+1080.258742912" watchObservedRunningTime="2025-12-09 09:01:54.438096805 +0000 UTC m=+1080.321718031" Dec 09 09:01:54 crc kubenswrapper[4786]: I1209 09:01:54.575924 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/ironic-operator-controller-manager-54485f899-kk584" podStartSLOduration=30.491084251 podStartE2EDuration="56.575890708s" podCreationTimestamp="2025-12-09 09:00:58 +0000 UTC" firstStartedPulling="2025-12-09 09:01:02.659185561 +0000 UTC m=+1028.542806787" lastFinishedPulling="2025-12-09 09:01:28.743992018 +0000 UTC m=+1054.627613244" observedRunningTime="2025-12-09 09:01:54.524010295 +0000 UTC m=+1080.407631521" watchObservedRunningTime="2025-12-09 09:01:54.575890708 +0000 UTC m=+1080.459511934" Dec 09 09:01:54 crc kubenswrapper[4786]: I1209 09:01:54.584667 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-pwfxf" podStartSLOduration=7.676137571 podStartE2EDuration="56.584639726s" podCreationTimestamp="2025-12-09 09:00:58 +0000 UTC" firstStartedPulling="2025-12-09 09:01:02.715566745 +0000 UTC m=+1028.599187971" lastFinishedPulling="2025-12-09 09:01:51.6240689 +0000 UTC m=+1077.507690126" observedRunningTime="2025-12-09 09:01:54.572896014 +0000 UTC m=+1080.456517250" watchObservedRunningTime="2025-12-09 09:01:54.584639726 +0000 UTC m=+1080.468260952" Dec 09 09:01:54 crc kubenswrapper[4786]: I1209 09:01:54.712944 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-695797c565-h57cz" podStartSLOduration=30.644490207 podStartE2EDuration="56.712902332s" podCreationTimestamp="2025-12-09 09:00:58 +0000 UTC" firstStartedPulling="2025-12-09 09:01:02.675274995 +0000 UTC m=+1028.558896221" lastFinishedPulling="2025-12-09 09:01:28.74368712 +0000 UTC m=+1054.627308346" observedRunningTime="2025-12-09 09:01:54.708742709 +0000 UTC m=+1080.592363955" watchObservedRunningTime="2025-12-09 09:01:54.712902332 +0000 UTC m=+1080.596523558" Dec 09 09:01:55 crc kubenswrapper[4786]: I1209 09:01:55.261196 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-6dlzn" event={"ID":"2bd616d0-3367-48bb-94a5-a22302102b89","Type":"ContainerStarted","Data":"9f60cc5ba308e834d95e60d5e2faf0fd23f78cd4981fe340cd62ddad9cb4331a"} Dec 09 09:01:55 crc kubenswrapper[4786]: I1209 09:01:55.263100 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-6dlzn" Dec 09 09:01:55 crc kubenswrapper[4786]: I1209 09:01:55.278652 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-748967c98-w7gzc" event={"ID":"405fe0da-3e24-42cd-b73d-9d0cfe700614","Type":"ContainerStarted","Data":"6baefc36f45d612f6cf534de79593e8dbe43e96ed0e8ee6c0c535074bcf5acd0"} Dec 09 09:01:55 crc kubenswrapper[4786]: I1209 09:01:55.279545 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-748967c98-w7gzc" Dec 09 09:01:55 crc kubenswrapper[4786]: I1209 09:01:55.282230 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-79cc9d59f5-lfmfw" event={"ID":"9b7f6902-b444-48d4-b2d2-7342e62c8811","Type":"ContainerStarted","Data":"2a93a813f098807389d6cfedb63c05e4f49b900e294887e4087b8f9fea2a2b6f"} Dec 09 09:01:55 crc kubenswrapper[4786]: I1209 09:01:55.283020 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-79cc9d59f5-lfmfw" Dec 09 09:01:55 crc kubenswrapper[4786]: I1209 09:01:55.285749 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-rphwz" event={"ID":"2ebe7b51-643e-4700-bf2f-cbe9546ae563","Type":"ContainerStarted","Data":"2d072cf5795249f49972462b8b42e3de0e8f2619a41e4d83a48c28cf73b12513"} Dec 09 09:01:55 crc kubenswrapper[4786]: I1209 09:01:55.286221 4786 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-rphwz" Dec 09 09:01:55 crc kubenswrapper[4786]: I1209 09:01:55.290040 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-6c55d8d69b-w52pz" event={"ID":"d658b716-de31-47c0-a352-28f6260b0144","Type":"ContainerStarted","Data":"0c40a6eaa0def3d4094ec5894010d1e8c60253174b4f523e05ae04d5928f53a3"} Dec 09 09:01:55 crc kubenswrapper[4786]: I1209 09:01:55.290258 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-6c55d8d69b-w52pz" Dec 09 09:01:55 crc kubenswrapper[4786]: I1209 09:01:55.369330 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-rphwz" podStartSLOduration=6.564591124 podStartE2EDuration="57.36931393s" podCreationTimestamp="2025-12-09 09:00:58 +0000 UTC" firstStartedPulling="2025-12-09 09:01:02.662442513 +0000 UTC m=+1028.546063749" lastFinishedPulling="2025-12-09 09:01:53.467165329 +0000 UTC m=+1079.350786555" observedRunningTime="2025-12-09 09:01:55.368030848 +0000 UTC m=+1081.251652074" watchObservedRunningTime="2025-12-09 09:01:55.36931393 +0000 UTC m=+1081.252935156" Dec 09 09:01:55 crc kubenswrapper[4786]: I1209 09:01:55.373265 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-6dlzn" podStartSLOduration=6.869737649 podStartE2EDuration="57.373248507s" podCreationTimestamp="2025-12-09 09:00:58 +0000 UTC" firstStartedPulling="2025-12-09 09:01:02.675155832 +0000 UTC m=+1028.558777058" lastFinishedPulling="2025-12-09 09:01:53.17866669 +0000 UTC m=+1079.062287916" observedRunningTime="2025-12-09 09:01:55.319244362 +0000 UTC m=+1081.202865618" watchObservedRunningTime="2025-12-09 09:01:55.373248507 +0000 UTC 
m=+1081.256869733" Dec 09 09:01:55 crc kubenswrapper[4786]: I1209 09:01:55.408205 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-748967c98-w7gzc" podStartSLOduration=31.343043965 podStartE2EDuration="57.408181748s" podCreationTimestamp="2025-12-09 09:00:58 +0000 UTC" firstStartedPulling="2025-12-09 09:01:01.385850524 +0000 UTC m=+1027.269471750" lastFinishedPulling="2025-12-09 09:01:27.450988307 +0000 UTC m=+1053.334609533" observedRunningTime="2025-12-09 09:01:55.403242155 +0000 UTC m=+1081.286863381" watchObservedRunningTime="2025-12-09 09:01:55.408181748 +0000 UTC m=+1081.291802974" Dec 09 09:01:55 crc kubenswrapper[4786]: I1209 09:01:55.525753 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-6c55d8d69b-w52pz" podStartSLOduration=5.784382072 podStartE2EDuration="57.525728077s" podCreationTimestamp="2025-12-09 09:00:58 +0000 UTC" firstStartedPulling="2025-12-09 09:01:01.438619257 +0000 UTC m=+1027.322240493" lastFinishedPulling="2025-12-09 09:01:53.179965272 +0000 UTC m=+1079.063586498" observedRunningTime="2025-12-09 09:01:55.501568385 +0000 UTC m=+1081.385189611" watchObservedRunningTime="2025-12-09 09:01:55.525728077 +0000 UTC m=+1081.409349303" Dec 09 09:01:55 crc kubenswrapper[4786]: I1209 09:01:55.526362 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-79cc9d59f5-lfmfw" podStartSLOduration=6.714746885 podStartE2EDuration="57.526353213s" podCreationTimestamp="2025-12-09 09:00:58 +0000 UTC" firstStartedPulling="2025-12-09 09:01:02.658768891 +0000 UTC m=+1028.542390117" lastFinishedPulling="2025-12-09 09:01:53.470375219 +0000 UTC m=+1079.353996445" observedRunningTime="2025-12-09 09:01:55.450982015 +0000 UTC m=+1081.334603251" watchObservedRunningTime="2025-12-09 09:01:55.526353213 +0000 UTC m=+1081.409974439" 
Dec 09 09:01:56 crc kubenswrapper[4786]: I1209 09:01:56.307153 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-d5fb87cb8-2ql48" event={"ID":"c117d831-6ff8-4e04-833a-242c22702cc3","Type":"ContainerStarted","Data":"fc8b8d33f9f39936f56b138cbff271a241169679205015e744914ab872fdb88d"} Dec 09 09:01:56 crc kubenswrapper[4786]: I1209 09:01:56.308270 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-d5fb87cb8-2ql48" Dec 09 09:01:56 crc kubenswrapper[4786]: I1209 09:01:56.311242 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5cbc8c7f96-9qxcn" event={"ID":"12b96437-95ee-4267-8eb2-569b9a93ef8d","Type":"ContainerStarted","Data":"edb718a068b142da9517d707ad63599d7c8907b3cd168025b4607a09e427aa86"} Dec 09 09:01:56 crc kubenswrapper[4786]: I1209 09:01:56.311355 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-5cbc8c7f96-9qxcn" Dec 09 09:01:56 crc kubenswrapper[4786]: I1209 09:01:56.313772 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-mft9w" event={"ID":"f52a27b2-d045-4a4b-8fe5-0160004d9a5f","Type":"ContainerStarted","Data":"365cd671a818ab22dedc13228f750e2d1a60128bcf97b86603074bec35156998"} Dec 09 09:01:56 crc kubenswrapper[4786]: I1209 09:01:56.313922 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-mft9w" Dec 09 09:01:56 crc kubenswrapper[4786]: I1209 09:01:56.318233 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-bb86466d8-m6pzg" 
event={"ID":"7229805c-3f98-437c-a3fe-b4031a2b7fa6","Type":"ContainerStarted","Data":"faf95c9dded4e4aca727d62ef895d9515f0e65943aac7c3f3396e5546c10095a"} Dec 09 09:01:56 crc kubenswrapper[4786]: I1209 09:01:56.318305 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-bb86466d8-m6pzg" Dec 09 09:01:56 crc kubenswrapper[4786]: I1209 09:01:56.333502 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-d5fb87cb8-2ql48" podStartSLOduration=5.630654833 podStartE2EDuration="58.333481586s" podCreationTimestamp="2025-12-09 09:00:58 +0000 UTC" firstStartedPulling="2025-12-09 09:01:02.678727911 +0000 UTC m=+1028.562349137" lastFinishedPulling="2025-12-09 09:01:55.381554664 +0000 UTC m=+1081.265175890" observedRunningTime="2025-12-09 09:01:56.330467531 +0000 UTC m=+1082.214088767" watchObservedRunningTime="2025-12-09 09:01:56.333481586 +0000 UTC m=+1082.217102812" Dec 09 09:01:56 crc kubenswrapper[4786]: I1209 09:01:56.384380 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-5cbc8c7f96-9qxcn" podStartSLOduration=5.675585231 podStartE2EDuration="58.384348404s" podCreationTimestamp="2025-12-09 09:00:58 +0000 UTC" firstStartedPulling="2025-12-09 09:01:02.670626478 +0000 UTC m=+1028.554247704" lastFinishedPulling="2025-12-09 09:01:55.379389651 +0000 UTC m=+1081.263010877" observedRunningTime="2025-12-09 09:01:56.357850334 +0000 UTC m=+1082.241471590" watchObservedRunningTime="2025-12-09 09:01:56.384348404 +0000 UTC m=+1082.267969630" Dec 09 09:01:56 crc kubenswrapper[4786]: I1209 09:01:56.403758 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-bb86466d8-m6pzg" podStartSLOduration=5.699476916 podStartE2EDuration="58.403725186s" podCreationTimestamp="2025-12-09 09:00:58 
+0000 UTC" firstStartedPulling="2025-12-09 09:01:02.675836468 +0000 UTC m=+1028.559457704" lastFinishedPulling="2025-12-09 09:01:55.380084748 +0000 UTC m=+1081.263705974" observedRunningTime="2025-12-09 09:01:56.399223954 +0000 UTC m=+1082.282845200" watchObservedRunningTime="2025-12-09 09:01:56.403725186 +0000 UTC m=+1082.287346412" Dec 09 09:01:56 crc kubenswrapper[4786]: I1209 09:01:56.407722 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-mft9w" podStartSLOduration=5.681390962 podStartE2EDuration="58.407700545s" podCreationTimestamp="2025-12-09 09:00:58 +0000 UTC" firstStartedPulling="2025-12-09 09:01:02.654966145 +0000 UTC m=+1028.538587371" lastFinishedPulling="2025-12-09 09:01:55.381275728 +0000 UTC m=+1081.264896954" observedRunningTime="2025-12-09 09:01:56.378805946 +0000 UTC m=+1082.262427162" watchObservedRunningTime="2025-12-09 09:01:56.407700545 +0000 UTC m=+1082.291321781" Dec 09 09:01:57 crc kubenswrapper[4786]: I1209 09:01:57.357225 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-748967c98-w7gzc" Dec 09 09:01:58 crc kubenswrapper[4786]: I1209 09:01:58.369403 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-5bfbbb859d-pldv5" Dec 09 09:01:58 crc kubenswrapper[4786]: I1209 09:01:58.698691 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-54485f899-kk584" Dec 09 09:01:59 crc kubenswrapper[4786]: I1209 09:01:59.778160 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-58879495c-glr8m" Dec 09 09:01:59 crc kubenswrapper[4786]: I1209 09:01:59.875603 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/nova-operator-controller-manager-79d658b66d-rrlrt" Dec 09 09:01:59 crc kubenswrapper[4786]: I1209 09:01:59.962186 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-77868f484-qkq7f" Dec 09 09:01:59 crc kubenswrapper[4786]: I1209 09:01:59.990679 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-5b67cfc8fb-pwfxf" Dec 09 09:02:00 crc kubenswrapper[4786]: I1209 09:02:00.088070 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-867d87977b-4mg9x" Dec 09 09:02:00 crc kubenswrapper[4786]: I1209 09:02:00.117644 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-695797c565-h57cz" Dec 09 09:02:00 crc kubenswrapper[4786]: I1209 09:02:00.135324 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-bb86466d8-m6pzg" Dec 09 09:02:02 crc kubenswrapper[4786]: I1209 09:02:02.390028 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-qqwdd" event={"ID":"061bb0fd-451d-4d15-b979-a6ea9b833fb1","Type":"ContainerStarted","Data":"84ba299eee154619df646006772712a5811f1d9f818e5fbfb10a5af0807034ef"} Dec 09 09:02:02 crc kubenswrapper[4786]: I1209 09:02:02.390922 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-qqwdd" Dec 09 09:02:02 crc kubenswrapper[4786]: I1209 09:02:02.393340 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-8f6687c44-lfcn5" 
event={"ID":"c0263a18-de54-4c70-9ef7-508d86abed06","Type":"ContainerStarted","Data":"939c2450a564514466438e34e71c4c7b1cf85dfaa2a8372478a01011838eabeb"} Dec 09 09:02:02 crc kubenswrapper[4786]: I1209 09:02:02.393681 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-8f6687c44-lfcn5" Dec 09 09:02:02 crc kubenswrapper[4786]: I1209 09:02:02.424014 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-qqwdd" podStartSLOduration=5.341500019 podStartE2EDuration="1m4.423984659s" podCreationTimestamp="2025-12-09 09:00:58 +0000 UTC" firstStartedPulling="2025-12-09 09:01:02.690409964 +0000 UTC m=+1028.574031190" lastFinishedPulling="2025-12-09 09:02:01.772894604 +0000 UTC m=+1087.656515830" observedRunningTime="2025-12-09 09:02:02.417733423 +0000 UTC m=+1088.301354659" watchObservedRunningTime="2025-12-09 09:02:02.423984659 +0000 UTC m=+1088.307605885" Dec 09 09:02:02 crc kubenswrapper[4786]: I1209 09:02:02.434995 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-8f6687c44-lfcn5" podStartSLOduration=5.426463566 podStartE2EDuration="1m4.434974253s" podCreationTimestamp="2025-12-09 09:00:58 +0000 UTC" firstStartedPulling="2025-12-09 09:01:02.689731487 +0000 UTC m=+1028.573352713" lastFinishedPulling="2025-12-09 09:02:01.698242174 +0000 UTC m=+1087.581863400" observedRunningTime="2025-12-09 09:02:02.431574988 +0000 UTC m=+1088.315196224" watchObservedRunningTime="2025-12-09 09:02:02.434974253 +0000 UTC m=+1088.318595479" Dec 09 09:02:04 crc kubenswrapper[4786]: E1209 09:02:04.191675 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"38.102.83.200:5001/openstack-k8s-operators/watcher-operator:8bc1d666785d4af8d6dd7e98e7f4704a89f18bd4\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-5d7f5df9d6-kwmgc" podUID="c12be72a-ac87-4e8f-a061-b68b3f5cb115" Dec 09 09:02:05 crc kubenswrapper[4786]: E1209 09:02:05.195298 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-744dg" podUID="2d179ee0-ed61-44f8-80e8-622ee7ed3876" Dec 09 09:02:07 crc kubenswrapper[4786]: I1209 09:02:07.429026 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-85fbd69fcd-q9mt5" event={"ID":"ce1c8ba3-ded1-4b8e-81a6-fcbaa81de736","Type":"ContainerStarted","Data":"00615c760407ad1f755a88399cc1867f46ab046020b3c3c4e478ca5a83f17e8a"} Dec 09 09:02:07 crc kubenswrapper[4786]: I1209 09:02:07.429576 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-85fbd69fcd-q9mt5" Dec 09 09:02:07 crc kubenswrapper[4786]: I1209 09:02:07.445728 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-85fbd69fcd-q9mt5" podStartSLOduration=4.328502315 podStartE2EDuration="1m9.445704788s" podCreationTimestamp="2025-12-09 09:00:58 +0000 UTC" firstStartedPulling="2025-12-09 09:01:02.098249347 +0000 UTC m=+1027.981870573" lastFinishedPulling="2025-12-09 09:02:07.21545179 +0000 UTC m=+1093.099073046" observedRunningTime="2025-12-09 09:02:07.442651272 +0000 UTC m=+1093.326272498" watchObservedRunningTime="2025-12-09 09:02:07.445704788 +0000 UTC m=+1093.329326024" Dec 09 09:02:08 crc kubenswrapper[4786]: I1209 09:02:08.442330 
4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6788cc6d75-6dlzn" Dec 09 09:02:08 crc kubenswrapper[4786]: I1209 09:02:08.487274 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-698d6fd7d6-mft9w" Dec 09 09:02:08 crc kubenswrapper[4786]: I1209 09:02:08.545066 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-7d5d9fd47f-rphwz" Dec 09 09:02:08 crc kubenswrapper[4786]: I1209 09:02:08.840736 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-79cc9d59f5-lfmfw" Dec 09 09:02:08 crc kubenswrapper[4786]: I1209 09:02:08.879220 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-6c55d8d69b-w52pz" Dec 09 09:02:09 crc kubenswrapper[4786]: I1209 09:02:09.284103 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-64d7c556cd-qqwdd" Dec 09 09:02:09 crc kubenswrapper[4786]: I1209 09:02:09.333116 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-5cbc8c7f96-9qxcn" Dec 09 09:02:09 crc kubenswrapper[4786]: I1209 09:02:09.938887 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-d5fb87cb8-2ql48" Dec 09 09:02:10 crc kubenswrapper[4786]: I1209 09:02:10.105904 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-8f6687c44-lfcn5" Dec 09 09:02:18 crc kubenswrapper[4786]: I1209 09:02:18.473126 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/glance-operator-controller-manager-85fbd69fcd-q9mt5" Dec 09 09:02:19 crc kubenswrapper[4786]: I1209 09:02:19.530268 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-744dg" event={"ID":"2d179ee0-ed61-44f8-80e8-622ee7ed3876","Type":"ContainerStarted","Data":"d67ab062a9a2eced9f48566f78994d451c9df49256ba390fdb946889e72ba6d7"} Dec 09 09:02:19 crc kubenswrapper[4786]: I1209 09:02:19.548230 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-744dg" podStartSLOduration=4.5824869459999995 podStartE2EDuration="1m20.548209598s" podCreationTimestamp="2025-12-09 09:00:59 +0000 UTC" firstStartedPulling="2025-12-09 09:01:02.70063025 +0000 UTC m=+1028.584251466" lastFinishedPulling="2025-12-09 09:02:18.666352882 +0000 UTC m=+1104.549974118" observedRunningTime="2025-12-09 09:02:19.543295595 +0000 UTC m=+1105.426916831" watchObservedRunningTime="2025-12-09 09:02:19.548209598 +0000 UTC m=+1105.431830824" Dec 09 09:02:22 crc kubenswrapper[4786]: I1209 09:02:22.554903 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-5d7f5df9d6-kwmgc" event={"ID":"c12be72a-ac87-4e8f-a061-b68b3f5cb115","Type":"ContainerStarted","Data":"a516871860566700b823fd5abc9b1851bc32f07c6d0419c68c07f613c263c187"} Dec 09 09:02:22 crc kubenswrapper[4786]: I1209 09:02:22.555661 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-5d7f5df9d6-kwmgc" Dec 09 09:02:22 crc kubenswrapper[4786]: I1209 09:02:22.573001 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-5d7f5df9d6-kwmgc" podStartSLOduration=5.6144454889999995 podStartE2EDuration="1m24.572983383s" podCreationTimestamp="2025-12-09 
09:00:58 +0000 UTC" firstStartedPulling="2025-12-09 09:01:02.753084655 +0000 UTC m=+1028.636705891" lastFinishedPulling="2025-12-09 09:02:21.711622549 +0000 UTC m=+1107.595243785" observedRunningTime="2025-12-09 09:02:22.571840684 +0000 UTC m=+1108.455461910" watchObservedRunningTime="2025-12-09 09:02:22.572983383 +0000 UTC m=+1108.456604609" Dec 09 09:02:30 crc kubenswrapper[4786]: I1209 09:02:30.154817 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-5d7f5df9d6-kwmgc" Dec 09 09:02:51 crc kubenswrapper[4786]: I1209 09:02:51.170565 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8469f54845-pxjx2"] Dec 09 09:02:51 crc kubenswrapper[4786]: I1209 09:02:51.172448 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8469f54845-pxjx2" Dec 09 09:02:51 crc kubenswrapper[4786]: I1209 09:02:51.176928 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Dec 09 09:02:51 crc kubenswrapper[4786]: I1209 09:02:51.177531 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-fjvpm" Dec 09 09:02:51 crc kubenswrapper[4786]: I1209 09:02:51.177811 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Dec 09 09:02:51 crc kubenswrapper[4786]: I1209 09:02:51.178108 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Dec 09 09:02:51 crc kubenswrapper[4786]: I1209 09:02:51.183831 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8469f54845-pxjx2"] Dec 09 09:02:51 crc kubenswrapper[4786]: I1209 09:02:51.185569 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8845\" (UniqueName: 
\"kubernetes.io/projected/eabff6c7-3d99-4392-956b-2e687c890c08-kube-api-access-q8845\") pod \"dnsmasq-dns-8469f54845-pxjx2\" (UID: \"eabff6c7-3d99-4392-956b-2e687c890c08\") " pod="openstack/dnsmasq-dns-8469f54845-pxjx2" Dec 09 09:02:51 crc kubenswrapper[4786]: I1209 09:02:51.185799 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eabff6c7-3d99-4392-956b-2e687c890c08-config\") pod \"dnsmasq-dns-8469f54845-pxjx2\" (UID: \"eabff6c7-3d99-4392-956b-2e687c890c08\") " pod="openstack/dnsmasq-dns-8469f54845-pxjx2" Dec 09 09:02:51 crc kubenswrapper[4786]: I1209 09:02:51.204349 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6c45c59f45-x5hnh"] Dec 09 09:02:51 crc kubenswrapper[4786]: I1209 09:02:51.205731 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c45c59f45-x5hnh" Dec 09 09:02:51 crc kubenswrapper[4786]: I1209 09:02:51.215572 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Dec 09 09:02:51 crc kubenswrapper[4786]: I1209 09:02:51.229238 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c45c59f45-x5hnh"] Dec 09 09:02:51 crc kubenswrapper[4786]: I1209 09:02:51.286809 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eabff6c7-3d99-4392-956b-2e687c890c08-config\") pod \"dnsmasq-dns-8469f54845-pxjx2\" (UID: \"eabff6c7-3d99-4392-956b-2e687c890c08\") " pod="openstack/dnsmasq-dns-8469f54845-pxjx2" Dec 09 09:02:51 crc kubenswrapper[4786]: I1209 09:02:51.286928 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8845\" (UniqueName: \"kubernetes.io/projected/eabff6c7-3d99-4392-956b-2e687c890c08-kube-api-access-q8845\") pod \"dnsmasq-dns-8469f54845-pxjx2\" (UID: 
\"eabff6c7-3d99-4392-956b-2e687c890c08\") " pod="openstack/dnsmasq-dns-8469f54845-pxjx2" Dec 09 09:02:51 crc kubenswrapper[4786]: I1209 09:02:51.288546 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eabff6c7-3d99-4392-956b-2e687c890c08-config\") pod \"dnsmasq-dns-8469f54845-pxjx2\" (UID: \"eabff6c7-3d99-4392-956b-2e687c890c08\") " pod="openstack/dnsmasq-dns-8469f54845-pxjx2" Dec 09 09:02:51 crc kubenswrapper[4786]: I1209 09:02:51.313589 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8845\" (UniqueName: \"kubernetes.io/projected/eabff6c7-3d99-4392-956b-2e687c890c08-kube-api-access-q8845\") pod \"dnsmasq-dns-8469f54845-pxjx2\" (UID: \"eabff6c7-3d99-4392-956b-2e687c890c08\") " pod="openstack/dnsmasq-dns-8469f54845-pxjx2" Dec 09 09:02:51 crc kubenswrapper[4786]: I1209 09:02:51.388353 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d280b9a4-91b5-4d8e-9a9d-f11bb91566dd-config\") pod \"dnsmasq-dns-6c45c59f45-x5hnh\" (UID: \"d280b9a4-91b5-4d8e-9a9d-f11bb91566dd\") " pod="openstack/dnsmasq-dns-6c45c59f45-x5hnh" Dec 09 09:02:51 crc kubenswrapper[4786]: I1209 09:02:51.388593 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n45wx\" (UniqueName: \"kubernetes.io/projected/d280b9a4-91b5-4d8e-9a9d-f11bb91566dd-kube-api-access-n45wx\") pod \"dnsmasq-dns-6c45c59f45-x5hnh\" (UID: \"d280b9a4-91b5-4d8e-9a9d-f11bb91566dd\") " pod="openstack/dnsmasq-dns-6c45c59f45-x5hnh" Dec 09 09:02:51 crc kubenswrapper[4786]: I1209 09:02:51.388626 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d280b9a4-91b5-4d8e-9a9d-f11bb91566dd-dns-svc\") pod \"dnsmasq-dns-6c45c59f45-x5hnh\" (UID: 
\"d280b9a4-91b5-4d8e-9a9d-f11bb91566dd\") " pod="openstack/dnsmasq-dns-6c45c59f45-x5hnh" Dec 09 09:02:51 crc kubenswrapper[4786]: I1209 09:02:51.490359 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d280b9a4-91b5-4d8e-9a9d-f11bb91566dd-config\") pod \"dnsmasq-dns-6c45c59f45-x5hnh\" (UID: \"d280b9a4-91b5-4d8e-9a9d-f11bb91566dd\") " pod="openstack/dnsmasq-dns-6c45c59f45-x5hnh" Dec 09 09:02:51 crc kubenswrapper[4786]: I1209 09:02:51.490467 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n45wx\" (UniqueName: \"kubernetes.io/projected/d280b9a4-91b5-4d8e-9a9d-f11bb91566dd-kube-api-access-n45wx\") pod \"dnsmasq-dns-6c45c59f45-x5hnh\" (UID: \"d280b9a4-91b5-4d8e-9a9d-f11bb91566dd\") " pod="openstack/dnsmasq-dns-6c45c59f45-x5hnh" Dec 09 09:02:51 crc kubenswrapper[4786]: I1209 09:02:51.490489 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d280b9a4-91b5-4d8e-9a9d-f11bb91566dd-dns-svc\") pod \"dnsmasq-dns-6c45c59f45-x5hnh\" (UID: \"d280b9a4-91b5-4d8e-9a9d-f11bb91566dd\") " pod="openstack/dnsmasq-dns-6c45c59f45-x5hnh" Dec 09 09:02:51 crc kubenswrapper[4786]: I1209 09:02:51.491469 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d280b9a4-91b5-4d8e-9a9d-f11bb91566dd-dns-svc\") pod \"dnsmasq-dns-6c45c59f45-x5hnh\" (UID: \"d280b9a4-91b5-4d8e-9a9d-f11bb91566dd\") " pod="openstack/dnsmasq-dns-6c45c59f45-x5hnh" Dec 09 09:02:51 crc kubenswrapper[4786]: I1209 09:02:51.491719 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d280b9a4-91b5-4d8e-9a9d-f11bb91566dd-config\") pod \"dnsmasq-dns-6c45c59f45-x5hnh\" (UID: \"d280b9a4-91b5-4d8e-9a9d-f11bb91566dd\") " pod="openstack/dnsmasq-dns-6c45c59f45-x5hnh" Dec 09 09:02:51 crc 
kubenswrapper[4786]: I1209 09:02:51.497653 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8469f54845-pxjx2" Dec 09 09:02:51 crc kubenswrapper[4786]: I1209 09:02:51.530138 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n45wx\" (UniqueName: \"kubernetes.io/projected/d280b9a4-91b5-4d8e-9a9d-f11bb91566dd-kube-api-access-n45wx\") pod \"dnsmasq-dns-6c45c59f45-x5hnh\" (UID: \"d280b9a4-91b5-4d8e-9a9d-f11bb91566dd\") " pod="openstack/dnsmasq-dns-6c45c59f45-x5hnh" Dec 09 09:02:51 crc kubenswrapper[4786]: I1209 09:02:51.538590 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c45c59f45-x5hnh" Dec 09 09:02:52 crc kubenswrapper[4786]: I1209 09:02:52.397501 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8469f54845-pxjx2"] Dec 09 09:02:52 crc kubenswrapper[4786]: I1209 09:02:52.455252 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6c45c59f45-x5hnh"] Dec 09 09:02:52 crc kubenswrapper[4786]: I1209 09:02:52.848612 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8469f54845-pxjx2" event={"ID":"eabff6c7-3d99-4392-956b-2e687c890c08","Type":"ContainerStarted","Data":"486132097efa2f8dc6c7a9838e93e4d6707de40ec0eee9dd15eb738f02306967"} Dec 09 09:02:52 crc kubenswrapper[4786]: I1209 09:02:52.852174 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c45c59f45-x5hnh" event={"ID":"d280b9a4-91b5-4d8e-9a9d-f11bb91566dd","Type":"ContainerStarted","Data":"27978fec90c31aeaa0a1cf0b449e9a1faca49403be69b3e85ffba3cfc32eb900"} Dec 09 09:02:54 crc kubenswrapper[4786]: I1209 09:02:54.619257 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c45c59f45-x5hnh"] Dec 09 09:02:54 crc kubenswrapper[4786]: I1209 09:02:54.665497 4786 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-7d9f64f497-j4ztp"] Dec 09 09:02:54 crc kubenswrapper[4786]: I1209 09:02:54.667153 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d9f64f497-j4ztp" Dec 09 09:02:54 crc kubenswrapper[4786]: I1209 09:02:54.688911 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d9f64f497-j4ztp"] Dec 09 09:02:54 crc kubenswrapper[4786]: I1209 09:02:54.781407 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-962dc\" (UniqueName: \"kubernetes.io/projected/09918a13-9fef-4347-b2cf-e6d1aa9bb29a-kube-api-access-962dc\") pod \"dnsmasq-dns-7d9f64f497-j4ztp\" (UID: \"09918a13-9fef-4347-b2cf-e6d1aa9bb29a\") " pod="openstack/dnsmasq-dns-7d9f64f497-j4ztp" Dec 09 09:02:54 crc kubenswrapper[4786]: I1209 09:02:54.781556 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09918a13-9fef-4347-b2cf-e6d1aa9bb29a-dns-svc\") pod \"dnsmasq-dns-7d9f64f497-j4ztp\" (UID: \"09918a13-9fef-4347-b2cf-e6d1aa9bb29a\") " pod="openstack/dnsmasq-dns-7d9f64f497-j4ztp" Dec 09 09:02:54 crc kubenswrapper[4786]: I1209 09:02:54.781627 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09918a13-9fef-4347-b2cf-e6d1aa9bb29a-config\") pod \"dnsmasq-dns-7d9f64f497-j4ztp\" (UID: \"09918a13-9fef-4347-b2cf-e6d1aa9bb29a\") " pod="openstack/dnsmasq-dns-7d9f64f497-j4ztp" Dec 09 09:02:54 crc kubenswrapper[4786]: I1209 09:02:54.886382 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-962dc\" (UniqueName: \"kubernetes.io/projected/09918a13-9fef-4347-b2cf-e6d1aa9bb29a-kube-api-access-962dc\") pod \"dnsmasq-dns-7d9f64f497-j4ztp\" (UID: \"09918a13-9fef-4347-b2cf-e6d1aa9bb29a\") " 
pod="openstack/dnsmasq-dns-7d9f64f497-j4ztp" Dec 09 09:02:54 crc kubenswrapper[4786]: I1209 09:02:54.886595 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09918a13-9fef-4347-b2cf-e6d1aa9bb29a-dns-svc\") pod \"dnsmasq-dns-7d9f64f497-j4ztp\" (UID: \"09918a13-9fef-4347-b2cf-e6d1aa9bb29a\") " pod="openstack/dnsmasq-dns-7d9f64f497-j4ztp" Dec 09 09:02:54 crc kubenswrapper[4786]: I1209 09:02:54.886661 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09918a13-9fef-4347-b2cf-e6d1aa9bb29a-config\") pod \"dnsmasq-dns-7d9f64f497-j4ztp\" (UID: \"09918a13-9fef-4347-b2cf-e6d1aa9bb29a\") " pod="openstack/dnsmasq-dns-7d9f64f497-j4ztp" Dec 09 09:02:54 crc kubenswrapper[4786]: I1209 09:02:54.889780 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09918a13-9fef-4347-b2cf-e6d1aa9bb29a-config\") pod \"dnsmasq-dns-7d9f64f497-j4ztp\" (UID: \"09918a13-9fef-4347-b2cf-e6d1aa9bb29a\") " pod="openstack/dnsmasq-dns-7d9f64f497-j4ztp" Dec 09 09:02:54 crc kubenswrapper[4786]: I1209 09:02:54.895637 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09918a13-9fef-4347-b2cf-e6d1aa9bb29a-dns-svc\") pod \"dnsmasq-dns-7d9f64f497-j4ztp\" (UID: \"09918a13-9fef-4347-b2cf-e6d1aa9bb29a\") " pod="openstack/dnsmasq-dns-7d9f64f497-j4ztp" Dec 09 09:02:54 crc kubenswrapper[4786]: I1209 09:02:54.951230 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-962dc\" (UniqueName: \"kubernetes.io/projected/09918a13-9fef-4347-b2cf-e6d1aa9bb29a-kube-api-access-962dc\") pod \"dnsmasq-dns-7d9f64f497-j4ztp\" (UID: \"09918a13-9fef-4347-b2cf-e6d1aa9bb29a\") " pod="openstack/dnsmasq-dns-7d9f64f497-j4ztp" Dec 09 09:02:54 crc kubenswrapper[4786]: I1209 09:02:54.991193 4786 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d9f64f497-j4ztp" Dec 09 09:02:55 crc kubenswrapper[4786]: I1209 09:02:55.021941 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8469f54845-pxjx2"] Dec 09 09:02:55 crc kubenswrapper[4786]: I1209 09:02:55.075168 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5fccf8cfdc-lrx7d"] Dec 09 09:02:55 crc kubenswrapper[4786]: I1209 09:02:55.077689 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fccf8cfdc-lrx7d" Dec 09 09:02:55 crc kubenswrapper[4786]: I1209 09:02:55.091546 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fccf8cfdc-lrx7d"] Dec 09 09:02:55 crc kubenswrapper[4786]: I1209 09:02:55.191314 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef1296a6-369d-4b56-add2-08afba4b5efa-config\") pod \"dnsmasq-dns-5fccf8cfdc-lrx7d\" (UID: \"ef1296a6-369d-4b56-add2-08afba4b5efa\") " pod="openstack/dnsmasq-dns-5fccf8cfdc-lrx7d" Dec 09 09:02:55 crc kubenswrapper[4786]: I1209 09:02:55.191474 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef1296a6-369d-4b56-add2-08afba4b5efa-dns-svc\") pod \"dnsmasq-dns-5fccf8cfdc-lrx7d\" (UID: \"ef1296a6-369d-4b56-add2-08afba4b5efa\") " pod="openstack/dnsmasq-dns-5fccf8cfdc-lrx7d" Dec 09 09:02:55 crc kubenswrapper[4786]: I1209 09:02:55.191506 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq9sf\" (UniqueName: \"kubernetes.io/projected/ef1296a6-369d-4b56-add2-08afba4b5efa-kube-api-access-qq9sf\") pod \"dnsmasq-dns-5fccf8cfdc-lrx7d\" (UID: \"ef1296a6-369d-4b56-add2-08afba4b5efa\") " pod="openstack/dnsmasq-dns-5fccf8cfdc-lrx7d" Dec 09 
09:02:55 crc kubenswrapper[4786]: I1209 09:02:55.293660 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef1296a6-369d-4b56-add2-08afba4b5efa-dns-svc\") pod \"dnsmasq-dns-5fccf8cfdc-lrx7d\" (UID: \"ef1296a6-369d-4b56-add2-08afba4b5efa\") " pod="openstack/dnsmasq-dns-5fccf8cfdc-lrx7d" Dec 09 09:02:55 crc kubenswrapper[4786]: I1209 09:02:55.293735 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qq9sf\" (UniqueName: \"kubernetes.io/projected/ef1296a6-369d-4b56-add2-08afba4b5efa-kube-api-access-qq9sf\") pod \"dnsmasq-dns-5fccf8cfdc-lrx7d\" (UID: \"ef1296a6-369d-4b56-add2-08afba4b5efa\") " pod="openstack/dnsmasq-dns-5fccf8cfdc-lrx7d" Dec 09 09:02:55 crc kubenswrapper[4786]: I1209 09:02:55.293784 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef1296a6-369d-4b56-add2-08afba4b5efa-config\") pod \"dnsmasq-dns-5fccf8cfdc-lrx7d\" (UID: \"ef1296a6-369d-4b56-add2-08afba4b5efa\") " pod="openstack/dnsmasq-dns-5fccf8cfdc-lrx7d" Dec 09 09:02:55 crc kubenswrapper[4786]: I1209 09:02:55.294762 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef1296a6-369d-4b56-add2-08afba4b5efa-config\") pod \"dnsmasq-dns-5fccf8cfdc-lrx7d\" (UID: \"ef1296a6-369d-4b56-add2-08afba4b5efa\") " pod="openstack/dnsmasq-dns-5fccf8cfdc-lrx7d" Dec 09 09:02:55 crc kubenswrapper[4786]: I1209 09:02:55.297130 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef1296a6-369d-4b56-add2-08afba4b5efa-dns-svc\") pod \"dnsmasq-dns-5fccf8cfdc-lrx7d\" (UID: \"ef1296a6-369d-4b56-add2-08afba4b5efa\") " pod="openstack/dnsmasq-dns-5fccf8cfdc-lrx7d" Dec 09 09:02:55 crc kubenswrapper[4786]: I1209 09:02:55.338700 4786 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-qq9sf\" (UniqueName: \"kubernetes.io/projected/ef1296a6-369d-4b56-add2-08afba4b5efa-kube-api-access-qq9sf\") pod \"dnsmasq-dns-5fccf8cfdc-lrx7d\" (UID: \"ef1296a6-369d-4b56-add2-08afba4b5efa\") " pod="openstack/dnsmasq-dns-5fccf8cfdc-lrx7d" Dec 09 09:02:55 crc kubenswrapper[4786]: I1209 09:02:55.450529 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fccf8cfdc-lrx7d" Dec 09 09:02:55 crc kubenswrapper[4786]: I1209 09:02:55.580665 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fccf8cfdc-lrx7d"] Dec 09 09:02:55 crc kubenswrapper[4786]: I1209 09:02:55.612746 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5949c6db59-hgk6z"] Dec 09 09:02:55 crc kubenswrapper[4786]: I1209 09:02:55.614901 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5949c6db59-hgk6z" Dec 09 09:02:55 crc kubenswrapper[4786]: I1209 09:02:55.646360 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5949c6db59-hgk6z"] Dec 09 09:02:55 crc kubenswrapper[4786]: I1209 09:02:55.712999 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v78km\" (UniqueName: \"kubernetes.io/projected/0cdd4a6b-af46-4028-b5de-b67767db09fc-kube-api-access-v78km\") pod \"dnsmasq-dns-5949c6db59-hgk6z\" (UID: \"0cdd4a6b-af46-4028-b5de-b67767db09fc\") " pod="openstack/dnsmasq-dns-5949c6db59-hgk6z" Dec 09 09:02:55 crc kubenswrapper[4786]: I1209 09:02:55.713129 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0cdd4a6b-af46-4028-b5de-b67767db09fc-dns-svc\") pod \"dnsmasq-dns-5949c6db59-hgk6z\" (UID: \"0cdd4a6b-af46-4028-b5de-b67767db09fc\") " pod="openstack/dnsmasq-dns-5949c6db59-hgk6z" Dec 09 09:02:55 crc kubenswrapper[4786]: I1209 
09:02:55.713212 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0cdd4a6b-af46-4028-b5de-b67767db09fc-config\") pod \"dnsmasq-dns-5949c6db59-hgk6z\" (UID: \"0cdd4a6b-af46-4028-b5de-b67767db09fc\") " pod="openstack/dnsmasq-dns-5949c6db59-hgk6z" Dec 09 09:02:55 crc kubenswrapper[4786]: I1209 09:02:55.814781 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0cdd4a6b-af46-4028-b5de-b67767db09fc-dns-svc\") pod \"dnsmasq-dns-5949c6db59-hgk6z\" (UID: \"0cdd4a6b-af46-4028-b5de-b67767db09fc\") " pod="openstack/dnsmasq-dns-5949c6db59-hgk6z" Dec 09 09:02:55 crc kubenswrapper[4786]: I1209 09:02:55.815122 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0cdd4a6b-af46-4028-b5de-b67767db09fc-config\") pod \"dnsmasq-dns-5949c6db59-hgk6z\" (UID: \"0cdd4a6b-af46-4028-b5de-b67767db09fc\") " pod="openstack/dnsmasq-dns-5949c6db59-hgk6z" Dec 09 09:02:55 crc kubenswrapper[4786]: I1209 09:02:55.815199 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v78km\" (UniqueName: \"kubernetes.io/projected/0cdd4a6b-af46-4028-b5de-b67767db09fc-kube-api-access-v78km\") pod \"dnsmasq-dns-5949c6db59-hgk6z\" (UID: \"0cdd4a6b-af46-4028-b5de-b67767db09fc\") " pod="openstack/dnsmasq-dns-5949c6db59-hgk6z" Dec 09 09:02:55 crc kubenswrapper[4786]: I1209 09:02:55.816327 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0cdd4a6b-af46-4028-b5de-b67767db09fc-dns-svc\") pod \"dnsmasq-dns-5949c6db59-hgk6z\" (UID: \"0cdd4a6b-af46-4028-b5de-b67767db09fc\") " pod="openstack/dnsmasq-dns-5949c6db59-hgk6z" Dec 09 09:02:55 crc kubenswrapper[4786]: I1209 09:02:55.816888 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/0cdd4a6b-af46-4028-b5de-b67767db09fc-config\") pod \"dnsmasq-dns-5949c6db59-hgk6z\" (UID: \"0cdd4a6b-af46-4028-b5de-b67767db09fc\") " pod="openstack/dnsmasq-dns-5949c6db59-hgk6z" Dec 09 09:02:55 crc kubenswrapper[4786]: I1209 09:02:55.847215 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 09 09:02:55 crc kubenswrapper[4786]: I1209 09:02:55.859247 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 09 09:02:55 crc kubenswrapper[4786]: I1209 09:02:55.862588 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 09 09:02:55 crc kubenswrapper[4786]: I1209 09:02:55.862850 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 09 09:02:55 crc kubenswrapper[4786]: I1209 09:02:55.863029 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 09 09:02:55 crc kubenswrapper[4786]: I1209 09:02:55.863763 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v78km\" (UniqueName: \"kubernetes.io/projected/0cdd4a6b-af46-4028-b5de-b67767db09fc-kube-api-access-v78km\") pod \"dnsmasq-dns-5949c6db59-hgk6z\" (UID: \"0cdd4a6b-af46-4028-b5de-b67767db09fc\") " pod="openstack/dnsmasq-dns-5949c6db59-hgk6z" Dec 09 09:02:55 crc kubenswrapper[4786]: I1209 09:02:55.863809 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 09 09:02:55 crc kubenswrapper[4786]: I1209 09:02:55.864228 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 09 09:02:55 crc kubenswrapper[4786]: I1209 09:02:55.864390 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 09 09:02:55 crc kubenswrapper[4786]: I1209 
09:02:55.864545 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-kd77g" Dec 09 09:02:55 crc kubenswrapper[4786]: I1209 09:02:55.991270 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5949c6db59-hgk6z" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.011409 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.024258 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59\") " pod="openstack/rabbitmq-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.024308 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59\") " pod="openstack/rabbitmq-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.024356 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-config-data\") pod \"rabbitmq-server-0\" (UID: \"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59\") " pod="openstack/rabbitmq-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.024399 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59\") " pod="openstack/rabbitmq-server-0" Dec 09 09:02:56 crc 
kubenswrapper[4786]: I1209 09:02:56.024468 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59\") " pod="openstack/rabbitmq-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.024526 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r7v7\" (UniqueName: \"kubernetes.io/projected/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-kube-api-access-9r7v7\") pod \"rabbitmq-server-0\" (UID: \"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59\") " pod="openstack/rabbitmq-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.024561 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59\") " pod="openstack/rabbitmq-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.024589 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59\") " pod="openstack/rabbitmq-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.024615 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59\") " pod="openstack/rabbitmq-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 
09:02:56.024645 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59\") " pod="openstack/rabbitmq-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.024668 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59\") " pod="openstack/rabbitmq-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.028547 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d9f64f497-j4ztp"] Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.126724 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59\") " pod="openstack/rabbitmq-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.127221 4786 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.129550 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59\") " pod="openstack/rabbitmq-server-0" Dec 09 09:02:56 crc 
kubenswrapper[4786]: I1209 09:02:56.129777 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-config-data\") pod \"rabbitmq-server-0\" (UID: \"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59\") " pod="openstack/rabbitmq-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.129953 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59\") " pod="openstack/rabbitmq-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.130156 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59\") " pod="openstack/rabbitmq-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.130274 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9r7v7\" (UniqueName: \"kubernetes.io/projected/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-kube-api-access-9r7v7\") pod \"rabbitmq-server-0\" (UID: \"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59\") " pod="openstack/rabbitmq-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.130359 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59\") " pod="openstack/rabbitmq-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.130446 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" 
(UniqueName: \"kubernetes.io/empty-dir/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59\") " pod="openstack/rabbitmq-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.130474 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59\") " pod="openstack/rabbitmq-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.130518 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59\") " pod="openstack/rabbitmq-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.130539 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59\") " pod="openstack/rabbitmq-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.132252 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-config-data\") pod \"rabbitmq-server-0\" (UID: \"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59\") " pod="openstack/rabbitmq-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.132951 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59\") " 
pod="openstack/rabbitmq-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.134134 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59\") " pod="openstack/rabbitmq-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.136180 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-pod-info\") pod \"rabbitmq-server-0\" (UID: \"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59\") " pod="openstack/rabbitmq-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.136558 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59\") " pod="openstack/rabbitmq-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.145866 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-server-conf\") pod \"rabbitmq-server-0\" (UID: \"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59\") " pod="openstack/rabbitmq-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.164474 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59\") " pod="openstack/rabbitmq-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.175631 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59\") " pod="openstack/rabbitmq-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.176385 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59\") " pod="openstack/rabbitmq-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.265093 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9r7v7\" (UniqueName: \"kubernetes.io/projected/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-kube-api-access-9r7v7\") pod \"rabbitmq-server-0\" (UID: \"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59\") " pod="openstack/rabbitmq-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.265111 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59\") " pod="openstack/rabbitmq-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.280634 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.282344 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.295552 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.295794 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.295943 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.296105 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-b8rz8" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.296258 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.296483 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.296688 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.302912 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.338980 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/01efacfa-e002-4e0d-aa6b-91217baa22ca-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"01efacfa-e002-4e0d-aa6b-91217baa22ca\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.339622 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/01efacfa-e002-4e0d-aa6b-91217baa22ca-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"01efacfa-e002-4e0d-aa6b-91217baa22ca\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.339712 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/01efacfa-e002-4e0d-aa6b-91217baa22ca-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"01efacfa-e002-4e0d-aa6b-91217baa22ca\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.339739 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/01efacfa-e002-4e0d-aa6b-91217baa22ca-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"01efacfa-e002-4e0d-aa6b-91217baa22ca\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.339775 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbptq\" (UniqueName: \"kubernetes.io/projected/01efacfa-e002-4e0d-aa6b-91217baa22ca-kube-api-access-cbptq\") pod \"rabbitmq-cell1-server-0\" (UID: \"01efacfa-e002-4e0d-aa6b-91217baa22ca\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.339818 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/01efacfa-e002-4e0d-aa6b-91217baa22ca-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"01efacfa-e002-4e0d-aa6b-91217baa22ca\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.339851 4786 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/01efacfa-e002-4e0d-aa6b-91217baa22ca-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"01efacfa-e002-4e0d-aa6b-91217baa22ca\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.339887 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"01efacfa-e002-4e0d-aa6b-91217baa22ca\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.339909 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/01efacfa-e002-4e0d-aa6b-91217baa22ca-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"01efacfa-e002-4e0d-aa6b-91217baa22ca\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.339943 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/01efacfa-e002-4e0d-aa6b-91217baa22ca-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"01efacfa-e002-4e0d-aa6b-91217baa22ca\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.339961 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/01efacfa-e002-4e0d-aa6b-91217baa22ca-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"01efacfa-e002-4e0d-aa6b-91217baa22ca\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.451068 4786 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/01efacfa-e002-4e0d-aa6b-91217baa22ca-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"01efacfa-e002-4e0d-aa6b-91217baa22ca\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.451122 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/01efacfa-e002-4e0d-aa6b-91217baa22ca-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"01efacfa-e002-4e0d-aa6b-91217baa22ca\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.451157 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbptq\" (UniqueName: \"kubernetes.io/projected/01efacfa-e002-4e0d-aa6b-91217baa22ca-kube-api-access-cbptq\") pod \"rabbitmq-cell1-server-0\" (UID: \"01efacfa-e002-4e0d-aa6b-91217baa22ca\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.451200 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/01efacfa-e002-4e0d-aa6b-91217baa22ca-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"01efacfa-e002-4e0d-aa6b-91217baa22ca\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.451280 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/01efacfa-e002-4e0d-aa6b-91217baa22ca-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"01efacfa-e002-4e0d-aa6b-91217baa22ca\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.451317 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"01efacfa-e002-4e0d-aa6b-91217baa22ca\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.451347 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/01efacfa-e002-4e0d-aa6b-91217baa22ca-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"01efacfa-e002-4e0d-aa6b-91217baa22ca\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.451373 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/01efacfa-e002-4e0d-aa6b-91217baa22ca-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"01efacfa-e002-4e0d-aa6b-91217baa22ca\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.451398 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/01efacfa-e002-4e0d-aa6b-91217baa22ca-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"01efacfa-e002-4e0d-aa6b-91217baa22ca\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.451486 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/01efacfa-e002-4e0d-aa6b-91217baa22ca-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"01efacfa-e002-4e0d-aa6b-91217baa22ca\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.451567 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/01efacfa-e002-4e0d-aa6b-91217baa22ca-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"01efacfa-e002-4e0d-aa6b-91217baa22ca\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.452403 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/01efacfa-e002-4e0d-aa6b-91217baa22ca-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"01efacfa-e002-4e0d-aa6b-91217baa22ca\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.452730 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/01efacfa-e002-4e0d-aa6b-91217baa22ca-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"01efacfa-e002-4e0d-aa6b-91217baa22ca\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.452795 4786 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"01efacfa-e002-4e0d-aa6b-91217baa22ca\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.452963 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/01efacfa-e002-4e0d-aa6b-91217baa22ca-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"01efacfa-e002-4e0d-aa6b-91217baa22ca\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.453277 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/01efacfa-e002-4e0d-aa6b-91217baa22ca-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"01efacfa-e002-4e0d-aa6b-91217baa22ca\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:02:56 crc 
kubenswrapper[4786]: I1209 09:02:56.456746 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/01efacfa-e002-4e0d-aa6b-91217baa22ca-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"01efacfa-e002-4e0d-aa6b-91217baa22ca\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.456851 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/01efacfa-e002-4e0d-aa6b-91217baa22ca-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"01efacfa-e002-4e0d-aa6b-91217baa22ca\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.458052 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/01efacfa-e002-4e0d-aa6b-91217baa22ca-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"01efacfa-e002-4e0d-aa6b-91217baa22ca\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.459612 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/01efacfa-e002-4e0d-aa6b-91217baa22ca-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"01efacfa-e002-4e0d-aa6b-91217baa22ca\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.466980 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/01efacfa-e002-4e0d-aa6b-91217baa22ca-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"01efacfa-e002-4e0d-aa6b-91217baa22ca\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.517383 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.621289 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbptq\" (UniqueName: \"kubernetes.io/projected/01efacfa-e002-4e0d-aa6b-91217baa22ca-kube-api-access-cbptq\") pod \"rabbitmq-cell1-server-0\" (UID: \"01efacfa-e002-4e0d-aa6b-91217baa22ca\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.626733 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"01efacfa-e002-4e0d-aa6b-91217baa22ca\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.646776 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fccf8cfdc-lrx7d"] Dec 09 09:02:56 crc kubenswrapper[4786]: W1209 09:02:56.711360 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef1296a6_369d_4b56_add2_08afba4b5efa.slice/crio-23ed2c0a5e6c30925b1d72934f106578f6b9bb3063b0fd1cb76cad7b2fae255c WatchSource:0}: Error finding container 23ed2c0a5e6c30925b1d72934f106578f6b9bb3063b0fd1cb76cad7b2fae255c: Status 404 returned error can't find the container with id 23ed2c0a5e6c30925b1d72934f106578f6b9bb3063b0fd1cb76cad7b2fae255c Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.915252 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-notifications-server-0"] Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.918621 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-notifications-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.930994 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-config-data" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.931375 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-default-user" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.931472 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.931644 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-plugins-conf" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.931846 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-server-dockercfg-dqrqz" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.931867 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-notifications-svc" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.932010 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-erlang-cookie" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.932024 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-server-conf" Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.964184 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-notifications-server-0"] Dec 09 09:02:56 crc kubenswrapper[4786]: I1209 09:02:56.990362 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d9f64f497-j4ztp" 
event={"ID":"09918a13-9fef-4347-b2cf-e6d1aa9bb29a","Type":"ContainerStarted","Data":"d0b2eb4d65da30be5172599a9a5d7c7cc71f4bdffa610a582e53ee6004248477"} Dec 09 09:02:57 crc kubenswrapper[4786]: I1209 09:02:57.007790 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fccf8cfdc-lrx7d" event={"ID":"ef1296a6-369d-4b56-add2-08afba4b5efa","Type":"ContainerStarted","Data":"23ed2c0a5e6c30925b1d72934f106578f6b9bb3063b0fd1cb76cad7b2fae255c"} Dec 09 09:02:57 crc kubenswrapper[4786]: I1209 09:02:57.064488 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5949c6db59-hgk6z"] Dec 09 09:02:57 crc kubenswrapper[4786]: I1209 09:02:57.122072 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c5ae8f0-bfa8-4fe2-81c3-289021674179\") " pod="openstack/rabbitmq-notifications-server-0" Dec 09 09:02:57 crc kubenswrapper[4786]: I1209 09:02:57.122172 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6c5ae8f0-bfa8-4fe2-81c3-289021674179-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c5ae8f0-bfa8-4fe2-81c3-289021674179\") " pod="openstack/rabbitmq-notifications-server-0" Dec 09 09:02:57 crc kubenswrapper[4786]: I1209 09:02:57.122231 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6c5ae8f0-bfa8-4fe2-81c3-289021674179-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c5ae8f0-bfa8-4fe2-81c3-289021674179\") " pod="openstack/rabbitmq-notifications-server-0" Dec 09 09:02:57 crc kubenswrapper[4786]: I1209 09:02:57.122284 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-qdjkf\" (UniqueName: \"kubernetes.io/projected/6c5ae8f0-bfa8-4fe2-81c3-289021674179-kube-api-access-qdjkf\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c5ae8f0-bfa8-4fe2-81c3-289021674179\") " pod="openstack/rabbitmq-notifications-server-0" Dec 09 09:02:57 crc kubenswrapper[4786]: I1209 09:02:57.122313 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6c5ae8f0-bfa8-4fe2-81c3-289021674179-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c5ae8f0-bfa8-4fe2-81c3-289021674179\") " pod="openstack/rabbitmq-notifications-server-0" Dec 09 09:02:57 crc kubenswrapper[4786]: I1209 09:02:57.122379 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6c5ae8f0-bfa8-4fe2-81c3-289021674179-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c5ae8f0-bfa8-4fe2-81c3-289021674179\") " pod="openstack/rabbitmq-notifications-server-0" Dec 09 09:02:57 crc kubenswrapper[4786]: I1209 09:02:57.122438 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6c5ae8f0-bfa8-4fe2-81c3-289021674179-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c5ae8f0-bfa8-4fe2-81c3-289021674179\") " pod="openstack/rabbitmq-notifications-server-0" Dec 09 09:02:57 crc kubenswrapper[4786]: I1209 09:02:57.122669 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6c5ae8f0-bfa8-4fe2-81c3-289021674179-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c5ae8f0-bfa8-4fe2-81c3-289021674179\") " pod="openstack/rabbitmq-notifications-server-0" Dec 09 09:02:57 crc kubenswrapper[4786]: I1209 09:02:57.122705 4786 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6c5ae8f0-bfa8-4fe2-81c3-289021674179-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c5ae8f0-bfa8-4fe2-81c3-289021674179\") " pod="openstack/rabbitmq-notifications-server-0" Dec 09 09:02:57 crc kubenswrapper[4786]: I1209 09:02:57.122743 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6c5ae8f0-bfa8-4fe2-81c3-289021674179-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c5ae8f0-bfa8-4fe2-81c3-289021674179\") " pod="openstack/rabbitmq-notifications-server-0" Dec 09 09:02:57 crc kubenswrapper[4786]: I1209 09:02:57.122784 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6c5ae8f0-bfa8-4fe2-81c3-289021674179-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c5ae8f0-bfa8-4fe2-81c3-289021674179\") " pod="openstack/rabbitmq-notifications-server-0" Dec 09 09:02:57 crc kubenswrapper[4786]: I1209 09:02:57.224104 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6c5ae8f0-bfa8-4fe2-81c3-289021674179-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c5ae8f0-bfa8-4fe2-81c3-289021674179\") " pod="openstack/rabbitmq-notifications-server-0" Dec 09 09:02:57 crc kubenswrapper[4786]: I1209 09:02:57.224193 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6c5ae8f0-bfa8-4fe2-81c3-289021674179-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c5ae8f0-bfa8-4fe2-81c3-289021674179\") " pod="openstack/rabbitmq-notifications-server-0" Dec 09 09:02:57 crc 
kubenswrapper[4786]: I1209 09:02:57.224237 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6c5ae8f0-bfa8-4fe2-81c3-289021674179-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c5ae8f0-bfa8-4fe2-81c3-289021674179\") " pod="openstack/rabbitmq-notifications-server-0" Dec 09 09:02:57 crc kubenswrapper[4786]: I1209 09:02:57.224270 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6c5ae8f0-bfa8-4fe2-81c3-289021674179-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c5ae8f0-bfa8-4fe2-81c3-289021674179\") " pod="openstack/rabbitmq-notifications-server-0" Dec 09 09:02:57 crc kubenswrapper[4786]: I1209 09:02:57.224330 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6c5ae8f0-bfa8-4fe2-81c3-289021674179-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c5ae8f0-bfa8-4fe2-81c3-289021674179\") " pod="openstack/rabbitmq-notifications-server-0" Dec 09 09:02:57 crc kubenswrapper[4786]: I1209 09:02:57.224392 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c5ae8f0-bfa8-4fe2-81c3-289021674179\") " pod="openstack/rabbitmq-notifications-server-0" Dec 09 09:02:57 crc kubenswrapper[4786]: I1209 09:02:57.224439 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6c5ae8f0-bfa8-4fe2-81c3-289021674179-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c5ae8f0-bfa8-4fe2-81c3-289021674179\") " pod="openstack/rabbitmq-notifications-server-0" Dec 09 09:02:57 crc kubenswrapper[4786]: 
I1209 09:02:57.224488 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6c5ae8f0-bfa8-4fe2-81c3-289021674179-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c5ae8f0-bfa8-4fe2-81c3-289021674179\") " pod="openstack/rabbitmq-notifications-server-0" Dec 09 09:02:57 crc kubenswrapper[4786]: I1209 09:02:57.224540 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdjkf\" (UniqueName: \"kubernetes.io/projected/6c5ae8f0-bfa8-4fe2-81c3-289021674179-kube-api-access-qdjkf\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c5ae8f0-bfa8-4fe2-81c3-289021674179\") " pod="openstack/rabbitmq-notifications-server-0" Dec 09 09:02:57 crc kubenswrapper[4786]: I1209 09:02:57.224573 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6c5ae8f0-bfa8-4fe2-81c3-289021674179-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c5ae8f0-bfa8-4fe2-81c3-289021674179\") " pod="openstack/rabbitmq-notifications-server-0" Dec 09 09:02:57 crc kubenswrapper[4786]: I1209 09:02:57.224622 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6c5ae8f0-bfa8-4fe2-81c3-289021674179-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c5ae8f0-bfa8-4fe2-81c3-289021674179\") " pod="openstack/rabbitmq-notifications-server-0" Dec 09 09:02:57 crc kubenswrapper[4786]: I1209 09:02:57.226261 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/6c5ae8f0-bfa8-4fe2-81c3-289021674179-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c5ae8f0-bfa8-4fe2-81c3-289021674179\") " pod="openstack/rabbitmq-notifications-server-0" Dec 09 09:02:57 crc kubenswrapper[4786]: I1209 09:02:57.227136 4786 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6c5ae8f0-bfa8-4fe2-81c3-289021674179-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c5ae8f0-bfa8-4fe2-81c3-289021674179\") " pod="openstack/rabbitmq-notifications-server-0" Dec 09 09:02:57 crc kubenswrapper[4786]: I1209 09:02:57.235010 4786 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c5ae8f0-bfa8-4fe2-81c3-289021674179\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/rabbitmq-notifications-server-0" Dec 09 09:02:57 crc kubenswrapper[4786]: I1209 09:02:57.263940 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/6c5ae8f0-bfa8-4fe2-81c3-289021674179-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c5ae8f0-bfa8-4fe2-81c3-289021674179\") " pod="openstack/rabbitmq-notifications-server-0" Dec 09 09:02:57 crc kubenswrapper[4786]: I1209 09:02:57.266553 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/6c5ae8f0-bfa8-4fe2-81c3-289021674179-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c5ae8f0-bfa8-4fe2-81c3-289021674179\") " pod="openstack/rabbitmq-notifications-server-0" Dec 09 09:02:57 crc kubenswrapper[4786]: I1209 09:02:57.267892 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/6c5ae8f0-bfa8-4fe2-81c3-289021674179-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c5ae8f0-bfa8-4fe2-81c3-289021674179\") " pod="openstack/rabbitmq-notifications-server-0" Dec 09 09:02:57 crc kubenswrapper[4786]: I1209 09:02:57.293735 4786 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/6c5ae8f0-bfa8-4fe2-81c3-289021674179-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c5ae8f0-bfa8-4fe2-81c3-289021674179\") " pod="openstack/rabbitmq-notifications-server-0" Dec 09 09:02:57 crc kubenswrapper[4786]: I1209 09:02:57.293877 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/6c5ae8f0-bfa8-4fe2-81c3-289021674179-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c5ae8f0-bfa8-4fe2-81c3-289021674179\") " pod="openstack/rabbitmq-notifications-server-0" Dec 09 09:02:57 crc kubenswrapper[4786]: I1209 09:02:57.296480 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/6c5ae8f0-bfa8-4fe2-81c3-289021674179-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c5ae8f0-bfa8-4fe2-81c3-289021674179\") " pod="openstack/rabbitmq-notifications-server-0" Dec 09 09:02:57 crc kubenswrapper[4786]: I1209 09:02:57.317319 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdjkf\" (UniqueName: \"kubernetes.io/projected/6c5ae8f0-bfa8-4fe2-81c3-289021674179-kube-api-access-qdjkf\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c5ae8f0-bfa8-4fe2-81c3-289021674179\") " pod="openstack/rabbitmq-notifications-server-0" Dec 09 09:02:57 crc kubenswrapper[4786]: I1209 09:02:57.324965 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/6c5ae8f0-bfa8-4fe2-81c3-289021674179-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c5ae8f0-bfa8-4fe2-81c3-289021674179\") " pod="openstack/rabbitmq-notifications-server-0" Dec 09 09:02:57 crc kubenswrapper[4786]: I1209 09:02:57.335695 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"6c5ae8f0-bfa8-4fe2-81c3-289021674179\") " pod="openstack/rabbitmq-notifications-server-0" Dec 09 09:02:57 crc kubenswrapper[4786]: I1209 09:02:57.398356 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 09 09:02:57 crc kubenswrapper[4786]: I1209 09:02:57.468220 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-notifications-server-0" Dec 09 09:02:58 crc kubenswrapper[4786]: I1209 09:02:58.092954 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59","Type":"ContainerStarted","Data":"7c3cb46617b32de57a9bec1cd0b29a29b7a49e2e7057e77511d7d4466d165e2d"} Dec 09 09:02:58 crc kubenswrapper[4786]: I1209 09:02:58.097168 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Dec 09 09:02:58 crc kubenswrapper[4786]: I1209 09:02:58.099038 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5949c6db59-hgk6z" event={"ID":"0cdd4a6b-af46-4028-b5de-b67767db09fc","Type":"ContainerStarted","Data":"cd4ade2787a7261f97a0a58b1579c837bec0efc6856f597877a7ab5e424c0866"} Dec 09 09:02:58 crc kubenswrapper[4786]: I1209 09:02:58.099157 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Dec 09 09:02:58 crc kubenswrapper[4786]: I1209 09:02:58.105030 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Dec 09 09:02:58 crc kubenswrapper[4786]: I1209 09:02:58.105242 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 09 09:02:58 crc kubenswrapper[4786]: I1209 09:02:58.105533 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Dec 09 09:02:58 crc kubenswrapper[4786]: I1209 09:02:58.105722 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-ql787" Dec 09 09:02:58 crc kubenswrapper[4786]: I1209 09:02:58.126212 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 09 09:02:58 crc kubenswrapper[4786]: I1209 09:02:58.126216 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Dec 09 09:02:58 crc kubenswrapper[4786]: I1209 09:02:58.129072 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Dec 09 09:02:58 crc kubenswrapper[4786]: I1209 09:02:58.231437 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/673b3525-c496-4268-b9f9-c37f5175efdc-config-data-generated\") pod \"openstack-galera-0\" (UID: \"673b3525-c496-4268-b9f9-c37f5175efdc\") " pod="openstack/openstack-galera-0" Dec 09 09:02:58 crc kubenswrapper[4786]: I1209 09:02:58.231488 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/673b3525-c496-4268-b9f9-c37f5175efdc-kolla-config\") pod \"openstack-galera-0\" (UID: \"673b3525-c496-4268-b9f9-c37f5175efdc\") " pod="openstack/openstack-galera-0" 
Dec 09 09:02:58 crc kubenswrapper[4786]: I1209 09:02:58.231518 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/673b3525-c496-4268-b9f9-c37f5175efdc-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"673b3525-c496-4268-b9f9-c37f5175efdc\") " pod="openstack/openstack-galera-0" Dec 09 09:02:58 crc kubenswrapper[4786]: I1209 09:02:58.231683 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/673b3525-c496-4268-b9f9-c37f5175efdc-secrets\") pod \"openstack-galera-0\" (UID: \"673b3525-c496-4268-b9f9-c37f5175efdc\") " pod="openstack/openstack-galera-0" Dec 09 09:02:58 crc kubenswrapper[4786]: I1209 09:02:58.231733 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/673b3525-c496-4268-b9f9-c37f5175efdc-config-data-default\") pod \"openstack-galera-0\" (UID: \"673b3525-c496-4268-b9f9-c37f5175efdc\") " pod="openstack/openstack-galera-0" Dec 09 09:02:58 crc kubenswrapper[4786]: I1209 09:02:58.231775 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b85mr\" (UniqueName: \"kubernetes.io/projected/673b3525-c496-4268-b9f9-c37f5175efdc-kube-api-access-b85mr\") pod \"openstack-galera-0\" (UID: \"673b3525-c496-4268-b9f9-c37f5175efdc\") " pod="openstack/openstack-galera-0" Dec 09 09:02:58 crc kubenswrapper[4786]: I1209 09:02:58.232089 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"673b3525-c496-4268-b9f9-c37f5175efdc\") " pod="openstack/openstack-galera-0" Dec 09 09:02:58 crc kubenswrapper[4786]: I1209 09:02:58.232130 4786 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/673b3525-c496-4268-b9f9-c37f5175efdc-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"673b3525-c496-4268-b9f9-c37f5175efdc\") " pod="openstack/openstack-galera-0" Dec 09 09:02:58 crc kubenswrapper[4786]: I1209 09:02:58.232161 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/673b3525-c496-4268-b9f9-c37f5175efdc-operator-scripts\") pod \"openstack-galera-0\" (UID: \"673b3525-c496-4268-b9f9-c37f5175efdc\") " pod="openstack/openstack-galera-0" Dec 09 09:02:58 crc kubenswrapper[4786]: I1209 09:02:58.311991 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 09 09:02:58 crc kubenswrapper[4786]: I1209 09:02:58.333975 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"673b3525-c496-4268-b9f9-c37f5175efdc\") " pod="openstack/openstack-galera-0" Dec 09 09:02:58 crc kubenswrapper[4786]: I1209 09:02:58.334378 4786 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"673b3525-c496-4268-b9f9-c37f5175efdc\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/openstack-galera-0" Dec 09 09:02:58 crc kubenswrapper[4786]: I1209 09:02:58.334668 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/673b3525-c496-4268-b9f9-c37f5175efdc-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"673b3525-c496-4268-b9f9-c37f5175efdc\") " pod="openstack/openstack-galera-0" Dec 09 09:02:58 crc 
kubenswrapper[4786]: I1209 09:02:58.334735 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/673b3525-c496-4268-b9f9-c37f5175efdc-operator-scripts\") pod \"openstack-galera-0\" (UID: \"673b3525-c496-4268-b9f9-c37f5175efdc\") " pod="openstack/openstack-galera-0" Dec 09 09:02:58 crc kubenswrapper[4786]: I1209 09:02:58.334842 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/673b3525-c496-4268-b9f9-c37f5175efdc-config-data-generated\") pod \"openstack-galera-0\" (UID: \"673b3525-c496-4268-b9f9-c37f5175efdc\") " pod="openstack/openstack-galera-0" Dec 09 09:02:58 crc kubenswrapper[4786]: I1209 09:02:58.334868 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/673b3525-c496-4268-b9f9-c37f5175efdc-kolla-config\") pod \"openstack-galera-0\" (UID: \"673b3525-c496-4268-b9f9-c37f5175efdc\") " pod="openstack/openstack-galera-0" Dec 09 09:02:58 crc kubenswrapper[4786]: I1209 09:02:58.334934 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/673b3525-c496-4268-b9f9-c37f5175efdc-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"673b3525-c496-4268-b9f9-c37f5175efdc\") " pod="openstack/openstack-galera-0" Dec 09 09:02:58 crc kubenswrapper[4786]: I1209 09:02:58.335034 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/673b3525-c496-4268-b9f9-c37f5175efdc-secrets\") pod \"openstack-galera-0\" (UID: \"673b3525-c496-4268-b9f9-c37f5175efdc\") " pod="openstack/openstack-galera-0" Dec 09 09:02:58 crc kubenswrapper[4786]: I1209 09:02:58.335055 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" 
(UniqueName: \"kubernetes.io/configmap/673b3525-c496-4268-b9f9-c37f5175efdc-config-data-default\") pod \"openstack-galera-0\" (UID: \"673b3525-c496-4268-b9f9-c37f5175efdc\") " pod="openstack/openstack-galera-0" Dec 09 09:02:58 crc kubenswrapper[4786]: I1209 09:02:58.335092 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b85mr\" (UniqueName: \"kubernetes.io/projected/673b3525-c496-4268-b9f9-c37f5175efdc-kube-api-access-b85mr\") pod \"openstack-galera-0\" (UID: \"673b3525-c496-4268-b9f9-c37f5175efdc\") " pod="openstack/openstack-galera-0" Dec 09 09:02:58 crc kubenswrapper[4786]: I1209 09:02:58.335742 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/673b3525-c496-4268-b9f9-c37f5175efdc-config-data-generated\") pod \"openstack-galera-0\" (UID: \"673b3525-c496-4268-b9f9-c37f5175efdc\") " pod="openstack/openstack-galera-0" Dec 09 09:02:58 crc kubenswrapper[4786]: I1209 09:02:58.336390 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/673b3525-c496-4268-b9f9-c37f5175efdc-kolla-config\") pod \"openstack-galera-0\" (UID: \"673b3525-c496-4268-b9f9-c37f5175efdc\") " pod="openstack/openstack-galera-0" Dec 09 09:02:58 crc kubenswrapper[4786]: I1209 09:02:58.336987 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/673b3525-c496-4268-b9f9-c37f5175efdc-operator-scripts\") pod \"openstack-galera-0\" (UID: \"673b3525-c496-4268-b9f9-c37f5175efdc\") " pod="openstack/openstack-galera-0" Dec 09 09:02:58 crc kubenswrapper[4786]: I1209 09:02:58.337175 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/673b3525-c496-4268-b9f9-c37f5175efdc-config-data-default\") pod \"openstack-galera-0\" (UID: 
\"673b3525-c496-4268-b9f9-c37f5175efdc\") " pod="openstack/openstack-galera-0" Dec 09 09:02:58 crc kubenswrapper[4786]: I1209 09:02:58.356353 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/673b3525-c496-4268-b9f9-c37f5175efdc-secrets\") pod \"openstack-galera-0\" (UID: \"673b3525-c496-4268-b9f9-c37f5175efdc\") " pod="openstack/openstack-galera-0" Dec 09 09:02:58 crc kubenswrapper[4786]: I1209 09:02:58.356882 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/673b3525-c496-4268-b9f9-c37f5175efdc-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"673b3525-c496-4268-b9f9-c37f5175efdc\") " pod="openstack/openstack-galera-0" Dec 09 09:02:58 crc kubenswrapper[4786]: I1209 09:02:58.369261 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/673b3525-c496-4268-b9f9-c37f5175efdc-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"673b3525-c496-4268-b9f9-c37f5175efdc\") " pod="openstack/openstack-galera-0" Dec 09 09:02:58 crc kubenswrapper[4786]: I1209 09:02:58.377550 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b85mr\" (UniqueName: \"kubernetes.io/projected/673b3525-c496-4268-b9f9-c37f5175efdc-kube-api-access-b85mr\") pod \"openstack-galera-0\" (UID: \"673b3525-c496-4268-b9f9-c37f5175efdc\") " pod="openstack/openstack-galera-0" Dec 09 09:02:58 crc kubenswrapper[4786]: I1209 09:02:58.470284 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-0\" (UID: \"673b3525-c496-4268-b9f9-c37f5175efdc\") " pod="openstack/openstack-galera-0" Dec 09 09:02:58 crc kubenswrapper[4786]: I1209 09:02:58.821148 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Dec 09 09:02:59 crc kubenswrapper[4786]: I1209 09:02:59.079242 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-notifications-server-0"] Dec 09 09:02:59 crc kubenswrapper[4786]: W1209 09:02:59.160144 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c5ae8f0_bfa8_4fe2_81c3_289021674179.slice/crio-fcba4ca1374c0b00fb59836c62a1369ac675a85f7f1dda624844ccc22e06082a WatchSource:0}: Error finding container fcba4ca1374c0b00fb59836c62a1369ac675a85f7f1dda624844ccc22e06082a: Status 404 returned error can't find the container with id fcba4ca1374c0b00fb59836c62a1369ac675a85f7f1dda624844ccc22e06082a Dec 09 09:02:59 crc kubenswrapper[4786]: I1209 09:02:59.178331 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"01efacfa-e002-4e0d-aa6b-91217baa22ca","Type":"ContainerStarted","Data":"8186ac8d4f82381ea4ed7a940a2044bf5cee8bc12517a4da2b569593792bb6bf"} Dec 09 09:02:59 crc kubenswrapper[4786]: I1209 09:02:59.547600 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 09 09:02:59 crc kubenswrapper[4786]: I1209 09:02:59.549896 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 09 09:02:59 crc kubenswrapper[4786]: I1209 09:02:59.553574 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Dec 09 09:02:59 crc kubenswrapper[4786]: I1209 09:02:59.553872 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-ppz9q" Dec 09 09:02:59 crc kubenswrapper[4786]: I1209 09:02:59.554209 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Dec 09 09:02:59 crc kubenswrapper[4786]: I1209 09:02:59.554485 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Dec 09 09:02:59 crc kubenswrapper[4786]: I1209 09:02:59.567069 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 09 09:02:59 crc kubenswrapper[4786]: I1209 09:02:59.673381 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/8066cc20-76cd-4a47-a662-fb77cd5cbe3b-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"8066cc20-76cd-4a47-a662-fb77cd5cbe3b\") " pod="openstack/openstack-cell1-galera-0" Dec 09 09:02:59 crc kubenswrapper[4786]: I1209 09:02:59.673747 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8066cc20-76cd-4a47-a662-fb77cd5cbe3b-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"8066cc20-76cd-4a47-a662-fb77cd5cbe3b\") " pod="openstack/openstack-cell1-galera-0" Dec 09 09:02:59 crc kubenswrapper[4786]: I1209 09:02:59.673793 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4n2q\" (UniqueName: 
\"kubernetes.io/projected/8066cc20-76cd-4a47-a662-fb77cd5cbe3b-kube-api-access-q4n2q\") pod \"openstack-cell1-galera-0\" (UID: \"8066cc20-76cd-4a47-a662-fb77cd5cbe3b\") " pod="openstack/openstack-cell1-galera-0" Dec 09 09:02:59 crc kubenswrapper[4786]: I1209 09:02:59.673831 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8066cc20-76cd-4a47-a662-fb77cd5cbe3b-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"8066cc20-76cd-4a47-a662-fb77cd5cbe3b\") " pod="openstack/openstack-cell1-galera-0" Dec 09 09:02:59 crc kubenswrapper[4786]: I1209 09:02:59.674016 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8066cc20-76cd-4a47-a662-fb77cd5cbe3b-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"8066cc20-76cd-4a47-a662-fb77cd5cbe3b\") " pod="openstack/openstack-cell1-galera-0" Dec 09 09:02:59 crc kubenswrapper[4786]: I1209 09:02:59.674112 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8066cc20-76cd-4a47-a662-fb77cd5cbe3b-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"8066cc20-76cd-4a47-a662-fb77cd5cbe3b\") " pod="openstack/openstack-cell1-galera-0" Dec 09 09:02:59 crc kubenswrapper[4786]: I1209 09:02:59.674322 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8066cc20-76cd-4a47-a662-fb77cd5cbe3b-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"8066cc20-76cd-4a47-a662-fb77cd5cbe3b\") " pod="openstack/openstack-cell1-galera-0" Dec 09 09:02:59 crc kubenswrapper[4786]: I1209 09:02:59.674454 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"8066cc20-76cd-4a47-a662-fb77cd5cbe3b\") " pod="openstack/openstack-cell1-galera-0" Dec 09 09:02:59 crc kubenswrapper[4786]: I1209 09:02:59.674617 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8066cc20-76cd-4a47-a662-fb77cd5cbe3b-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"8066cc20-76cd-4a47-a662-fb77cd5cbe3b\") " pod="openstack/openstack-cell1-galera-0" Dec 09 09:02:59 crc kubenswrapper[4786]: I1209 09:02:59.783409 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4n2q\" (UniqueName: \"kubernetes.io/projected/8066cc20-76cd-4a47-a662-fb77cd5cbe3b-kube-api-access-q4n2q\") pod \"openstack-cell1-galera-0\" (UID: \"8066cc20-76cd-4a47-a662-fb77cd5cbe3b\") " pod="openstack/openstack-cell1-galera-0" Dec 09 09:02:59 crc kubenswrapper[4786]: I1209 09:02:59.783518 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8066cc20-76cd-4a47-a662-fb77cd5cbe3b-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"8066cc20-76cd-4a47-a662-fb77cd5cbe3b\") " pod="openstack/openstack-cell1-galera-0" Dec 09 09:02:59 crc kubenswrapper[4786]: I1209 09:02:59.783551 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8066cc20-76cd-4a47-a662-fb77cd5cbe3b-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"8066cc20-76cd-4a47-a662-fb77cd5cbe3b\") " pod="openstack/openstack-cell1-galera-0" Dec 09 09:02:59 crc kubenswrapper[4786]: I1209 09:02:59.783578 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/8066cc20-76cd-4a47-a662-fb77cd5cbe3b-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"8066cc20-76cd-4a47-a662-fb77cd5cbe3b\") " pod="openstack/openstack-cell1-galera-0" Dec 09 09:02:59 crc kubenswrapper[4786]: I1209 09:02:59.783619 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8066cc20-76cd-4a47-a662-fb77cd5cbe3b-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"8066cc20-76cd-4a47-a662-fb77cd5cbe3b\") " pod="openstack/openstack-cell1-galera-0" Dec 09 09:02:59 crc kubenswrapper[4786]: I1209 09:02:59.783660 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"8066cc20-76cd-4a47-a662-fb77cd5cbe3b\") " pod="openstack/openstack-cell1-galera-0" Dec 09 09:02:59 crc kubenswrapper[4786]: I1209 09:02:59.783717 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8066cc20-76cd-4a47-a662-fb77cd5cbe3b-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"8066cc20-76cd-4a47-a662-fb77cd5cbe3b\") " pod="openstack/openstack-cell1-galera-0" Dec 09 09:02:59 crc kubenswrapper[4786]: I1209 09:02:59.783818 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/8066cc20-76cd-4a47-a662-fb77cd5cbe3b-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"8066cc20-76cd-4a47-a662-fb77cd5cbe3b\") " pod="openstack/openstack-cell1-galera-0" Dec 09 09:02:59 crc kubenswrapper[4786]: I1209 09:02:59.783862 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8066cc20-76cd-4a47-a662-fb77cd5cbe3b-config-data-generated\") pod 
\"openstack-cell1-galera-0\" (UID: \"8066cc20-76cd-4a47-a662-fb77cd5cbe3b\") " pod="openstack/openstack-cell1-galera-0" Dec 09 09:02:59 crc kubenswrapper[4786]: I1209 09:02:59.784268 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/8066cc20-76cd-4a47-a662-fb77cd5cbe3b-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"8066cc20-76cd-4a47-a662-fb77cd5cbe3b\") " pod="openstack/openstack-cell1-galera-0" Dec 09 09:02:59 crc kubenswrapper[4786]: I1209 09:02:59.785144 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8066cc20-76cd-4a47-a662-fb77cd5cbe3b-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"8066cc20-76cd-4a47-a662-fb77cd5cbe3b\") " pod="openstack/openstack-cell1-galera-0" Dec 09 09:02:59 crc kubenswrapper[4786]: I1209 09:02:59.785768 4786 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"8066cc20-76cd-4a47-a662-fb77cd5cbe3b\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/openstack-cell1-galera-0" Dec 09 09:02:59 crc kubenswrapper[4786]: I1209 09:02:59.786613 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/8066cc20-76cd-4a47-a662-fb77cd5cbe3b-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"8066cc20-76cd-4a47-a662-fb77cd5cbe3b\") " pod="openstack/openstack-cell1-galera-0" Dec 09 09:02:59 crc kubenswrapper[4786]: I1209 09:02:59.786974 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/8066cc20-76cd-4a47-a662-fb77cd5cbe3b-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"8066cc20-76cd-4a47-a662-fb77cd5cbe3b\") " 
pod="openstack/openstack-cell1-galera-0" Dec 09 09:02:59 crc kubenswrapper[4786]: I1209 09:02:59.840240 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8066cc20-76cd-4a47-a662-fb77cd5cbe3b-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"8066cc20-76cd-4a47-a662-fb77cd5cbe3b\") " pod="openstack/openstack-cell1-galera-0" Dec 09 09:02:59 crc kubenswrapper[4786]: I1209 09:02:59.841959 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/8066cc20-76cd-4a47-a662-fb77cd5cbe3b-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"8066cc20-76cd-4a47-a662-fb77cd5cbe3b\") " pod="openstack/openstack-cell1-galera-0" Dec 09 09:02:59 crc kubenswrapper[4786]: I1209 09:02:59.843161 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/8066cc20-76cd-4a47-a662-fb77cd5cbe3b-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"8066cc20-76cd-4a47-a662-fb77cd5cbe3b\") " pod="openstack/openstack-cell1-galera-0" Dec 09 09:02:59 crc kubenswrapper[4786]: I1209 09:02:59.844375 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4n2q\" (UniqueName: \"kubernetes.io/projected/8066cc20-76cd-4a47-a662-fb77cd5cbe3b-kube-api-access-q4n2q\") pod \"openstack-cell1-galera-0\" (UID: \"8066cc20-76cd-4a47-a662-fb77cd5cbe3b\") " pod="openstack/openstack-cell1-galera-0" Dec 09 09:02:59 crc kubenswrapper[4786]: I1209 09:02:59.913291 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"8066cc20-76cd-4a47-a662-fb77cd5cbe3b\") " pod="openstack/openstack-cell1-galera-0" Dec 09 09:02:59 crc kubenswrapper[4786]: I1209 09:02:59.934365 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 09 09:03:00 crc kubenswrapper[4786]: I1209 09:03:00.144443 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 09 09:03:00 crc kubenswrapper[4786]: I1209 09:03:00.206495 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Dec 09 09:03:00 crc kubenswrapper[4786]: I1209 09:03:00.207983 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 09 09:03:00 crc kubenswrapper[4786]: I1209 09:03:00.210982 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Dec 09 09:03:00 crc kubenswrapper[4786]: I1209 09:03:00.211370 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-85g5f" Dec 09 09:03:00 crc kubenswrapper[4786]: I1209 09:03:00.212465 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Dec 09 09:03:00 crc kubenswrapper[4786]: I1209 09:03:00.221390 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 09 09:03:00 crc kubenswrapper[4786]: I1209 09:03:00.290719 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"6c5ae8f0-bfa8-4fe2-81c3-289021674179","Type":"ContainerStarted","Data":"fcba4ca1374c0b00fb59836c62a1369ac675a85f7f1dda624844ccc22e06082a"} Dec 09 09:03:00 crc kubenswrapper[4786]: I1209 09:03:00.326092 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2z27\" (UniqueName: \"kubernetes.io/projected/77b48dc5-f201-422e-9983-368555119d75-kube-api-access-z2z27\") pod \"memcached-0\" (UID: \"77b48dc5-f201-422e-9983-368555119d75\") " pod="openstack/memcached-0" Dec 09 09:03:00 crc kubenswrapper[4786]: I1209 09:03:00.326180 4786 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/77b48dc5-f201-422e-9983-368555119d75-config-data\") pod \"memcached-0\" (UID: \"77b48dc5-f201-422e-9983-368555119d75\") " pod="openstack/memcached-0" Dec 09 09:03:00 crc kubenswrapper[4786]: I1209 09:03:00.326201 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77b48dc5-f201-422e-9983-368555119d75-combined-ca-bundle\") pod \"memcached-0\" (UID: \"77b48dc5-f201-422e-9983-368555119d75\") " pod="openstack/memcached-0" Dec 09 09:03:00 crc kubenswrapper[4786]: I1209 09:03:00.326296 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/77b48dc5-f201-422e-9983-368555119d75-memcached-tls-certs\") pod \"memcached-0\" (UID: \"77b48dc5-f201-422e-9983-368555119d75\") " pod="openstack/memcached-0" Dec 09 09:03:00 crc kubenswrapper[4786]: I1209 09:03:00.326348 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/77b48dc5-f201-422e-9983-368555119d75-kolla-config\") pod \"memcached-0\" (UID: \"77b48dc5-f201-422e-9983-368555119d75\") " pod="openstack/memcached-0" Dec 09 09:03:00 crc kubenswrapper[4786]: I1209 09:03:00.432231 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2z27\" (UniqueName: \"kubernetes.io/projected/77b48dc5-f201-422e-9983-368555119d75-kube-api-access-z2z27\") pod \"memcached-0\" (UID: \"77b48dc5-f201-422e-9983-368555119d75\") " pod="openstack/memcached-0" Dec 09 09:03:00 crc kubenswrapper[4786]: I1209 09:03:00.432374 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/77b48dc5-f201-422e-9983-368555119d75-config-data\") pod \"memcached-0\" (UID: \"77b48dc5-f201-422e-9983-368555119d75\") " pod="openstack/memcached-0" Dec 09 09:03:00 crc kubenswrapper[4786]: I1209 09:03:00.432402 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77b48dc5-f201-422e-9983-368555119d75-combined-ca-bundle\") pod \"memcached-0\" (UID: \"77b48dc5-f201-422e-9983-368555119d75\") " pod="openstack/memcached-0" Dec 09 09:03:00 crc kubenswrapper[4786]: I1209 09:03:00.433334 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/77b48dc5-f201-422e-9983-368555119d75-config-data\") pod \"memcached-0\" (UID: \"77b48dc5-f201-422e-9983-368555119d75\") " pod="openstack/memcached-0" Dec 09 09:03:00 crc kubenswrapper[4786]: I1209 09:03:00.433695 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/77b48dc5-f201-422e-9983-368555119d75-memcached-tls-certs\") pod \"memcached-0\" (UID: \"77b48dc5-f201-422e-9983-368555119d75\") " pod="openstack/memcached-0" Dec 09 09:03:00 crc kubenswrapper[4786]: I1209 09:03:00.433840 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/77b48dc5-f201-422e-9983-368555119d75-kolla-config\") pod \"memcached-0\" (UID: \"77b48dc5-f201-422e-9983-368555119d75\") " pod="openstack/memcached-0" Dec 09 09:03:00 crc kubenswrapper[4786]: I1209 09:03:00.435688 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/77b48dc5-f201-422e-9983-368555119d75-kolla-config\") pod \"memcached-0\" (UID: \"77b48dc5-f201-422e-9983-368555119d75\") " pod="openstack/memcached-0" Dec 09 09:03:00 crc kubenswrapper[4786]: I1209 09:03:00.441199 
4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/77b48dc5-f201-422e-9983-368555119d75-memcached-tls-certs\") pod \"memcached-0\" (UID: \"77b48dc5-f201-422e-9983-368555119d75\") " pod="openstack/memcached-0" Dec 09 09:03:00 crc kubenswrapper[4786]: I1209 09:03:00.443732 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77b48dc5-f201-422e-9983-368555119d75-combined-ca-bundle\") pod \"memcached-0\" (UID: \"77b48dc5-f201-422e-9983-368555119d75\") " pod="openstack/memcached-0" Dec 09 09:03:00 crc kubenswrapper[4786]: I1209 09:03:00.475286 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2z27\" (UniqueName: \"kubernetes.io/projected/77b48dc5-f201-422e-9983-368555119d75-kube-api-access-z2z27\") pod \"memcached-0\" (UID: \"77b48dc5-f201-422e-9983-368555119d75\") " pod="openstack/memcached-0" Dec 09 09:03:00 crc kubenswrapper[4786]: I1209 09:03:00.551432 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Dec 09 09:03:01 crc kubenswrapper[4786]: I1209 09:03:01.204099 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 09 09:03:01 crc kubenswrapper[4786]: W1209 09:03:01.213618 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8066cc20_76cd_4a47_a662_fb77cd5cbe3b.slice/crio-5c06a3dd969a448459ef9927e8299c7f928efdbc4cb94a402a8fa085ec33a9e5 WatchSource:0}: Error finding container 5c06a3dd969a448459ef9927e8299c7f928efdbc4cb94a402a8fa085ec33a9e5: Status 404 returned error can't find the container with id 5c06a3dd969a448459ef9927e8299c7f928efdbc4cb94a402a8fa085ec33a9e5 Dec 09 09:03:01 crc kubenswrapper[4786]: I1209 09:03:01.369166 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"673b3525-c496-4268-b9f9-c37f5175efdc","Type":"ContainerStarted","Data":"2ff2becf25f680edfa679a3ef4bb32750ae913e1ef341892fbccba9a2737263a"} Dec 09 09:03:01 crc kubenswrapper[4786]: I1209 09:03:01.370788 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"8066cc20-76cd-4a47-a662-fb77cd5cbe3b","Type":"ContainerStarted","Data":"5c06a3dd969a448459ef9927e8299c7f928efdbc4cb94a402a8fa085ec33a9e5"} Dec 09 09:03:01 crc kubenswrapper[4786]: I1209 09:03:01.529569 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 09 09:03:02 crc kubenswrapper[4786]: I1209 09:03:02.482511 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"77b48dc5-f201-422e-9983-368555119d75","Type":"ContainerStarted","Data":"9696b7d7af71ca2ee8c3c83aec182b22b3e5a6dc7c2bc6ef46ebbf5eedacfd59"} Dec 09 09:03:02 crc kubenswrapper[4786]: I1209 09:03:02.667293 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 09 09:03:02 crc 
kubenswrapper[4786]: I1209 09:03:02.669249 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 09 09:03:02 crc kubenswrapper[4786]: I1209 09:03:02.672223 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-r5j8l" Dec 09 09:03:02 crc kubenswrapper[4786]: I1209 09:03:02.679657 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 09 09:03:02 crc kubenswrapper[4786]: I1209 09:03:02.751705 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlxb5\" (UniqueName: \"kubernetes.io/projected/22607f78-204a-4fb8-82d6-53d3f878a984-kube-api-access-xlxb5\") pod \"kube-state-metrics-0\" (UID: \"22607f78-204a-4fb8-82d6-53d3f878a984\") " pod="openstack/kube-state-metrics-0" Dec 09 09:03:02 crc kubenswrapper[4786]: I1209 09:03:02.862380 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlxb5\" (UniqueName: \"kubernetes.io/projected/22607f78-204a-4fb8-82d6-53d3f878a984-kube-api-access-xlxb5\") pod \"kube-state-metrics-0\" (UID: \"22607f78-204a-4fb8-82d6-53d3f878a984\") " pod="openstack/kube-state-metrics-0" Dec 09 09:03:03 crc kubenswrapper[4786]: I1209 09:03:03.024705 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlxb5\" (UniqueName: \"kubernetes.io/projected/22607f78-204a-4fb8-82d6-53d3f878a984-kube-api-access-xlxb5\") pod \"kube-state-metrics-0\" (UID: \"22607f78-204a-4fb8-82d6-53d3f878a984\") " pod="openstack/kube-state-metrics-0" Dec 09 09:03:03 crc kubenswrapper[4786]: I1209 09:03:03.035053 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 09 09:03:04 crc kubenswrapper[4786]: I1209 09:03:04.057836 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 09 09:03:04 crc kubenswrapper[4786]: I1209 09:03:04.060928 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 09 09:03:04 crc kubenswrapper[4786]: I1209 09:03:04.067938 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Dec 09 09:03:04 crc kubenswrapper[4786]: I1209 09:03:04.068272 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Dec 09 09:03:04 crc kubenswrapper[4786]: I1209 09:03:04.068445 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Dec 09 09:03:04 crc kubenswrapper[4786]: I1209 09:03:04.068739 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Dec 09 09:03:04 crc kubenswrapper[4786]: I1209 09:03:04.068941 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-7szbp" Dec 09 09:03:04 crc kubenswrapper[4786]: I1209 09:03:04.069132 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Dec 09 09:03:04 crc kubenswrapper[4786]: I1209 09:03:04.148377 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 09 09:03:04 crc kubenswrapper[4786]: I1209 09:03:04.246805 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6c7ace95-341e-4733-af87-8c256ae0d9e6-config-out\") pod \"prometheus-metric-storage-0\" (UID: 
\"6c7ace95-341e-4733-af87-8c256ae0d9e6\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:03:04 crc kubenswrapper[4786]: I1209 09:03:04.246893 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6c7ace95-341e-4733-af87-8c256ae0d9e6-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"6c7ace95-341e-4733-af87-8c256ae0d9e6\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:03:04 crc kubenswrapper[4786]: I1209 09:03:04.246953 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-28b31f81-4b46-4930-847d-98cf3cf77e89\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-28b31f81-4b46-4930-847d-98cf3cf77e89\") pod \"prometheus-metric-storage-0\" (UID: \"6c7ace95-341e-4733-af87-8c256ae0d9e6\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:03:04 crc kubenswrapper[4786]: I1209 09:03:04.247006 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8vks\" (UniqueName: \"kubernetes.io/projected/6c7ace95-341e-4733-af87-8c256ae0d9e6-kube-api-access-p8vks\") pod \"prometheus-metric-storage-0\" (UID: \"6c7ace95-341e-4733-af87-8c256ae0d9e6\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:03:04 crc kubenswrapper[4786]: I1209 09:03:04.247063 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6c7ace95-341e-4733-af87-8c256ae0d9e6-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"6c7ace95-341e-4733-af87-8c256ae0d9e6\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:03:04 crc kubenswrapper[4786]: I1209 09:03:04.247119 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6c7ace95-341e-4733-af87-8c256ae0d9e6-config\") pod \"prometheus-metric-storage-0\" (UID: \"6c7ace95-341e-4733-af87-8c256ae0d9e6\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:03:04 crc kubenswrapper[4786]: I1209 09:03:04.247145 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6c7ace95-341e-4733-af87-8c256ae0d9e6-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"6c7ace95-341e-4733-af87-8c256ae0d9e6\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:03:04 crc kubenswrapper[4786]: I1209 09:03:04.247194 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6c7ace95-341e-4733-af87-8c256ae0d9e6-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"6c7ace95-341e-4733-af87-8c256ae0d9e6\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:03:04 crc kubenswrapper[4786]: I1209 09:03:04.348838 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6c7ace95-341e-4733-af87-8c256ae0d9e6-config\") pod \"prometheus-metric-storage-0\" (UID: \"6c7ace95-341e-4733-af87-8c256ae0d9e6\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:03:04 crc kubenswrapper[4786]: I1209 09:03:04.348890 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6c7ace95-341e-4733-af87-8c256ae0d9e6-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"6c7ace95-341e-4733-af87-8c256ae0d9e6\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:03:04 crc kubenswrapper[4786]: I1209 09:03:04.348935 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/6c7ace95-341e-4733-af87-8c256ae0d9e6-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"6c7ace95-341e-4733-af87-8c256ae0d9e6\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:03:04 crc kubenswrapper[4786]: I1209 09:03:04.349363 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6c7ace95-341e-4733-af87-8c256ae0d9e6-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"6c7ace95-341e-4733-af87-8c256ae0d9e6\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:03:04 crc kubenswrapper[4786]: I1209 09:03:04.351659 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6c7ace95-341e-4733-af87-8c256ae0d9e6-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"6c7ace95-341e-4733-af87-8c256ae0d9e6\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:03:04 crc kubenswrapper[4786]: I1209 09:03:04.351780 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-28b31f81-4b46-4930-847d-98cf3cf77e89\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-28b31f81-4b46-4930-847d-98cf3cf77e89\") pod \"prometheus-metric-storage-0\" (UID: \"6c7ace95-341e-4733-af87-8c256ae0d9e6\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:03:04 crc kubenswrapper[4786]: I1209 09:03:04.351879 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8vks\" (UniqueName: \"kubernetes.io/projected/6c7ace95-341e-4733-af87-8c256ae0d9e6-kube-api-access-p8vks\") pod \"prometheus-metric-storage-0\" (UID: \"6c7ace95-341e-4733-af87-8c256ae0d9e6\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:03:04 crc kubenswrapper[4786]: I1209 09:03:04.351996 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6c7ace95-341e-4733-af87-8c256ae0d9e6-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"6c7ace95-341e-4733-af87-8c256ae0d9e6\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:03:04 crc kubenswrapper[4786]: I1209 09:03:04.353553 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6c7ace95-341e-4733-af87-8c256ae0d9e6-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"6c7ace95-341e-4733-af87-8c256ae0d9e6\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:03:04 crc kubenswrapper[4786]: I1209 09:03:04.356636 4786 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 09 09:03:04 crc kubenswrapper[4786]: I1209 09:03:04.356684 4786 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-28b31f81-4b46-4930-847d-98cf3cf77e89\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-28b31f81-4b46-4930-847d-98cf3cf77e89\") pod \"prometheus-metric-storage-0\" (UID: \"6c7ace95-341e-4733-af87-8c256ae0d9e6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8cfbd567b829700af2d6ada5c2de407e7db840737c905f740b19ad0b115df38c/globalmount\"" pod="openstack/prometheus-metric-storage-0" Dec 09 09:03:04 crc kubenswrapper[4786]: I1209 09:03:04.358264 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 09 09:03:04 crc kubenswrapper[4786]: I1209 09:03:04.359144 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6c7ace95-341e-4733-af87-8c256ae0d9e6-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"6c7ace95-341e-4733-af87-8c256ae0d9e6\") " 
pod="openstack/prometheus-metric-storage-0" Dec 09 09:03:04 crc kubenswrapper[4786]: I1209 09:03:04.363739 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6c7ace95-341e-4733-af87-8c256ae0d9e6-config\") pod \"prometheus-metric-storage-0\" (UID: \"6c7ace95-341e-4733-af87-8c256ae0d9e6\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:03:04 crc kubenswrapper[4786]: I1209 09:03:04.363828 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6c7ace95-341e-4733-af87-8c256ae0d9e6-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"6c7ace95-341e-4733-af87-8c256ae0d9e6\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:03:04 crc kubenswrapper[4786]: I1209 09:03:04.374596 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6c7ace95-341e-4733-af87-8c256ae0d9e6-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"6c7ace95-341e-4733-af87-8c256ae0d9e6\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:03:04 crc kubenswrapper[4786]: I1209 09:03:04.377309 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8vks\" (UniqueName: \"kubernetes.io/projected/6c7ace95-341e-4733-af87-8c256ae0d9e6-kube-api-access-p8vks\") pod \"prometheus-metric-storage-0\" (UID: \"6c7ace95-341e-4733-af87-8c256ae0d9e6\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:03:04 crc kubenswrapper[4786]: I1209 09:03:04.382211 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6c7ace95-341e-4733-af87-8c256ae0d9e6-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"6c7ace95-341e-4733-af87-8c256ae0d9e6\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:03:04 crc kubenswrapper[4786]: I1209 
09:03:04.402410 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-28b31f81-4b46-4930-847d-98cf3cf77e89\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-28b31f81-4b46-4930-847d-98cf3cf77e89\") pod \"prometheus-metric-storage-0\" (UID: \"6c7ace95-341e-4733-af87-8c256ae0d9e6\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:03:04 crc kubenswrapper[4786]: I1209 09:03:04.456587 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 09 09:03:04 crc kubenswrapper[4786]: I1209 09:03:04.526363 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"22607f78-204a-4fb8-82d6-53d3f878a984","Type":"ContainerStarted","Data":"48ddc74ac903466f2fe9d9cf4806046d8832c0410ea0c389def7ea93699b2ba2"} Dec 09 09:03:06 crc kubenswrapper[4786]: I1209 09:03:06.792019 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 09 09:03:07 crc kubenswrapper[4786]: I1209 09:03:07.288695 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-vv7k4"] Dec 09 09:03:07 crc kubenswrapper[4786]: I1209 09:03:07.290152 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-vv7k4" Dec 09 09:03:07 crc kubenswrapper[4786]: I1209 09:03:07.295710 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Dec 09 09:03:07 crc kubenswrapper[4786]: I1209 09:03:07.296461 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-s6bgw" Dec 09 09:03:07 crc kubenswrapper[4786]: I1209 09:03:07.296598 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Dec 09 09:03:07 crc kubenswrapper[4786]: I1209 09:03:07.309223 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-lqv95"] Dec 09 09:03:07 crc kubenswrapper[4786]: I1209 09:03:07.316801 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-lqv95" Dec 09 09:03:07 crc kubenswrapper[4786]: I1209 09:03:07.328355 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-vv7k4"] Dec 09 09:03:07 crc kubenswrapper[4786]: I1209 09:03:07.336168 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-lqv95"] Dec 09 09:03:07 crc kubenswrapper[4786]: I1209 09:03:07.374200 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7cad500b-e392-4774-a524-02587da67379-var-run-ovn\") pod \"ovn-controller-vv7k4\" (UID: \"7cad500b-e392-4774-a524-02587da67379\") " pod="openstack/ovn-controller-vv7k4" Dec 09 09:03:07 crc kubenswrapper[4786]: I1209 09:03:07.374330 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cad500b-e392-4774-a524-02587da67379-ovn-controller-tls-certs\") pod \"ovn-controller-vv7k4\" (UID: \"7cad500b-e392-4774-a524-02587da67379\") " 
pod="openstack/ovn-controller-vv7k4" Dec 09 09:03:07 crc kubenswrapper[4786]: I1209 09:03:07.374389 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnfsk\" (UniqueName: \"kubernetes.io/projected/7cad500b-e392-4774-a524-02587da67379-kube-api-access-vnfsk\") pod \"ovn-controller-vv7k4\" (UID: \"7cad500b-e392-4774-a524-02587da67379\") " pod="openstack/ovn-controller-vv7k4" Dec 09 09:03:07 crc kubenswrapper[4786]: I1209 09:03:07.374416 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cad500b-e392-4774-a524-02587da67379-combined-ca-bundle\") pod \"ovn-controller-vv7k4\" (UID: \"7cad500b-e392-4774-a524-02587da67379\") " pod="openstack/ovn-controller-vv7k4" Dec 09 09:03:07 crc kubenswrapper[4786]: I1209 09:03:07.375816 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7cad500b-e392-4774-a524-02587da67379-scripts\") pod \"ovn-controller-vv7k4\" (UID: \"7cad500b-e392-4774-a524-02587da67379\") " pod="openstack/ovn-controller-vv7k4" Dec 09 09:03:07 crc kubenswrapper[4786]: I1209 09:03:07.376376 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7cad500b-e392-4774-a524-02587da67379-var-log-ovn\") pod \"ovn-controller-vv7k4\" (UID: \"7cad500b-e392-4774-a524-02587da67379\") " pod="openstack/ovn-controller-vv7k4" Dec 09 09:03:07 crc kubenswrapper[4786]: I1209 09:03:07.376642 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7cad500b-e392-4774-a524-02587da67379-var-run\") pod \"ovn-controller-vv7k4\" (UID: \"7cad500b-e392-4774-a524-02587da67379\") " pod="openstack/ovn-controller-vv7k4" Dec 09 09:03:07 
crc kubenswrapper[4786]: I1209 09:03:07.478771 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vnfsk\" (UniqueName: \"kubernetes.io/projected/7cad500b-e392-4774-a524-02587da67379-kube-api-access-vnfsk\") pod \"ovn-controller-vv7k4\" (UID: \"7cad500b-e392-4774-a524-02587da67379\") " pod="openstack/ovn-controller-vv7k4" Dec 09 09:03:07 crc kubenswrapper[4786]: I1209 09:03:07.478836 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cad500b-e392-4774-a524-02587da67379-combined-ca-bundle\") pod \"ovn-controller-vv7k4\" (UID: \"7cad500b-e392-4774-a524-02587da67379\") " pod="openstack/ovn-controller-vv7k4" Dec 09 09:03:07 crc kubenswrapper[4786]: I1209 09:03:07.478891 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7cad500b-e392-4774-a524-02587da67379-scripts\") pod \"ovn-controller-vv7k4\" (UID: \"7cad500b-e392-4774-a524-02587da67379\") " pod="openstack/ovn-controller-vv7k4" Dec 09 09:03:07 crc kubenswrapper[4786]: I1209 09:03:07.478951 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7cad500b-e392-4774-a524-02587da67379-var-log-ovn\") pod \"ovn-controller-vv7k4\" (UID: \"7cad500b-e392-4774-a524-02587da67379\") " pod="openstack/ovn-controller-vv7k4" Dec 09 09:03:07 crc kubenswrapper[4786]: I1209 09:03:07.478984 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/df80e8f4-2c7e-44a0-b2f4-3c0b652c6fb0-var-run\") pod \"ovn-controller-ovs-lqv95\" (UID: \"df80e8f4-2c7e-44a0-b2f4-3c0b652c6fb0\") " pod="openstack/ovn-controller-ovs-lqv95" Dec 09 09:03:07 crc kubenswrapper[4786]: I1209 09:03:07.479024 4786 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwbjr\" (UniqueName: \"kubernetes.io/projected/df80e8f4-2c7e-44a0-b2f4-3c0b652c6fb0-kube-api-access-gwbjr\") pod \"ovn-controller-ovs-lqv95\" (UID: \"df80e8f4-2c7e-44a0-b2f4-3c0b652c6fb0\") " pod="openstack/ovn-controller-ovs-lqv95" Dec 09 09:03:07 crc kubenswrapper[4786]: I1209 09:03:07.479065 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7cad500b-e392-4774-a524-02587da67379-var-run\") pod \"ovn-controller-vv7k4\" (UID: \"7cad500b-e392-4774-a524-02587da67379\") " pod="openstack/ovn-controller-vv7k4" Dec 09 09:03:07 crc kubenswrapper[4786]: I1209 09:03:07.479115 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7cad500b-e392-4774-a524-02587da67379-var-run-ovn\") pod \"ovn-controller-vv7k4\" (UID: \"7cad500b-e392-4774-a524-02587da67379\") " pod="openstack/ovn-controller-vv7k4" Dec 09 09:03:07 crc kubenswrapper[4786]: I1209 09:03:07.479143 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/df80e8f4-2c7e-44a0-b2f4-3c0b652c6fb0-etc-ovs\") pod \"ovn-controller-ovs-lqv95\" (UID: \"df80e8f4-2c7e-44a0-b2f4-3c0b652c6fb0\") " pod="openstack/ovn-controller-ovs-lqv95" Dec 09 09:03:07 crc kubenswrapper[4786]: I1209 09:03:07.479177 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/df80e8f4-2c7e-44a0-b2f4-3c0b652c6fb0-scripts\") pod \"ovn-controller-ovs-lqv95\" (UID: \"df80e8f4-2c7e-44a0-b2f4-3c0b652c6fb0\") " pod="openstack/ovn-controller-ovs-lqv95" Dec 09 09:03:07 crc kubenswrapper[4786]: I1209 09:03:07.479202 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" 
(UniqueName: \"kubernetes.io/host-path/df80e8f4-2c7e-44a0-b2f4-3c0b652c6fb0-var-log\") pod \"ovn-controller-ovs-lqv95\" (UID: \"df80e8f4-2c7e-44a0-b2f4-3c0b652c6fb0\") " pod="openstack/ovn-controller-ovs-lqv95" Dec 09 09:03:07 crc kubenswrapper[4786]: I1209 09:03:07.479231 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/df80e8f4-2c7e-44a0-b2f4-3c0b652c6fb0-var-lib\") pod \"ovn-controller-ovs-lqv95\" (UID: \"df80e8f4-2c7e-44a0-b2f4-3c0b652c6fb0\") " pod="openstack/ovn-controller-ovs-lqv95" Dec 09 09:03:07 crc kubenswrapper[4786]: I1209 09:03:07.479271 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cad500b-e392-4774-a524-02587da67379-ovn-controller-tls-certs\") pod \"ovn-controller-vv7k4\" (UID: \"7cad500b-e392-4774-a524-02587da67379\") " pod="openstack/ovn-controller-vv7k4" Dec 09 09:03:07 crc kubenswrapper[4786]: I1209 09:03:07.480859 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/7cad500b-e392-4774-a524-02587da67379-var-log-ovn\") pod \"ovn-controller-vv7k4\" (UID: \"7cad500b-e392-4774-a524-02587da67379\") " pod="openstack/ovn-controller-vv7k4" Dec 09 09:03:07 crc kubenswrapper[4786]: I1209 09:03:07.481006 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/7cad500b-e392-4774-a524-02587da67379-var-run\") pod \"ovn-controller-vv7k4\" (UID: \"7cad500b-e392-4774-a524-02587da67379\") " pod="openstack/ovn-controller-vv7k4" Dec 09 09:03:07 crc kubenswrapper[4786]: I1209 09:03:07.481106 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/7cad500b-e392-4774-a524-02587da67379-var-run-ovn\") pod \"ovn-controller-vv7k4\" (UID: 
\"7cad500b-e392-4774-a524-02587da67379\") " pod="openstack/ovn-controller-vv7k4" Dec 09 09:03:07 crc kubenswrapper[4786]: I1209 09:03:07.484541 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7cad500b-e392-4774-a524-02587da67379-scripts\") pod \"ovn-controller-vv7k4\" (UID: \"7cad500b-e392-4774-a524-02587da67379\") " pod="openstack/ovn-controller-vv7k4" Dec 09 09:03:07 crc kubenswrapper[4786]: I1209 09:03:07.489309 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/7cad500b-e392-4774-a524-02587da67379-ovn-controller-tls-certs\") pod \"ovn-controller-vv7k4\" (UID: \"7cad500b-e392-4774-a524-02587da67379\") " pod="openstack/ovn-controller-vv7k4" Dec 09 09:03:07 crc kubenswrapper[4786]: I1209 09:03:07.501212 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnfsk\" (UniqueName: \"kubernetes.io/projected/7cad500b-e392-4774-a524-02587da67379-kube-api-access-vnfsk\") pod \"ovn-controller-vv7k4\" (UID: \"7cad500b-e392-4774-a524-02587da67379\") " pod="openstack/ovn-controller-vv7k4" Dec 09 09:03:07 crc kubenswrapper[4786]: I1209 09:03:07.503633 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cad500b-e392-4774-a524-02587da67379-combined-ca-bundle\") pod \"ovn-controller-vv7k4\" (UID: \"7cad500b-e392-4774-a524-02587da67379\") " pod="openstack/ovn-controller-vv7k4" Dec 09 09:03:07 crc kubenswrapper[4786]: I1209 09:03:07.582559 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/df80e8f4-2c7e-44a0-b2f4-3c0b652c6fb0-var-run\") pod \"ovn-controller-ovs-lqv95\" (UID: \"df80e8f4-2c7e-44a0-b2f4-3c0b652c6fb0\") " pod="openstack/ovn-controller-ovs-lqv95" Dec 09 09:03:07 crc kubenswrapper[4786]: I1209 09:03:07.582695 4786 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwbjr\" (UniqueName: \"kubernetes.io/projected/df80e8f4-2c7e-44a0-b2f4-3c0b652c6fb0-kube-api-access-gwbjr\") pod \"ovn-controller-ovs-lqv95\" (UID: \"df80e8f4-2c7e-44a0-b2f4-3c0b652c6fb0\") " pod="openstack/ovn-controller-ovs-lqv95" Dec 09 09:03:07 crc kubenswrapper[4786]: I1209 09:03:07.582816 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/df80e8f4-2c7e-44a0-b2f4-3c0b652c6fb0-etc-ovs\") pod \"ovn-controller-ovs-lqv95\" (UID: \"df80e8f4-2c7e-44a0-b2f4-3c0b652c6fb0\") " pod="openstack/ovn-controller-ovs-lqv95" Dec 09 09:03:07 crc kubenswrapper[4786]: I1209 09:03:07.582868 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/df80e8f4-2c7e-44a0-b2f4-3c0b652c6fb0-scripts\") pod \"ovn-controller-ovs-lqv95\" (UID: \"df80e8f4-2c7e-44a0-b2f4-3c0b652c6fb0\") " pod="openstack/ovn-controller-ovs-lqv95" Dec 09 09:03:07 crc kubenswrapper[4786]: I1209 09:03:07.582902 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/df80e8f4-2c7e-44a0-b2f4-3c0b652c6fb0-var-log\") pod \"ovn-controller-ovs-lqv95\" (UID: \"df80e8f4-2c7e-44a0-b2f4-3c0b652c6fb0\") " pod="openstack/ovn-controller-ovs-lqv95" Dec 09 09:03:07 crc kubenswrapper[4786]: I1209 09:03:07.582938 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/df80e8f4-2c7e-44a0-b2f4-3c0b652c6fb0-var-lib\") pod \"ovn-controller-ovs-lqv95\" (UID: \"df80e8f4-2c7e-44a0-b2f4-3c0b652c6fb0\") " pod="openstack/ovn-controller-ovs-lqv95" Dec 09 09:03:07 crc kubenswrapper[4786]: I1209 09:03:07.583414 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: 
\"kubernetes.io/host-path/df80e8f4-2c7e-44a0-b2f4-3c0b652c6fb0-var-lib\") pod \"ovn-controller-ovs-lqv95\" (UID: \"df80e8f4-2c7e-44a0-b2f4-3c0b652c6fb0\") " pod="openstack/ovn-controller-ovs-lqv95" Dec 09 09:03:07 crc kubenswrapper[4786]: I1209 09:03:07.583782 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/df80e8f4-2c7e-44a0-b2f4-3c0b652c6fb0-var-run\") pod \"ovn-controller-ovs-lqv95\" (UID: \"df80e8f4-2c7e-44a0-b2f4-3c0b652c6fb0\") " pod="openstack/ovn-controller-ovs-lqv95" Dec 09 09:03:07 crc kubenswrapper[4786]: I1209 09:03:07.584555 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/df80e8f4-2c7e-44a0-b2f4-3c0b652c6fb0-etc-ovs\") pod \"ovn-controller-ovs-lqv95\" (UID: \"df80e8f4-2c7e-44a0-b2f4-3c0b652c6fb0\") " pod="openstack/ovn-controller-ovs-lqv95" Dec 09 09:03:07 crc kubenswrapper[4786]: I1209 09:03:07.611308 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/df80e8f4-2c7e-44a0-b2f4-3c0b652c6fb0-scripts\") pod \"ovn-controller-ovs-lqv95\" (UID: \"df80e8f4-2c7e-44a0-b2f4-3c0b652c6fb0\") " pod="openstack/ovn-controller-ovs-lqv95" Dec 09 09:03:07 crc kubenswrapper[4786]: I1209 09:03:07.613569 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/df80e8f4-2c7e-44a0-b2f4-3c0b652c6fb0-var-log\") pod \"ovn-controller-ovs-lqv95\" (UID: \"df80e8f4-2c7e-44a0-b2f4-3c0b652c6fb0\") " pod="openstack/ovn-controller-ovs-lqv95" Dec 09 09:03:07 crc kubenswrapper[4786]: I1209 09:03:07.637717 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-vv7k4" Dec 09 09:03:07 crc kubenswrapper[4786]: I1209 09:03:07.681716 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwbjr\" (UniqueName: \"kubernetes.io/projected/df80e8f4-2c7e-44a0-b2f4-3c0b652c6fb0-kube-api-access-gwbjr\") pod \"ovn-controller-ovs-lqv95\" (UID: \"df80e8f4-2c7e-44a0-b2f4-3c0b652c6fb0\") " pod="openstack/ovn-controller-ovs-lqv95" Dec 09 09:03:07 crc kubenswrapper[4786]: I1209 09:03:07.923103 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6c7ace95-341e-4733-af87-8c256ae0d9e6","Type":"ContainerStarted","Data":"92391857df5eb37fb4b3103576c043075717207ae822fd33e997589e60e8e055"} Dec 09 09:03:07 crc kubenswrapper[4786]: I1209 09:03:07.943025 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-lqv95" Dec 09 09:03:08 crc kubenswrapper[4786]: I1209 09:03:08.130597 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 09 09:03:08 crc kubenswrapper[4786]: I1209 09:03:08.140498 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 09 09:03:08 crc kubenswrapper[4786]: I1209 09:03:08.150909 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Dec 09 09:03:08 crc kubenswrapper[4786]: I1209 09:03:08.151222 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Dec 09 09:03:08 crc kubenswrapper[4786]: I1209 09:03:08.151603 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-tq6rv" Dec 09 09:03:08 crc kubenswrapper[4786]: I1209 09:03:08.151757 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Dec 09 09:03:08 crc kubenswrapper[4786]: I1209 09:03:08.155765 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Dec 09 09:03:08 crc kubenswrapper[4786]: I1209 09:03:08.193163 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 09 09:03:08 crc kubenswrapper[4786]: I1209 09:03:08.217461 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"dd1f4f6b-6ed9-46bc-bf88-bd17fcc6069f\") " pod="openstack/ovsdbserver-sb-0" Dec 09 09:03:08 crc kubenswrapper[4786]: I1209 09:03:08.217518 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dd1f4f6b-6ed9-46bc-bf88-bd17fcc6069f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"dd1f4f6b-6ed9-46bc-bf88-bd17fcc6069f\") " pod="openstack/ovsdbserver-sb-0" Dec 09 09:03:08 crc kubenswrapper[4786]: I1209 09:03:08.217568 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/dd1f4f6b-6ed9-46bc-bf88-bd17fcc6069f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"dd1f4f6b-6ed9-46bc-bf88-bd17fcc6069f\") " pod="openstack/ovsdbserver-sb-0" Dec 09 09:03:08 crc kubenswrapper[4786]: I1209 09:03:08.217591 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd1f4f6b-6ed9-46bc-bf88-bd17fcc6069f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"dd1f4f6b-6ed9-46bc-bf88-bd17fcc6069f\") " pod="openstack/ovsdbserver-sb-0" Dec 09 09:03:08 crc kubenswrapper[4786]: I1209 09:03:08.217625 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd1f4f6b-6ed9-46bc-bf88-bd17fcc6069f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"dd1f4f6b-6ed9-46bc-bf88-bd17fcc6069f\") " pod="openstack/ovsdbserver-sb-0" Dec 09 09:03:08 crc kubenswrapper[4786]: I1209 09:03:08.217664 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd1f4f6b-6ed9-46bc-bf88-bd17fcc6069f-config\") pod \"ovsdbserver-sb-0\" (UID: \"dd1f4f6b-6ed9-46bc-bf88-bd17fcc6069f\") " pod="openstack/ovsdbserver-sb-0" Dec 09 09:03:08 crc kubenswrapper[4786]: I1209 09:03:08.217716 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c56q\" (UniqueName: \"kubernetes.io/projected/dd1f4f6b-6ed9-46bc-bf88-bd17fcc6069f-kube-api-access-2c56q\") pod \"ovsdbserver-sb-0\" (UID: \"dd1f4f6b-6ed9-46bc-bf88-bd17fcc6069f\") " pod="openstack/ovsdbserver-sb-0" Dec 09 09:03:08 crc kubenswrapper[4786]: I1209 09:03:08.217795 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/dd1f4f6b-6ed9-46bc-bf88-bd17fcc6069f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"dd1f4f6b-6ed9-46bc-bf88-bd17fcc6069f\") " pod="openstack/ovsdbserver-sb-0" Dec 09 09:03:08 crc kubenswrapper[4786]: I1209 09:03:08.325154 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/dd1f4f6b-6ed9-46bc-bf88-bd17fcc6069f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"dd1f4f6b-6ed9-46bc-bf88-bd17fcc6069f\") " pod="openstack/ovsdbserver-sb-0" Dec 09 09:03:08 crc kubenswrapper[4786]: I1209 09:03:08.325556 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"dd1f4f6b-6ed9-46bc-bf88-bd17fcc6069f\") " pod="openstack/ovsdbserver-sb-0" Dec 09 09:03:08 crc kubenswrapper[4786]: I1209 09:03:08.325601 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dd1f4f6b-6ed9-46bc-bf88-bd17fcc6069f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"dd1f4f6b-6ed9-46bc-bf88-bd17fcc6069f\") " pod="openstack/ovsdbserver-sb-0" Dec 09 09:03:08 crc kubenswrapper[4786]: I1209 09:03:08.325661 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd1f4f6b-6ed9-46bc-bf88-bd17fcc6069f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"dd1f4f6b-6ed9-46bc-bf88-bd17fcc6069f\") " pod="openstack/ovsdbserver-sb-0" Dec 09 09:03:08 crc kubenswrapper[4786]: I1209 09:03:08.325690 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd1f4f6b-6ed9-46bc-bf88-bd17fcc6069f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"dd1f4f6b-6ed9-46bc-bf88-bd17fcc6069f\") " pod="openstack/ovsdbserver-sb-0" Dec 09 
09:03:08 crc kubenswrapper[4786]: I1209 09:03:08.325723 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd1f4f6b-6ed9-46bc-bf88-bd17fcc6069f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"dd1f4f6b-6ed9-46bc-bf88-bd17fcc6069f\") " pod="openstack/ovsdbserver-sb-0" Dec 09 09:03:08 crc kubenswrapper[4786]: I1209 09:03:08.325778 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd1f4f6b-6ed9-46bc-bf88-bd17fcc6069f-config\") pod \"ovsdbserver-sb-0\" (UID: \"dd1f4f6b-6ed9-46bc-bf88-bd17fcc6069f\") " pod="openstack/ovsdbserver-sb-0" Dec 09 09:03:08 crc kubenswrapper[4786]: I1209 09:03:08.325841 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2c56q\" (UniqueName: \"kubernetes.io/projected/dd1f4f6b-6ed9-46bc-bf88-bd17fcc6069f-kube-api-access-2c56q\") pod \"ovsdbserver-sb-0\" (UID: \"dd1f4f6b-6ed9-46bc-bf88-bd17fcc6069f\") " pod="openstack/ovsdbserver-sb-0" Dec 09 09:03:08 crc kubenswrapper[4786]: I1209 09:03:08.325905 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/dd1f4f6b-6ed9-46bc-bf88-bd17fcc6069f-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"dd1f4f6b-6ed9-46bc-bf88-bd17fcc6069f\") " pod="openstack/ovsdbserver-sb-0" Dec 09 09:03:08 crc kubenswrapper[4786]: I1209 09:03:08.327237 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd1f4f6b-6ed9-46bc-bf88-bd17fcc6069f-config\") pod \"ovsdbserver-sb-0\" (UID: \"dd1f4f6b-6ed9-46bc-bf88-bd17fcc6069f\") " pod="openstack/ovsdbserver-sb-0" Dec 09 09:03:08 crc kubenswrapper[4786]: I1209 09:03:08.327600 4786 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"dd1f4f6b-6ed9-46bc-bf88-bd17fcc6069f\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/ovsdbserver-sb-0" Dec 09 09:03:08 crc kubenswrapper[4786]: I1209 09:03:08.327957 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/dd1f4f6b-6ed9-46bc-bf88-bd17fcc6069f-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"dd1f4f6b-6ed9-46bc-bf88-bd17fcc6069f\") " pod="openstack/ovsdbserver-sb-0" Dec 09 09:03:08 crc kubenswrapper[4786]: I1209 09:03:08.365240 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd1f4f6b-6ed9-46bc-bf88-bd17fcc6069f-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"dd1f4f6b-6ed9-46bc-bf88-bd17fcc6069f\") " pod="openstack/ovsdbserver-sb-0" Dec 09 09:03:08 crc kubenswrapper[4786]: I1209 09:03:08.367363 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dd1f4f6b-6ed9-46bc-bf88-bd17fcc6069f-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"dd1f4f6b-6ed9-46bc-bf88-bd17fcc6069f\") " pod="openstack/ovsdbserver-sb-0" Dec 09 09:03:08 crc kubenswrapper[4786]: I1209 09:03:08.370053 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd1f4f6b-6ed9-46bc-bf88-bd17fcc6069f-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"dd1f4f6b-6ed9-46bc-bf88-bd17fcc6069f\") " pod="openstack/ovsdbserver-sb-0" Dec 09 09:03:08 crc kubenswrapper[4786]: I1209 09:03:08.380960 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2c56q\" (UniqueName: \"kubernetes.io/projected/dd1f4f6b-6ed9-46bc-bf88-bd17fcc6069f-kube-api-access-2c56q\") pod \"ovsdbserver-sb-0\" (UID: \"dd1f4f6b-6ed9-46bc-bf88-bd17fcc6069f\") " 
pod="openstack/ovsdbserver-sb-0" Dec 09 09:03:08 crc kubenswrapper[4786]: I1209 09:03:08.529497 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-sb-0\" (UID: \"dd1f4f6b-6ed9-46bc-bf88-bd17fcc6069f\") " pod="openstack/ovsdbserver-sb-0" Dec 09 09:03:08 crc kubenswrapper[4786]: I1209 09:03:08.575357 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 09 09:03:09 crc kubenswrapper[4786]: I1209 09:03:09.282804 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-vv7k4"] Dec 09 09:03:09 crc kubenswrapper[4786]: I1209 09:03:09.436254 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-lqv95"] Dec 09 09:03:09 crc kubenswrapper[4786]: I1209 09:03:09.593323 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-s6vc4"] Dec 09 09:03:09 crc kubenswrapper[4786]: I1209 09:03:09.594588 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-s6vc4" Dec 09 09:03:09 crc kubenswrapper[4786]: I1209 09:03:09.607566 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Dec 09 09:03:09 crc kubenswrapper[4786]: I1209 09:03:09.678334 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/283c4e6d-aae7-4c99-97dd-9da311e7efd3-combined-ca-bundle\") pod \"ovn-controller-metrics-s6vc4\" (UID: \"283c4e6d-aae7-4c99-97dd-9da311e7efd3\") " pod="openstack/ovn-controller-metrics-s6vc4" Dec 09 09:03:09 crc kubenswrapper[4786]: I1209 09:03:09.678407 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/283c4e6d-aae7-4c99-97dd-9da311e7efd3-ovn-rundir\") pod \"ovn-controller-metrics-s6vc4\" (UID: \"283c4e6d-aae7-4c99-97dd-9da311e7efd3\") " pod="openstack/ovn-controller-metrics-s6vc4" Dec 09 09:03:09 crc kubenswrapper[4786]: I1209 09:03:09.678471 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gmhs\" (UniqueName: \"kubernetes.io/projected/283c4e6d-aae7-4c99-97dd-9da311e7efd3-kube-api-access-7gmhs\") pod \"ovn-controller-metrics-s6vc4\" (UID: \"283c4e6d-aae7-4c99-97dd-9da311e7efd3\") " pod="openstack/ovn-controller-metrics-s6vc4" Dec 09 09:03:09 crc kubenswrapper[4786]: I1209 09:03:09.678492 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/283c4e6d-aae7-4c99-97dd-9da311e7efd3-ovs-rundir\") pod \"ovn-controller-metrics-s6vc4\" (UID: \"283c4e6d-aae7-4c99-97dd-9da311e7efd3\") " pod="openstack/ovn-controller-metrics-s6vc4" Dec 09 09:03:09 crc kubenswrapper[4786]: I1209 09:03:09.678515 4786 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/283c4e6d-aae7-4c99-97dd-9da311e7efd3-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-s6vc4\" (UID: \"283c4e6d-aae7-4c99-97dd-9da311e7efd3\") " pod="openstack/ovn-controller-metrics-s6vc4" Dec 09 09:03:09 crc kubenswrapper[4786]: I1209 09:03:09.678573 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/283c4e6d-aae7-4c99-97dd-9da311e7efd3-config\") pod \"ovn-controller-metrics-s6vc4\" (UID: \"283c4e6d-aae7-4c99-97dd-9da311e7efd3\") " pod="openstack/ovn-controller-metrics-s6vc4" Dec 09 09:03:09 crc kubenswrapper[4786]: I1209 09:03:09.714957 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-s6vc4"] Dec 09 09:03:09 crc kubenswrapper[4786]: I1209 09:03:09.779635 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/283c4e6d-aae7-4c99-97dd-9da311e7efd3-combined-ca-bundle\") pod \"ovn-controller-metrics-s6vc4\" (UID: \"283c4e6d-aae7-4c99-97dd-9da311e7efd3\") " pod="openstack/ovn-controller-metrics-s6vc4" Dec 09 09:03:09 crc kubenswrapper[4786]: I1209 09:03:09.779695 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/283c4e6d-aae7-4c99-97dd-9da311e7efd3-ovn-rundir\") pod \"ovn-controller-metrics-s6vc4\" (UID: \"283c4e6d-aae7-4c99-97dd-9da311e7efd3\") " pod="openstack/ovn-controller-metrics-s6vc4" Dec 09 09:03:09 crc kubenswrapper[4786]: I1209 09:03:09.779751 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gmhs\" (UniqueName: \"kubernetes.io/projected/283c4e6d-aae7-4c99-97dd-9da311e7efd3-kube-api-access-7gmhs\") pod \"ovn-controller-metrics-s6vc4\" (UID: 
\"283c4e6d-aae7-4c99-97dd-9da311e7efd3\") " pod="openstack/ovn-controller-metrics-s6vc4" Dec 09 09:03:09 crc kubenswrapper[4786]: I1209 09:03:09.779773 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/283c4e6d-aae7-4c99-97dd-9da311e7efd3-ovs-rundir\") pod \"ovn-controller-metrics-s6vc4\" (UID: \"283c4e6d-aae7-4c99-97dd-9da311e7efd3\") " pod="openstack/ovn-controller-metrics-s6vc4" Dec 09 09:03:09 crc kubenswrapper[4786]: I1209 09:03:09.779798 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/283c4e6d-aae7-4c99-97dd-9da311e7efd3-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-s6vc4\" (UID: \"283c4e6d-aae7-4c99-97dd-9da311e7efd3\") " pod="openstack/ovn-controller-metrics-s6vc4" Dec 09 09:03:09 crc kubenswrapper[4786]: I1209 09:03:09.779855 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/283c4e6d-aae7-4c99-97dd-9da311e7efd3-config\") pod \"ovn-controller-metrics-s6vc4\" (UID: \"283c4e6d-aae7-4c99-97dd-9da311e7efd3\") " pod="openstack/ovn-controller-metrics-s6vc4" Dec 09 09:03:09 crc kubenswrapper[4786]: I1209 09:03:09.780967 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/283c4e6d-aae7-4c99-97dd-9da311e7efd3-config\") pod \"ovn-controller-metrics-s6vc4\" (UID: \"283c4e6d-aae7-4c99-97dd-9da311e7efd3\") " pod="openstack/ovn-controller-metrics-s6vc4" Dec 09 09:03:09 crc kubenswrapper[4786]: I1209 09:03:09.798446 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/283c4e6d-aae7-4c99-97dd-9da311e7efd3-ovn-rundir\") pod \"ovn-controller-metrics-s6vc4\" (UID: \"283c4e6d-aae7-4c99-97dd-9da311e7efd3\") " pod="openstack/ovn-controller-metrics-s6vc4" Dec 09 
09:03:09 crc kubenswrapper[4786]: I1209 09:03:09.799084 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/283c4e6d-aae7-4c99-97dd-9da311e7efd3-ovs-rundir\") pod \"ovn-controller-metrics-s6vc4\" (UID: \"283c4e6d-aae7-4c99-97dd-9da311e7efd3\") " pod="openstack/ovn-controller-metrics-s6vc4" Dec 09 09:03:09 crc kubenswrapper[4786]: I1209 09:03:09.804186 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/283c4e6d-aae7-4c99-97dd-9da311e7efd3-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-s6vc4\" (UID: \"283c4e6d-aae7-4c99-97dd-9da311e7efd3\") " pod="openstack/ovn-controller-metrics-s6vc4" Dec 09 09:03:09 crc kubenswrapper[4786]: I1209 09:03:09.804320 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/283c4e6d-aae7-4c99-97dd-9da311e7efd3-combined-ca-bundle\") pod \"ovn-controller-metrics-s6vc4\" (UID: \"283c4e6d-aae7-4c99-97dd-9da311e7efd3\") " pod="openstack/ovn-controller-metrics-s6vc4" Dec 09 09:03:09 crc kubenswrapper[4786]: I1209 09:03:09.844221 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gmhs\" (UniqueName: \"kubernetes.io/projected/283c4e6d-aae7-4c99-97dd-9da311e7efd3-kube-api-access-7gmhs\") pod \"ovn-controller-metrics-s6vc4\" (UID: \"283c4e6d-aae7-4c99-97dd-9da311e7efd3\") " pod="openstack/ovn-controller-metrics-s6vc4" Dec 09 09:03:09 crc kubenswrapper[4786]: I1209 09:03:09.936165 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-s6vc4" Dec 09 09:03:10 crc kubenswrapper[4786]: I1209 09:03:10.021699 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vv7k4" event={"ID":"7cad500b-e392-4774-a524-02587da67379","Type":"ContainerStarted","Data":"48940640e76fc6146ed661041cbff2d9712a24b5ec841e9feea8eeafaa1d9f39"} Dec 09 09:03:10 crc kubenswrapper[4786]: I1209 09:03:10.062346 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 09 09:03:10 crc kubenswrapper[4786]: W1209 09:03:10.591619 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd1f4f6b_6ed9_46bc_bf88_bd17fcc6069f.slice/crio-bdca4397519328fdcd22352681e8042db698f732b22fe8373a3b6b7bcf26a571 WatchSource:0}: Error finding container bdca4397519328fdcd22352681e8042db698f732b22fe8373a3b6b7bcf26a571: Status 404 returned error can't find the container with id bdca4397519328fdcd22352681e8042db698f732b22fe8373a3b6b7bcf26a571 Dec 09 09:03:10 crc kubenswrapper[4786]: I1209 09:03:10.808718 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d9f64f497-j4ztp"] Dec 09 09:03:10 crc kubenswrapper[4786]: I1209 09:03:10.833359 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-66bd9c5465-lmxcr"] Dec 09 09:03:10 crc kubenswrapper[4786]: I1209 09:03:10.834904 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66bd9c5465-lmxcr" Dec 09 09:03:10 crc kubenswrapper[4786]: I1209 09:03:10.843394 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Dec 09 09:03:10 crc kubenswrapper[4786]: I1209 09:03:10.862215 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66bd9c5465-lmxcr"] Dec 09 09:03:11 crc kubenswrapper[4786]: I1209 09:03:11.003486 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fd5cc458-6470-474a-b93a-0f2e7a193380-ovsdbserver-sb\") pod \"dnsmasq-dns-66bd9c5465-lmxcr\" (UID: \"fd5cc458-6470-474a-b93a-0f2e7a193380\") " pod="openstack/dnsmasq-dns-66bd9c5465-lmxcr" Dec 09 09:03:11 crc kubenswrapper[4786]: I1209 09:03:11.003817 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd5cc458-6470-474a-b93a-0f2e7a193380-dns-svc\") pod \"dnsmasq-dns-66bd9c5465-lmxcr\" (UID: \"fd5cc458-6470-474a-b93a-0f2e7a193380\") " pod="openstack/dnsmasq-dns-66bd9c5465-lmxcr" Dec 09 09:03:11 crc kubenswrapper[4786]: I1209 09:03:11.003847 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd5cc458-6470-474a-b93a-0f2e7a193380-config\") pod \"dnsmasq-dns-66bd9c5465-lmxcr\" (UID: \"fd5cc458-6470-474a-b93a-0f2e7a193380\") " pod="openstack/dnsmasq-dns-66bd9c5465-lmxcr" Dec 09 09:03:11 crc kubenswrapper[4786]: I1209 09:03:11.004011 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpcqw\" (UniqueName: \"kubernetes.io/projected/fd5cc458-6470-474a-b93a-0f2e7a193380-kube-api-access-fpcqw\") pod \"dnsmasq-dns-66bd9c5465-lmxcr\" (UID: \"fd5cc458-6470-474a-b93a-0f2e7a193380\") " 
pod="openstack/dnsmasq-dns-66bd9c5465-lmxcr" Dec 09 09:03:11 crc kubenswrapper[4786]: I1209 09:03:11.005259 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 09 09:03:11 crc kubenswrapper[4786]: I1209 09:03:11.008621 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 09 09:03:11 crc kubenswrapper[4786]: I1209 09:03:11.012976 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-mt65s" Dec 09 09:03:11 crc kubenswrapper[4786]: I1209 09:03:11.013826 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Dec 09 09:03:11 crc kubenswrapper[4786]: I1209 09:03:11.014003 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Dec 09 09:03:11 crc kubenswrapper[4786]: I1209 09:03:11.014159 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Dec 09 09:03:11 crc kubenswrapper[4786]: I1209 09:03:11.049136 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 09 09:03:11 crc kubenswrapper[4786]: W1209 09:03:11.060852 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf80e8f4_2c7e_44a0_b2f4_3c0b652c6fb0.slice/crio-eb42eb048178d15425f7f06ef2916b8976f0003b0752593ae592fc0d6773f39c WatchSource:0}: Error finding container eb42eb048178d15425f7f06ef2916b8976f0003b0752593ae592fc0d6773f39c: Status 404 returned error can't find the container with id eb42eb048178d15425f7f06ef2916b8976f0003b0752593ae592fc0d6773f39c Dec 09 09:03:11 crc kubenswrapper[4786]: I1209 09:03:11.070204 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"dd1f4f6b-6ed9-46bc-bf88-bd17fcc6069f","Type":"ContainerStarted","Data":"bdca4397519328fdcd22352681e8042db698f732b22fe8373a3b6b7bcf26a571"} Dec 09 09:03:11 crc kubenswrapper[4786]: I1209 09:03:11.223797 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89d6bade-9172-4b73-8879-9f23d0834b93-config\") pod \"ovsdbserver-nb-0\" (UID: \"89d6bade-9172-4b73-8879-9f23d0834b93\") " pod="openstack/ovsdbserver-nb-0" Dec 09 09:03:11 crc kubenswrapper[4786]: I1209 09:03:11.223857 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fd5cc458-6470-474a-b93a-0f2e7a193380-ovsdbserver-sb\") pod \"dnsmasq-dns-66bd9c5465-lmxcr\" (UID: \"fd5cc458-6470-474a-b93a-0f2e7a193380\") " pod="openstack/dnsmasq-dns-66bd9c5465-lmxcr" Dec 09 09:03:11 crc kubenswrapper[4786]: I1209 09:03:11.223904 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/89d6bade-9172-4b73-8879-9f23d0834b93-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"89d6bade-9172-4b73-8879-9f23d0834b93\") " pod="openstack/ovsdbserver-nb-0" Dec 09 09:03:11 crc kubenswrapper[4786]: I1209 09:03:11.223936 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd5cc458-6470-474a-b93a-0f2e7a193380-dns-svc\") pod \"dnsmasq-dns-66bd9c5465-lmxcr\" (UID: \"fd5cc458-6470-474a-b93a-0f2e7a193380\") " pod="openstack/dnsmasq-dns-66bd9c5465-lmxcr" Dec 09 09:03:11 crc kubenswrapper[4786]: I1209 09:03:11.223975 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd5cc458-6470-474a-b93a-0f2e7a193380-config\") pod \"dnsmasq-dns-66bd9c5465-lmxcr\" (UID: \"fd5cc458-6470-474a-b93a-0f2e7a193380\") " 
pod="openstack/dnsmasq-dns-66bd9c5465-lmxcr" Dec 09 09:03:11 crc kubenswrapper[4786]: I1209 09:03:11.224009 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"89d6bade-9172-4b73-8879-9f23d0834b93\") " pod="openstack/ovsdbserver-nb-0" Dec 09 09:03:11 crc kubenswrapper[4786]: I1209 09:03:11.224041 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbjbl\" (UniqueName: \"kubernetes.io/projected/89d6bade-9172-4b73-8879-9f23d0834b93-kube-api-access-gbjbl\") pod \"ovsdbserver-nb-0\" (UID: \"89d6bade-9172-4b73-8879-9f23d0834b93\") " pod="openstack/ovsdbserver-nb-0" Dec 09 09:03:11 crc kubenswrapper[4786]: I1209 09:03:11.224102 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/89d6bade-9172-4b73-8879-9f23d0834b93-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"89d6bade-9172-4b73-8879-9f23d0834b93\") " pod="openstack/ovsdbserver-nb-0" Dec 09 09:03:11 crc kubenswrapper[4786]: I1209 09:03:11.224136 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpcqw\" (UniqueName: \"kubernetes.io/projected/fd5cc458-6470-474a-b93a-0f2e7a193380-kube-api-access-fpcqw\") pod \"dnsmasq-dns-66bd9c5465-lmxcr\" (UID: \"fd5cc458-6470-474a-b93a-0f2e7a193380\") " pod="openstack/dnsmasq-dns-66bd9c5465-lmxcr" Dec 09 09:03:11 crc kubenswrapper[4786]: I1209 09:03:11.224175 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89d6bade-9172-4b73-8879-9f23d0834b93-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"89d6bade-9172-4b73-8879-9f23d0834b93\") " pod="openstack/ovsdbserver-nb-0" Dec 09 09:03:11 crc 
kubenswrapper[4786]: I1209 09:03:11.224207 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/89d6bade-9172-4b73-8879-9f23d0834b93-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"89d6bade-9172-4b73-8879-9f23d0834b93\") " pod="openstack/ovsdbserver-nb-0" Dec 09 09:03:11 crc kubenswrapper[4786]: I1209 09:03:11.224231 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/89d6bade-9172-4b73-8879-9f23d0834b93-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"89d6bade-9172-4b73-8879-9f23d0834b93\") " pod="openstack/ovsdbserver-nb-0" Dec 09 09:03:11 crc kubenswrapper[4786]: I1209 09:03:11.225275 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fd5cc458-6470-474a-b93a-0f2e7a193380-ovsdbserver-sb\") pod \"dnsmasq-dns-66bd9c5465-lmxcr\" (UID: \"fd5cc458-6470-474a-b93a-0f2e7a193380\") " pod="openstack/dnsmasq-dns-66bd9c5465-lmxcr" Dec 09 09:03:11 crc kubenswrapper[4786]: I1209 09:03:11.226504 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd5cc458-6470-474a-b93a-0f2e7a193380-dns-svc\") pod \"dnsmasq-dns-66bd9c5465-lmxcr\" (UID: \"fd5cc458-6470-474a-b93a-0f2e7a193380\") " pod="openstack/dnsmasq-dns-66bd9c5465-lmxcr" Dec 09 09:03:11 crc kubenswrapper[4786]: I1209 09:03:11.227142 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd5cc458-6470-474a-b93a-0f2e7a193380-config\") pod \"dnsmasq-dns-66bd9c5465-lmxcr\" (UID: \"fd5cc458-6470-474a-b93a-0f2e7a193380\") " pod="openstack/dnsmasq-dns-66bd9c5465-lmxcr" Dec 09 09:03:11 crc kubenswrapper[4786]: I1209 09:03:11.294071 4786 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-fpcqw\" (UniqueName: \"kubernetes.io/projected/fd5cc458-6470-474a-b93a-0f2e7a193380-kube-api-access-fpcqw\") pod \"dnsmasq-dns-66bd9c5465-lmxcr\" (UID: \"fd5cc458-6470-474a-b93a-0f2e7a193380\") " pod="openstack/dnsmasq-dns-66bd9c5465-lmxcr" Dec 09 09:03:11 crc kubenswrapper[4786]: I1209 09:03:11.325624 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"89d6bade-9172-4b73-8879-9f23d0834b93\") " pod="openstack/ovsdbserver-nb-0" Dec 09 09:03:11 crc kubenswrapper[4786]: I1209 09:03:11.325667 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbjbl\" (UniqueName: \"kubernetes.io/projected/89d6bade-9172-4b73-8879-9f23d0834b93-kube-api-access-gbjbl\") pod \"ovsdbserver-nb-0\" (UID: \"89d6bade-9172-4b73-8879-9f23d0834b93\") " pod="openstack/ovsdbserver-nb-0" Dec 09 09:03:11 crc kubenswrapper[4786]: I1209 09:03:11.325716 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/89d6bade-9172-4b73-8879-9f23d0834b93-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"89d6bade-9172-4b73-8879-9f23d0834b93\") " pod="openstack/ovsdbserver-nb-0" Dec 09 09:03:11 crc kubenswrapper[4786]: I1209 09:03:11.325756 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89d6bade-9172-4b73-8879-9f23d0834b93-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"89d6bade-9172-4b73-8879-9f23d0834b93\") " pod="openstack/ovsdbserver-nb-0" Dec 09 09:03:11 crc kubenswrapper[4786]: I1209 09:03:11.325782 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/89d6bade-9172-4b73-8879-9f23d0834b93-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"89d6bade-9172-4b73-8879-9f23d0834b93\") " pod="openstack/ovsdbserver-nb-0" Dec 09 09:03:11 crc kubenswrapper[4786]: I1209 09:03:11.325799 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/89d6bade-9172-4b73-8879-9f23d0834b93-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"89d6bade-9172-4b73-8879-9f23d0834b93\") " pod="openstack/ovsdbserver-nb-0" Dec 09 09:03:11 crc kubenswrapper[4786]: I1209 09:03:11.325833 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89d6bade-9172-4b73-8879-9f23d0834b93-config\") pod \"ovsdbserver-nb-0\" (UID: \"89d6bade-9172-4b73-8879-9f23d0834b93\") " pod="openstack/ovsdbserver-nb-0" Dec 09 09:03:11 crc kubenswrapper[4786]: I1209 09:03:11.325882 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/89d6bade-9172-4b73-8879-9f23d0834b93-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"89d6bade-9172-4b73-8879-9f23d0834b93\") " pod="openstack/ovsdbserver-nb-0" Dec 09 09:03:11 crc kubenswrapper[4786]: I1209 09:03:11.326517 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/89d6bade-9172-4b73-8879-9f23d0834b93-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"89d6bade-9172-4b73-8879-9f23d0834b93\") " pod="openstack/ovsdbserver-nb-0" Dec 09 09:03:11 crc kubenswrapper[4786]: I1209 09:03:11.326854 4786 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"89d6bade-9172-4b73-8879-9f23d0834b93\") device mount path \"/mnt/openstack/pv07\"" 
pod="openstack/ovsdbserver-nb-0" Dec 09 09:03:11 crc kubenswrapper[4786]: I1209 09:03:11.330732 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89d6bade-9172-4b73-8879-9f23d0834b93-config\") pod \"ovsdbserver-nb-0\" (UID: \"89d6bade-9172-4b73-8879-9f23d0834b93\") " pod="openstack/ovsdbserver-nb-0" Dec 09 09:03:11 crc kubenswrapper[4786]: I1209 09:03:11.332108 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/89d6bade-9172-4b73-8879-9f23d0834b93-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"89d6bade-9172-4b73-8879-9f23d0834b93\") " pod="openstack/ovsdbserver-nb-0" Dec 09 09:03:11 crc kubenswrapper[4786]: I1209 09:03:11.335851 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/89d6bade-9172-4b73-8879-9f23d0834b93-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"89d6bade-9172-4b73-8879-9f23d0834b93\") " pod="openstack/ovsdbserver-nb-0" Dec 09 09:03:11 crc kubenswrapper[4786]: I1209 09:03:11.349674 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/89d6bade-9172-4b73-8879-9f23d0834b93-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"89d6bade-9172-4b73-8879-9f23d0834b93\") " pod="openstack/ovsdbserver-nb-0" Dec 09 09:03:11 crc kubenswrapper[4786]: I1209 09:03:11.354485 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbjbl\" (UniqueName: \"kubernetes.io/projected/89d6bade-9172-4b73-8879-9f23d0834b93-kube-api-access-gbjbl\") pod \"ovsdbserver-nb-0\" (UID: \"89d6bade-9172-4b73-8879-9f23d0834b93\") " pod="openstack/ovsdbserver-nb-0" Dec 09 09:03:11 crc kubenswrapper[4786]: I1209 09:03:11.360287 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/89d6bade-9172-4b73-8879-9f23d0834b93-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"89d6bade-9172-4b73-8879-9f23d0834b93\") " pod="openstack/ovsdbserver-nb-0" Dec 09 09:03:11 crc kubenswrapper[4786]: I1209 09:03:11.375343 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"89d6bade-9172-4b73-8879-9f23d0834b93\") " pod="openstack/ovsdbserver-nb-0" Dec 09 09:03:11 crc kubenswrapper[4786]: I1209 09:03:11.380992 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 09 09:03:11 crc kubenswrapper[4786]: I1209 09:03:11.476951 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66bd9c5465-lmxcr" Dec 09 09:03:12 crc kubenswrapper[4786]: I1209 09:03:12.127881 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-lqv95" event={"ID":"df80e8f4-2c7e-44a0-b2f4-3c0b652c6fb0","Type":"ContainerStarted","Data":"eb42eb048178d15425f7f06ef2916b8976f0003b0752593ae592fc0d6773f39c"} Dec 09 09:03:28 crc kubenswrapper[4786]: I1209 09:03:28.294757 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-hq5kf" podUID="6e123ec9-00ea-466d-b5f6-79cad587a2cc" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 09 09:03:38 crc kubenswrapper[4786]: I1209 09:03:38.033077 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-s6vc4"] Dec 09 09:03:38 crc kubenswrapper[4786]: E1209 09:03:38.159576 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = reading blob sha256:373ae7566a35eae17edec61b810960b5481512a01a5b4b16921dc97ee277a4a1: Get 
\"http://38.102.83.200:5001/v2/podified-master-centos10/openstack-ovn-base/blobs/sha256:373ae7566a35eae17edec61b810960b5481512a01a5b4b16921dc97ee277a4a1\": context canceled" image="38.102.83.200:5001/podified-master-centos10/openstack-ovn-base:watcher_latest" Dec 09 09:03:38 crc kubenswrapper[4786]: E1209 09:03:38.159671 4786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = reading blob sha256:373ae7566a35eae17edec61b810960b5481512a01a5b4b16921dc97ee277a4a1: Get \"http://38.102.83.200:5001/v2/podified-master-centos10/openstack-ovn-base/blobs/sha256:373ae7566a35eae17edec61b810960b5481512a01a5b4b16921dc97ee277a4a1\": context canceled" image="38.102.83.200:5001/podified-master-centos10/openstack-ovn-base:watcher_latest" Dec 09 09:03:38 crc kubenswrapper[4786]: E1209 09:03:38.159950 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:ovsdb-server-init,Image:38.102.83.200:5001/podified-master-centos10/openstack-ovn-base:watcher_latest,Command:[/usr/local/bin/container-scripts/init-ovsdb-server.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nf6h64fh668h7dh78h5fdh67ch5d7h5cchd8h97h68h687h654h94h5c9h667h5dfh87h5fh9chcfh599h64chd7h59h58bh89h5dfh5b8h56ch5d5q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-ovs,ReadOnly:false,MountPath:/etc/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log,ReadOnly:false,MountPath:/var/log/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib,ReadOnly:false,MountPath:/var/lib/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:s
cripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gwbjr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-ovs-lqv95_openstack(df80e8f4-2c7e-44a0-b2f4-3c0b652c6fb0): ErrImagePull: rpc error: code = Canceled desc = reading blob sha256:373ae7566a35eae17edec61b810960b5481512a01a5b4b16921dc97ee277a4a1: Get \"http://38.102.83.200:5001/v2/podified-master-centos10/openstack-ovn-base/blobs/sha256:373ae7566a35eae17edec61b810960b5481512a01a5b4b16921dc97ee277a4a1\": context canceled" logger="UnhandledError" Dec 09 09:03:38 crc kubenswrapper[4786]: E1209 09:03:38.161143 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ErrImagePull: \"rpc error: code = Canceled desc = reading blob sha256:373ae7566a35eae17edec61b810960b5481512a01a5b4b16921dc97ee277a4a1: Get \\\"http://38.102.83.200:5001/v2/podified-master-centos10/openstack-ovn-base/blobs/sha256:373ae7566a35eae17edec61b810960b5481512a01a5b4b16921dc97ee277a4a1\\\": context canceled\"" pod="openstack/ovn-controller-ovs-lqv95" podUID="df80e8f4-2c7e-44a0-b2f4-3c0b652c6fb0" Dec 09 09:03:38 crc 
kubenswrapper[4786]: W1209 09:03:38.208201 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod283c4e6d_aae7_4c99_97dd_9da311e7efd3.slice/crio-24bc6b7ccaa8ee84c63c76cc4956e39a4257dbd3068e80a6619f809386ea48de WatchSource:0}: Error finding container 24bc6b7ccaa8ee84c63c76cc4956e39a4257dbd3068e80a6619f809386ea48de: Status 404 returned error can't find the container with id 24bc6b7ccaa8ee84c63c76cc4956e39a4257dbd3068e80a6619f809386ea48de Dec 09 09:03:38 crc kubenswrapper[4786]: I1209 09:03:38.507228 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-s6vc4" event={"ID":"283c4e6d-aae7-4c99-97dd-9da311e7efd3","Type":"ContainerStarted","Data":"24bc6b7ccaa8ee84c63c76cc4956e39a4257dbd3068e80a6619f809386ea48de"} Dec 09 09:03:38 crc kubenswrapper[4786]: E1209 09:03:38.508296 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.200:5001/podified-master-centos10/openstack-ovn-base:watcher_latest\\\"\"" pod="openstack/ovn-controller-ovs-lqv95" podUID="df80e8f4-2c7e-44a0-b2f4-3c0b652c6fb0" Dec 09 09:03:39 crc kubenswrapper[4786]: E1209 09:03:39.392153 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.200:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest" Dec 09 09:03:39 crc kubenswrapper[4786]: E1209 09:03:39.392481 4786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.200:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest" Dec 09 09:03:39 crc kubenswrapper[4786]: E1209 09:03:39.392619 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:setup-container,Image:38.102.83.200:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qdjkf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,Read
inessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-notifications-server-0_openstack(6c5ae8f0-bfa8-4fe2-81c3-289021674179): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 09:03:39 crc kubenswrapper[4786]: E1209 09:03:39.393794 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-notifications-server-0" podUID="6c5ae8f0-bfa8-4fe2-81c3-289021674179" Dec 09 09:03:39 crc kubenswrapper[4786]: E1209 09:03:39.401021 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.200:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest" Dec 09 09:03:39 crc kubenswrapper[4786]: E1209 09:03:39.401074 4786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.200:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest" Dec 09 09:03:39 crc kubenswrapper[4786]: E1209 09:03:39.401194 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:38.102.83.200:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest,Command:[sh -c cp 
/tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9r7v7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext
{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(b0a91d0e-2d71-4fdc-8d68-953a12dc7f59): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 09:03:39 crc kubenswrapper[4786]: E1209 09:03:39.402378 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="b0a91d0e-2d71-4fdc-8d68-953a12dc7f59" Dec 09 09:03:39 crc kubenswrapper[4786]: E1209 09:03:39.517445 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.200:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest\\\"\"" pod="openstack/rabbitmq-notifications-server-0" podUID="6c5ae8f0-bfa8-4fe2-81c3-289021674179" Dec 09 09:03:39 crc kubenswrapper[4786]: E1209 09:03:39.520891 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.200:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest\\\"\"" pod="openstack/rabbitmq-server-0" podUID="b0a91d0e-2d71-4fdc-8d68-953a12dc7f59" Dec 09 09:03:43 crc kubenswrapper[4786]: I1209 09:03:43.289787 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 09 09:03:54 crc 
kubenswrapper[4786]: E1209 09:03:54.301773 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.200:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest" Dec 09 09:03:54 crc kubenswrapper[4786]: E1209 09:03:54.303021 4786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.200:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest" Dec 09 09:03:54 crc kubenswrapper[4786]: E1209 09:03:54.303327 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:38.102.83.200:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cbptq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(01efacfa-e002-4e0d-aa6b-91217baa22ca): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 09:03:54 crc 
kubenswrapper[4786]: E1209 09:03:54.304769 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="01efacfa-e002-4e0d-aa6b-91217baa22ca" Dec 09 09:03:54 crc kubenswrapper[4786]: E1209 09:03:54.308838 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.200:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Dec 09 09:03:54 crc kubenswrapper[4786]: E1209 09:03:54.308920 4786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.200:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Dec 09 09:03:54 crc kubenswrapper[4786]: E1209 09:03:54.309168 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.200:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5c7h56dh5cfh8bh54fhbbhf4h5b9hdch67fhd7h55fh55fh6ch9h548h54ch665h647h6h8fhd6h5dfh5cdh58bh577h66fh695h5fbh55h77h5fcq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v78km,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5949c6db59-hgk6z_openstack(0cdd4a6b-af46-4028-b5de-b67767db09fc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 09:03:54 crc kubenswrapper[4786]: E1209 09:03:54.310355 4786 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5949c6db59-hgk6z" podUID="0cdd4a6b-af46-4028-b5de-b67767db09fc" Dec 09 09:03:54 crc kubenswrapper[4786]: E1209 09:03:54.710170 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.200:5001/podified-master-centos10/openstack-neutron-server:watcher_latest\\\"\"" pod="openstack/dnsmasq-dns-5949c6db59-hgk6z" podUID="0cdd4a6b-af46-4028-b5de-b67767db09fc" Dec 09 09:03:54 crc kubenswrapper[4786]: E1209 09:03:54.710242 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.200:5001/podified-master-centos10/openstack-rabbitmq:watcher_latest\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="01efacfa-e002-4e0d-aa6b-91217baa22ca" Dec 09 09:03:54 crc kubenswrapper[4786]: I1209 09:03:54.988983 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 09:03:54 crc kubenswrapper[4786]: I1209 09:03:54.989127 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 09:03:56 crc kubenswrapper[4786]: E1209 09:03:56.060440 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="38.102.83.200:5001/podified-master-centos10/openstack-mariadb:watcher_latest" Dec 09 09:03:56 crc kubenswrapper[4786]: E1209 09:03:56.061613 4786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.200:5001/podified-master-centos10/openstack-mariadb:watcher_latest" Dec 09 09:03:56 crc kubenswrapper[4786]: E1209 09:03:56.061903 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:38.102.83.200:5001/podified-master-centos10/openstack-mariadb:watcher_latest,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:DB_ROOT_PASSWORD,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:DbRootPassword,Optional:nil,},},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:secrets,ReadOnly:true,MountPath:/var/lib/secrets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,Rea
dOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b85mr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(673b3525-c496-4268-b9f9-c37f5175efdc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 09:03:56 crc kubenswrapper[4786]: E1209 09:03:56.063141 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="673b3525-c496-4268-b9f9-c37f5175efdc" Dec 09 09:03:56 crc kubenswrapper[4786]: E1209 09:03:56.079871 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.200:5001/podified-master-centos10/openstack-mariadb:watcher_latest" Dec 09 09:03:56 crc kubenswrapper[4786]: E1209 09:03:56.079971 4786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="38.102.83.200:5001/podified-master-centos10/openstack-mariadb:watcher_latest" Dec 09 09:03:56 crc kubenswrapper[4786]: E1209 09:03:56.080164 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:38.102.83.200:5001/podified-master-centos10/openstack-mariadb:watcher_latest,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:DB_ROOT_PASSWORD,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:DbRootPassword,Optional:nil,},},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:secrets,ReadOnly:true,MountPath:/var/lib/secrets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q4n2q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Recurs
iveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(8066cc20-76cd-4a47-a662-fb77cd5cbe3b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 09:03:56 crc kubenswrapper[4786]: E1209 09:03:56.081390 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="8066cc20-76cd-4a47-a662-fb77cd5cbe3b" Dec 09 09:03:56 crc kubenswrapper[4786]: E1209 09:03:56.730247 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.200:5001/podified-master-centos10/openstack-mariadb:watcher_latest\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="8066cc20-76cd-4a47-a662-fb77cd5cbe3b" Dec 09 09:03:56 crc kubenswrapper[4786]: E1209 09:03:56.730323 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.200:5001/podified-master-centos10/openstack-mariadb:watcher_latest\\\"\"" pod="openstack/openstack-galera-0" podUID="673b3525-c496-4268-b9f9-c37f5175efdc" Dec 09 
09:03:57 crc kubenswrapper[4786]: E1209 09:03:57.327531 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.200:5001/podified-master-centos10/openstack-memcached:watcher_latest" Dec 09 09:03:57 crc kubenswrapper[4786]: E1209 09:03:57.328384 4786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.200:5001/podified-master-centos10/openstack-memcached:watcher_latest" Dec 09 09:03:57 crc kubenswrapper[4786]: E1209 09:03:57.328675 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:38.102.83.200:5001/podified-master-centos10/openstack-memcached:watcher_latest,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n666h676h55dh54fh5c6h5dchc7h66bh694h695h85h54ch65fh67hfdhdbh546h5d6hd4h685h5fdh5c5h6h686h678hc9hf9h5b6h5d9h5b6h59bh689q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config
-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z2z27,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(77b48dc5-f201-422e-9983-368555119d75): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 09:03:57 crc kubenswrapper[4786]: E1209 
09:03:57.330792 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="77b48dc5-f201-422e-9983-368555119d75" Dec 09 09:03:57 crc kubenswrapper[4786]: I1209 09:03:57.624793 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66bd9c5465-lmxcr"] Dec 09 09:03:57 crc kubenswrapper[4786]: E1209 09:03:57.675380 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.200:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest" Dec 09 09:03:57 crc kubenswrapper[4786]: E1209 09:03:57.675467 4786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.200:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest" Dec 09 09:03:57 crc kubenswrapper[4786]: E1209 09:03:57.675679 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-controller,Image:38.102.83.200:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest,Command:[ovn-controller --pidfile unix:/run/openvswitch/db.sock --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key 
--ca-cert=/etc/pki/tls/certs/ovndbca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nf6h64fh668h7dh78h5fdh67ch5d7h5cchd8h97h68h687h654h94h5c9h667h5dfh87h5fh9chcfh599h64chd7h59h58bh89h5dfh5b8h56ch5d5q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-ovn,ReadOnly:false,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log-ovn,ReadOnly:false,MountPath:/var/log/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vnfsk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_liveness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:ni
l,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_readiness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/share/ovn/scripts/ovn-ctl stop_controller],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-vv7k4_openstack(7cad500b-e392-4774-a524-02587da67379): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 09:03:57 crc kubenswrapper[4786]: E1209 09:03:57.676864 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-vv7k4" podUID="7cad500b-e392-4774-a524-02587da67379" Dec 09 09:03:57 crc kubenswrapper[4786]: I1209 09:03:57.735644 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" 
event={"ID":"89d6bade-9172-4b73-8879-9f23d0834b93","Type":"ContainerStarted","Data":"995f7d693b3fb5a2f27b6f5b97cd92ca4ab48c59704e826194dbfd1a544c9c5d"} Dec 09 09:03:57 crc kubenswrapper[4786]: E1209 09:03:57.737804 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.200:5001/podified-master-centos10/openstack-ovn-controller:watcher_latest\\\"\"" pod="openstack/ovn-controller-vv7k4" podUID="7cad500b-e392-4774-a524-02587da67379" Dec 09 09:03:57 crc kubenswrapper[4786]: E1209 09:03:57.737960 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.200:5001/podified-master-centos10/openstack-memcached:watcher_latest\\\"\"" pod="openstack/memcached-0" podUID="77b48dc5-f201-422e-9983-368555119d75" Dec 09 09:03:59 crc kubenswrapper[4786]: W1209 09:03:59.698238 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfd5cc458_6470_474a_b93a_0f2e7a193380.slice/crio-c7ea330615ccb34e24a6268b2ee0a782e054f9de20feaadced2f252e87bc54a3 WatchSource:0}: Error finding container c7ea330615ccb34e24a6268b2ee0a782e054f9de20feaadced2f252e87bc54a3: Status 404 returned error can't find the container with id c7ea330615ccb34e24a6268b2ee0a782e054f9de20feaadced2f252e87bc54a3 Dec 09 09:03:59 crc kubenswrapper[4786]: I1209 09:03:59.751984 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66bd9c5465-lmxcr" event={"ID":"fd5cc458-6470-474a-b93a-0f2e7a193380","Type":"ContainerStarted","Data":"c7ea330615ccb34e24a6268b2ee0a782e054f9de20feaadced2f252e87bc54a3"} Dec 09 09:03:59 crc kubenswrapper[4786]: E1209 09:03:59.782715 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="38.102.83.200:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Dec 09 09:03:59 crc kubenswrapper[4786]: E1209 09:03:59.782777 4786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.200:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Dec 09 09:03:59 crc kubenswrapper[4786]: E1209 09:03:59.782914 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.200:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q8845,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsU
ser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-8469f54845-pxjx2_openstack(eabff6c7-3d99-4392-956b-2e687c890c08): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 09:03:59 crc kubenswrapper[4786]: E1209 09:03:59.784111 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-8469f54845-pxjx2" podUID="eabff6c7-3d99-4392-956b-2e687c890c08" Dec 09 09:03:59 crc kubenswrapper[4786]: E1209 09:03:59.798723 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.200:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Dec 09 09:03:59 crc kubenswrapper[4786]: E1209 09:03:59.798791 4786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.200:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Dec 09 09:03:59 crc kubenswrapper[4786]: E1209 09:03:59.799039 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.200:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 
--log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qq9sf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5fccf8cfdc-lrx7d_openstack(ef1296a6-369d-4b56-add2-08afba4b5efa): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 09:03:59 crc 
kubenswrapper[4786]: E1209 09:03:59.800547 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5fccf8cfdc-lrx7d" podUID="ef1296a6-369d-4b56-add2-08afba4b5efa" Dec 09 09:04:00 crc kubenswrapper[4786]: E1209 09:04:00.037222 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.200:5001/podified-master-centos10/openstack-ovn-sb-db-server:watcher_latest" Dec 09 09:04:00 crc kubenswrapper[4786]: E1209 09:04:00.037286 4786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.200:5001/podified-master-centos10/openstack-ovn-sb-db-server:watcher_latest" Dec 09 09:04:00 crc kubenswrapper[4786]: E1209 09:04:00.037552 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:ovsdbserver-sb,Image:38.102.83.200:5001/podified-master-centos10/openstack-ovn-sb-db-server:watcher_latest,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n6ch7bh565h55ch598h578hcbh5d6hd9h5h55bhd7h5ffhffh56ch545h679h64bh5c4hfdh98hcbh5cdhb8h57dhd9hd8h5dbh9fhfbh699hbcq,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-sb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2c56q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecA
ction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-sb-0_openstack(dd1f4f6b-6ed9-46bc-bf88-bd17fcc6069f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 09:04:00 crc kubenswrapper[4786]: E1209 09:04:00.048635 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.200:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Dec 09 09:04:00 crc 
kubenswrapper[4786]: E1209 09:04:00.048700 4786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.200:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Dec 09 09:04:00 crc kubenswrapper[4786]: E1209 09:04:00.048944 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.200:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n45wx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[AL
L],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6c45c59f45-x5hnh_openstack(d280b9a4-91b5-4d8e-9a9d-f11bb91566dd): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 09:04:00 crc kubenswrapper[4786]: E1209 09:04:00.050261 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-6c45c59f45-x5hnh" podUID="d280b9a4-91b5-4d8e-9a9d-f11bb91566dd" Dec 09 09:04:00 crc kubenswrapper[4786]: E1209 09:04:00.895116 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.200:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Dec 09 09:04:00 crc kubenswrapper[4786]: E1209 09:04:00.895839 4786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.200:5001/podified-master-centos10/openstack-neutron-server:watcher_latest" Dec 09 09:04:00 crc kubenswrapper[4786]: E1209 09:04:00.896045 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:38.102.83.200:5001/podified-master-centos10/openstack-neutron-server:watcher_latest,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug 
--bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-962dc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-7d9f64f497-j4ztp_openstack(09918a13-9fef-4347-b2cf-e6d1aa9bb29a): ErrImagePull: rpc error: code = Canceled desc = copying config: 
context canceled" logger="UnhandledError" Dec 09 09:04:00 crc kubenswrapper[4786]: E1209 09:04:00.897294 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-7d9f64f497-j4ztp" podUID="09918a13-9fef-4347-b2cf-e6d1aa9bb29a" Dec 09 09:04:01 crc kubenswrapper[4786]: E1209 09:04:01.672699 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified" Dec 09 09:04:01 crc kubenswrapper[4786]: E1209 09:04:01.672877 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstack-network-exporter,Image:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,Command:[/app/openstack-network-exporter],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPENSTACK_NETWORK_EXPORTER_YAML,Value:/etc/config/openstack-network-exporter.yaml,ValueFrom:nil,},EnvVar{Name:CONFIG_HASH,Value:ndch67bh688h56bh64h94h54fh64dh58dh546h59ch64ch8bh578h95h5bdh684h58h5b8h66ch54chdfh559h7ch557h7fh5c8h56chc4h59ch5ffh68cq,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovs-rundir,ReadOnly:true,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-rundir,ReadOnly:true,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovnmetrics.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},V
olumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovnmetrics.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7gmhs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-metrics-s6vc4_openstack(283c4e6d-aae7-4c99-97dd-9da311e7efd3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 09:04:01 crc kubenswrapper[4786]: E1209 09:04:01.674083 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-metrics-s6vc4" podUID="283c4e6d-aae7-4c99-97dd-9da311e7efd3" Dec 09 09:04:01 crc kubenswrapper[4786]: E1209 
09:04:01.769134 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"" pod="openstack/ovn-controller-metrics-s6vc4" podUID="283c4e6d-aae7-4c99-97dd-9da311e7efd3" Dec 09 09:04:02 crc kubenswrapper[4786]: I1209 09:04:02.618038 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fccf8cfdc-lrx7d" Dec 09 09:04:02 crc kubenswrapper[4786]: I1209 09:04:02.624837 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8469f54845-pxjx2" Dec 09 09:04:02 crc kubenswrapper[4786]: I1209 09:04:02.634706 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6c45c59f45-x5hnh" Dec 09 09:04:02 crc kubenswrapper[4786]: I1209 09:04:02.647891 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d9f64f497-j4ztp" Dec 09 09:04:02 crc kubenswrapper[4786]: I1209 09:04:02.744851 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d280b9a4-91b5-4d8e-9a9d-f11bb91566dd-config\") pod \"d280b9a4-91b5-4d8e-9a9d-f11bb91566dd\" (UID: \"d280b9a4-91b5-4d8e-9a9d-f11bb91566dd\") " Dec 09 09:04:02 crc kubenswrapper[4786]: I1209 09:04:02.744923 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef1296a6-369d-4b56-add2-08afba4b5efa-dns-svc\") pod \"ef1296a6-369d-4b56-add2-08afba4b5efa\" (UID: \"ef1296a6-369d-4b56-add2-08afba4b5efa\") " Dec 09 09:04:02 crc kubenswrapper[4786]: I1209 09:04:02.744970 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d280b9a4-91b5-4d8e-9a9d-f11bb91566dd-dns-svc\") pod \"d280b9a4-91b5-4d8e-9a9d-f11bb91566dd\" (UID: \"d280b9a4-91b5-4d8e-9a9d-f11bb91566dd\") " Dec 09 09:04:02 crc kubenswrapper[4786]: I1209 09:04:02.745000 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qq9sf\" (UniqueName: \"kubernetes.io/projected/ef1296a6-369d-4b56-add2-08afba4b5efa-kube-api-access-qq9sf\") pod \"ef1296a6-369d-4b56-add2-08afba4b5efa\" (UID: \"ef1296a6-369d-4b56-add2-08afba4b5efa\") " Dec 09 09:04:02 crc kubenswrapper[4786]: I1209 09:04:02.745052 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8845\" (UniqueName: \"kubernetes.io/projected/eabff6c7-3d99-4392-956b-2e687c890c08-kube-api-access-q8845\") pod \"eabff6c7-3d99-4392-956b-2e687c890c08\" (UID: \"eabff6c7-3d99-4392-956b-2e687c890c08\") " Dec 09 09:04:02 crc kubenswrapper[4786]: I1209 09:04:02.745125 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/09918a13-9fef-4347-b2cf-e6d1aa9bb29a-config\") pod \"09918a13-9fef-4347-b2cf-e6d1aa9bb29a\" (UID: \"09918a13-9fef-4347-b2cf-e6d1aa9bb29a\") " Dec 09 09:04:02 crc kubenswrapper[4786]: I1209 09:04:02.745155 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-962dc\" (UniqueName: \"kubernetes.io/projected/09918a13-9fef-4347-b2cf-e6d1aa9bb29a-kube-api-access-962dc\") pod \"09918a13-9fef-4347-b2cf-e6d1aa9bb29a\" (UID: \"09918a13-9fef-4347-b2cf-e6d1aa9bb29a\") " Dec 09 09:04:02 crc kubenswrapper[4786]: I1209 09:04:02.745190 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n45wx\" (UniqueName: \"kubernetes.io/projected/d280b9a4-91b5-4d8e-9a9d-f11bb91566dd-kube-api-access-n45wx\") pod \"d280b9a4-91b5-4d8e-9a9d-f11bb91566dd\" (UID: \"d280b9a4-91b5-4d8e-9a9d-f11bb91566dd\") " Dec 09 09:04:02 crc kubenswrapper[4786]: I1209 09:04:02.745234 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09918a13-9fef-4347-b2cf-e6d1aa9bb29a-dns-svc\") pod \"09918a13-9fef-4347-b2cf-e6d1aa9bb29a\" (UID: \"09918a13-9fef-4347-b2cf-e6d1aa9bb29a\") " Dec 09 09:04:02 crc kubenswrapper[4786]: I1209 09:04:02.745269 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef1296a6-369d-4b56-add2-08afba4b5efa-config\") pod \"ef1296a6-369d-4b56-add2-08afba4b5efa\" (UID: \"ef1296a6-369d-4b56-add2-08afba4b5efa\") " Dec 09 09:04:02 crc kubenswrapper[4786]: I1209 09:04:02.745315 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eabff6c7-3d99-4392-956b-2e687c890c08-config\") pod \"eabff6c7-3d99-4392-956b-2e687c890c08\" (UID: \"eabff6c7-3d99-4392-956b-2e687c890c08\") " Dec 09 09:04:02 crc kubenswrapper[4786]: I1209 09:04:02.745661 4786 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d280b9a4-91b5-4d8e-9a9d-f11bb91566dd-config" (OuterVolumeSpecName: "config") pod "d280b9a4-91b5-4d8e-9a9d-f11bb91566dd" (UID: "d280b9a4-91b5-4d8e-9a9d-f11bb91566dd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:04:02 crc kubenswrapper[4786]: I1209 09:04:02.746203 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09918a13-9fef-4347-b2cf-e6d1aa9bb29a-config" (OuterVolumeSpecName: "config") pod "09918a13-9fef-4347-b2cf-e6d1aa9bb29a" (UID: "09918a13-9fef-4347-b2cf-e6d1aa9bb29a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:04:02 crc kubenswrapper[4786]: I1209 09:04:02.746218 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d280b9a4-91b5-4d8e-9a9d-f11bb91566dd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d280b9a4-91b5-4d8e-9a9d-f11bb91566dd" (UID: "d280b9a4-91b5-4d8e-9a9d-f11bb91566dd"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:04:02 crc kubenswrapper[4786]: I1209 09:04:02.746452 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09918a13-9fef-4347-b2cf-e6d1aa9bb29a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "09918a13-9fef-4347-b2cf-e6d1aa9bb29a" (UID: "09918a13-9fef-4347-b2cf-e6d1aa9bb29a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:04:02 crc kubenswrapper[4786]: I1209 09:04:02.746318 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef1296a6-369d-4b56-add2-08afba4b5efa-config" (OuterVolumeSpecName: "config") pod "ef1296a6-369d-4b56-add2-08afba4b5efa" (UID: "ef1296a6-369d-4b56-add2-08afba4b5efa"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:04:02 crc kubenswrapper[4786]: I1209 09:04:02.746773 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d280b9a4-91b5-4d8e-9a9d-f11bb91566dd-config\") on node \"crc\" DevicePath \"\"" Dec 09 09:04:02 crc kubenswrapper[4786]: I1209 09:04:02.746840 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eabff6c7-3d99-4392-956b-2e687c890c08-config" (OuterVolumeSpecName: "config") pod "eabff6c7-3d99-4392-956b-2e687c890c08" (UID: "eabff6c7-3d99-4392-956b-2e687c890c08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:04:02 crc kubenswrapper[4786]: I1209 09:04:02.746854 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef1296a6-369d-4b56-add2-08afba4b5efa-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ef1296a6-369d-4b56-add2-08afba4b5efa" (UID: "ef1296a6-369d-4b56-add2-08afba4b5efa"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:04:02 crc kubenswrapper[4786]: I1209 09:04:02.750677 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09918a13-9fef-4347-b2cf-e6d1aa9bb29a-kube-api-access-962dc" (OuterVolumeSpecName: "kube-api-access-962dc") pod "09918a13-9fef-4347-b2cf-e6d1aa9bb29a" (UID: "09918a13-9fef-4347-b2cf-e6d1aa9bb29a"). InnerVolumeSpecName "kube-api-access-962dc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:04:02 crc kubenswrapper[4786]: I1209 09:04:02.750748 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d280b9a4-91b5-4d8e-9a9d-f11bb91566dd-kube-api-access-n45wx" (OuterVolumeSpecName: "kube-api-access-n45wx") pod "d280b9a4-91b5-4d8e-9a9d-f11bb91566dd" (UID: "d280b9a4-91b5-4d8e-9a9d-f11bb91566dd"). 
InnerVolumeSpecName "kube-api-access-n45wx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:04:02 crc kubenswrapper[4786]: I1209 09:04:02.750777 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eabff6c7-3d99-4392-956b-2e687c890c08-kube-api-access-q8845" (OuterVolumeSpecName: "kube-api-access-q8845") pod "eabff6c7-3d99-4392-956b-2e687c890c08" (UID: "eabff6c7-3d99-4392-956b-2e687c890c08"). InnerVolumeSpecName "kube-api-access-q8845". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:04:02 crc kubenswrapper[4786]: I1209 09:04:02.750800 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef1296a6-369d-4b56-add2-08afba4b5efa-kube-api-access-qq9sf" (OuterVolumeSpecName: "kube-api-access-qq9sf") pod "ef1296a6-369d-4b56-add2-08afba4b5efa" (UID: "ef1296a6-369d-4b56-add2-08afba4b5efa"). InnerVolumeSpecName "kube-api-access-qq9sf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:04:02 crc kubenswrapper[4786]: I1209 09:04:02.774772 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fccf8cfdc-lrx7d" Dec 09 09:04:02 crc kubenswrapper[4786]: I1209 09:04:02.774771 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fccf8cfdc-lrx7d" event={"ID":"ef1296a6-369d-4b56-add2-08afba4b5efa","Type":"ContainerDied","Data":"23ed2c0a5e6c30925b1d72934f106578f6b9bb3063b0fd1cb76cad7b2fae255c"} Dec 09 09:04:02 crc kubenswrapper[4786]: I1209 09:04:02.776595 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6c45c59f45-x5hnh" event={"ID":"d280b9a4-91b5-4d8e-9a9d-f11bb91566dd","Type":"ContainerDied","Data":"27978fec90c31aeaa0a1cf0b449e9a1faca49403be69b3e85ffba3cfc32eb900"} Dec 09 09:04:02 crc kubenswrapper[4786]: I1209 09:04:02.776705 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6c45c59f45-x5hnh" Dec 09 09:04:02 crc kubenswrapper[4786]: I1209 09:04:02.779047 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d9f64f497-j4ztp" event={"ID":"09918a13-9fef-4347-b2cf-e6d1aa9bb29a","Type":"ContainerDied","Data":"d0b2eb4d65da30be5172599a9a5d7c7cc71f4bdffa610a582e53ee6004248477"} Dec 09 09:04:02 crc kubenswrapper[4786]: I1209 09:04:02.779063 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d9f64f497-j4ztp" Dec 09 09:04:02 crc kubenswrapper[4786]: I1209 09:04:02.782514 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8469f54845-pxjx2" event={"ID":"eabff6c7-3d99-4392-956b-2e687c890c08","Type":"ContainerDied","Data":"486132097efa2f8dc6c7a9838e93e4d6707de40ec0eee9dd15eb738f02306967"} Dec 09 09:04:02 crc kubenswrapper[4786]: I1209 09:04:02.782641 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8469f54845-pxjx2" Dec 09 09:04:02 crc kubenswrapper[4786]: I1209 09:04:02.848857 4786 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09918a13-9fef-4347-b2cf-e6d1aa9bb29a-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 09:04:02 crc kubenswrapper[4786]: I1209 09:04:02.848888 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef1296a6-369d-4b56-add2-08afba4b5efa-config\") on node \"crc\" DevicePath \"\"" Dec 09 09:04:02 crc kubenswrapper[4786]: I1209 09:04:02.848897 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eabff6c7-3d99-4392-956b-2e687c890c08-config\") on node \"crc\" DevicePath \"\"" Dec 09 09:04:02 crc kubenswrapper[4786]: I1209 09:04:02.848906 4786 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ef1296a6-369d-4b56-add2-08afba4b5efa-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 09:04:02 crc kubenswrapper[4786]: I1209 09:04:02.848915 4786 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d280b9a4-91b5-4d8e-9a9d-f11bb91566dd-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 09:04:02 crc kubenswrapper[4786]: I1209 09:04:02.848924 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qq9sf\" (UniqueName: \"kubernetes.io/projected/ef1296a6-369d-4b56-add2-08afba4b5efa-kube-api-access-qq9sf\") on node \"crc\" DevicePath \"\"" Dec 09 09:04:02 crc kubenswrapper[4786]: I1209 09:04:02.848936 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8845\" (UniqueName: \"kubernetes.io/projected/eabff6c7-3d99-4392-956b-2e687c890c08-kube-api-access-q8845\") on node \"crc\" DevicePath \"\"" Dec 09 09:04:02 crc kubenswrapper[4786]: I1209 09:04:02.848945 4786 reconciler_common.go:293] 
"Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09918a13-9fef-4347-b2cf-e6d1aa9bb29a-config\") on node \"crc\" DevicePath \"\"" Dec 09 09:04:02 crc kubenswrapper[4786]: I1209 09:04:02.848954 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-962dc\" (UniqueName: \"kubernetes.io/projected/09918a13-9fef-4347-b2cf-e6d1aa9bb29a-kube-api-access-962dc\") on node \"crc\" DevicePath \"\"" Dec 09 09:04:02 crc kubenswrapper[4786]: I1209 09:04:02.848963 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n45wx\" (UniqueName: \"kubernetes.io/projected/d280b9a4-91b5-4d8e-9a9d-f11bb91566dd-kube-api-access-n45wx\") on node \"crc\" DevicePath \"\"" Dec 09 09:04:02 crc kubenswrapper[4786]: I1209 09:04:02.868249 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d9f64f497-j4ztp"] Dec 09 09:04:02 crc kubenswrapper[4786]: I1209 09:04:02.902068 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d9f64f497-j4ztp"] Dec 09 09:04:02 crc kubenswrapper[4786]: I1209 09:04:02.928121 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6c45c59f45-x5hnh"] Dec 09 09:04:02 crc kubenswrapper[4786]: I1209 09:04:02.938230 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6c45c59f45-x5hnh"] Dec 09 09:04:02 crc kubenswrapper[4786]: I1209 09:04:02.962492 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fccf8cfdc-lrx7d"] Dec 09 09:04:02 crc kubenswrapper[4786]: I1209 09:04:02.972080 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5fccf8cfdc-lrx7d"] Dec 09 09:04:02 crc kubenswrapper[4786]: I1209 09:04:02.989508 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8469f54845-pxjx2"] Dec 09 09:04:02 crc kubenswrapper[4786]: I1209 09:04:02.995518 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-8469f54845-pxjx2"] Dec 09 09:04:03 crc kubenswrapper[4786]: I1209 09:04:03.199703 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09918a13-9fef-4347-b2cf-e6d1aa9bb29a" path="/var/lib/kubelet/pods/09918a13-9fef-4347-b2cf-e6d1aa9bb29a/volumes" Dec 09 09:04:03 crc kubenswrapper[4786]: I1209 09:04:03.200594 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d280b9a4-91b5-4d8e-9a9d-f11bb91566dd" path="/var/lib/kubelet/pods/d280b9a4-91b5-4d8e-9a9d-f11bb91566dd/volumes" Dec 09 09:04:03 crc kubenswrapper[4786]: I1209 09:04:03.201071 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eabff6c7-3d99-4392-956b-2e687c890c08" path="/var/lib/kubelet/pods/eabff6c7-3d99-4392-956b-2e687c890c08/volumes" Dec 09 09:04:03 crc kubenswrapper[4786]: I1209 09:04:03.201518 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef1296a6-369d-4b56-add2-08afba4b5efa" path="/var/lib/kubelet/pods/ef1296a6-369d-4b56-add2-08afba4b5efa/volumes" Dec 09 09:04:04 crc kubenswrapper[4786]: E1209 09:04:04.310708 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Dec 09 09:04:04 crc kubenswrapper[4786]: E1209 09:04:04.310794 4786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Dec 09 09:04:04 crc kubenswrapper[4786]: E1209 09:04:04.311035 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods 
--namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xlxb5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(22607f78-204a-4fb8-82d6-53d3f878a984): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying 
config: context canceled" logger="UnhandledError" Dec 09 09:04:04 crc kubenswrapper[4786]: E1209 09:04:04.312346 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="22607f78-204a-4fb8-82d6-53d3f878a984" Dec 09 09:04:04 crc kubenswrapper[4786]: E1209 09:04:04.621081 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-sb-0" podUID="dd1f4f6b-6ed9-46bc-bf88-bd17fcc6069f" Dec 09 09:04:04 crc kubenswrapper[4786]: I1209 09:04:04.802915 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6c7ace95-341e-4733-af87-8c256ae0d9e6","Type":"ContainerStarted","Data":"2054da1a162171dcb6211e2d02c2495d39c9f333dabfa9adf2ca237eb1bb505e"} Dec 09 09:04:04 crc kubenswrapper[4786]: I1209 09:04:04.806227 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"89d6bade-9172-4b73-8879-9f23d0834b93","Type":"ContainerStarted","Data":"c761c8a37c7b3f64e04d7d13325cc5a542ccd5aa481509d5fed954418e63cbfd"} Dec 09 09:04:04 crc kubenswrapper[4786]: I1209 09:04:04.806277 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"89d6bade-9172-4b73-8879-9f23d0834b93","Type":"ContainerStarted","Data":"319cf73a45ab050310469d1cef5b09a4f0725305bb8eb00b74176bdd27ce3d65"} Dec 09 09:04:04 crc kubenswrapper[4786]: I1209 09:04:04.809165 4786 generic.go:334] "Generic (PLEG): container finished" podID="fd5cc458-6470-474a-b93a-0f2e7a193380" containerID="e5b4749f98a29b2d608c976b30d7541bf24bed4b9e1b050646d5d8fe56686156" exitCode=0 Dec 09 09:04:04 crc 
kubenswrapper[4786]: I1209 09:04:04.809263 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66bd9c5465-lmxcr" event={"ID":"fd5cc458-6470-474a-b93a-0f2e7a193380","Type":"ContainerDied","Data":"e5b4749f98a29b2d608c976b30d7541bf24bed4b9e1b050646d5d8fe56686156"} Dec 09 09:04:04 crc kubenswrapper[4786]: I1209 09:04:04.811233 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"dd1f4f6b-6ed9-46bc-bf88-bd17fcc6069f","Type":"ContainerStarted","Data":"8209a23bb8055ebca81392ca61964fb7bf5fec4b58f973df051658be86a0c3ae"} Dec 09 09:04:04 crc kubenswrapper[4786]: E1209 09:04:04.812905 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.200:5001/podified-master-centos10/openstack-ovn-sb-db-server:watcher_latest\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="dd1f4f6b-6ed9-46bc-bf88-bd17fcc6069f" Dec 09 09:04:04 crc kubenswrapper[4786]: I1209 09:04:04.819088 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-lqv95" event={"ID":"df80e8f4-2c7e-44a0-b2f4-3c0b652c6fb0","Type":"ContainerStarted","Data":"a784faba91d84c074f7733339ee73d2816e99e34d8dfc133419310e585095d21"} Dec 09 09:04:04 crc kubenswrapper[4786]: E1209 09:04:04.820668 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="22607f78-204a-4fb8-82d6-53d3f878a984" Dec 09 09:04:04 crc kubenswrapper[4786]: I1209 09:04:04.883288 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=49.107484968 podStartE2EDuration="55.883268309s" podCreationTimestamp="2025-12-09 09:03:09 +0000 UTC" 
firstStartedPulling="2025-12-09 09:03:57.540011211 +0000 UTC m=+1203.423632437" lastFinishedPulling="2025-12-09 09:04:04.315794552 +0000 UTC m=+1210.199415778" observedRunningTime="2025-12-09 09:04:04.880732206 +0000 UTC m=+1210.764353432" watchObservedRunningTime="2025-12-09 09:04:04.883268309 +0000 UTC m=+1210.766889535" Dec 09 09:04:05 crc kubenswrapper[4786]: I1209 09:04:05.382286 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Dec 09 09:04:05 crc kubenswrapper[4786]: I1209 09:04:05.827285 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66bd9c5465-lmxcr" event={"ID":"fd5cc458-6470-474a-b93a-0f2e7a193380","Type":"ContainerStarted","Data":"e60e06e578767a9c248401d2dfe33b99150ef19d7f3797287dd75a27e0ce3410"} Dec 09 09:04:05 crc kubenswrapper[4786]: I1209 09:04:05.827885 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-66bd9c5465-lmxcr" Dec 09 09:04:05 crc kubenswrapper[4786]: I1209 09:04:05.829451 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"6c5ae8f0-bfa8-4fe2-81c3-289021674179","Type":"ContainerStarted","Data":"4b7b81266399ea17629bd265894be29c468feb5d1b87d3c08d064b32abc1e0c8"} Dec 09 09:04:05 crc kubenswrapper[4786]: I1209 09:04:05.832441 4786 generic.go:334] "Generic (PLEG): container finished" podID="df80e8f4-2c7e-44a0-b2f4-3c0b652c6fb0" containerID="a784faba91d84c074f7733339ee73d2816e99e34d8dfc133419310e585095d21" exitCode=0 Dec 09 09:04:05 crc kubenswrapper[4786]: I1209 09:04:05.832530 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-lqv95" event={"ID":"df80e8f4-2c7e-44a0-b2f4-3c0b652c6fb0","Type":"ContainerDied","Data":"a784faba91d84c074f7733339ee73d2816e99e34d8dfc133419310e585095d21"} Dec 09 09:04:05 crc kubenswrapper[4786]: I1209 09:04:05.835089 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-server-0" event={"ID":"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59","Type":"ContainerStarted","Data":"0183621f323d271a1e61e5f6fcb16986cb78a47043bbcb9cb6b8ac0a03aa11b0"} Dec 09 09:04:05 crc kubenswrapper[4786]: E1209 09:04:05.836997 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.200:5001/podified-master-centos10/openstack-ovn-sb-db-server:watcher_latest\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="dd1f4f6b-6ed9-46bc-bf88-bd17fcc6069f" Dec 09 09:04:05 crc kubenswrapper[4786]: I1209 09:04:05.850228 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-66bd9c5465-lmxcr" podStartSLOduration=51.239859003 podStartE2EDuration="55.850202588s" podCreationTimestamp="2025-12-09 09:03:10 +0000 UTC" firstStartedPulling="2025-12-09 09:03:59.707970309 +0000 UTC m=+1205.591591535" lastFinishedPulling="2025-12-09 09:04:04.318313894 +0000 UTC m=+1210.201935120" observedRunningTime="2025-12-09 09:04:05.84463216 +0000 UTC m=+1211.728253416" watchObservedRunningTime="2025-12-09 09:04:05.850202588 +0000 UTC m=+1211.733823814" Dec 09 09:04:06 crc kubenswrapper[4786]: I1209 09:04:06.382132 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Dec 09 09:04:06 crc kubenswrapper[4786]: I1209 09:04:06.847448 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-lqv95" event={"ID":"df80e8f4-2c7e-44a0-b2f4-3c0b652c6fb0","Type":"ContainerStarted","Data":"4c942d1889e6298573f6fe6d9122869287b749ba8e89549b1ff45673a444e209"} Dec 09 09:04:06 crc kubenswrapper[4786]: I1209 09:04:06.847845 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-lqv95" event={"ID":"df80e8f4-2c7e-44a0-b2f4-3c0b652c6fb0","Type":"ContainerStarted","Data":"7fa7992831e845a85825ccf0cd0123be0d09c7aeff7ddae910f4c3279486d0cf"} 
Dec 09 09:04:06 crc kubenswrapper[4786]: I1209 09:04:06.849085 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-lqv95" Dec 09 09:04:06 crc kubenswrapper[4786]: I1209 09:04:06.849169 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-lqv95" Dec 09 09:04:06 crc kubenswrapper[4786]: I1209 09:04:06.873351 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-lqv95" podStartSLOduration=7.710711897 podStartE2EDuration="59.873326537s" podCreationTimestamp="2025-12-09 09:03:07 +0000 UTC" firstStartedPulling="2025-12-09 09:03:11.065345092 +0000 UTC m=+1156.948966318" lastFinishedPulling="2025-12-09 09:04:03.227959722 +0000 UTC m=+1209.111580958" observedRunningTime="2025-12-09 09:04:06.87063282 +0000 UTC m=+1212.754254046" watchObservedRunningTime="2025-12-09 09:04:06.873326537 +0000 UTC m=+1212.756947753" Dec 09 09:04:08 crc kubenswrapper[4786]: I1209 09:04:08.417035 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Dec 09 09:04:09 crc kubenswrapper[4786]: I1209 09:04:09.882299 4786 generic.go:334] "Generic (PLEG): container finished" podID="0cdd4a6b-af46-4028-b5de-b67767db09fc" containerID="6d58f2ddf0ea55e7a36763e475a4f28122f417d76bd5832079c7468da639bb5b" exitCode=0 Dec 09 09:04:09 crc kubenswrapper[4786]: I1209 09:04:09.882365 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5949c6db59-hgk6z" event={"ID":"0cdd4a6b-af46-4028-b5de-b67767db09fc","Type":"ContainerDied","Data":"6d58f2ddf0ea55e7a36763e475a4f28122f417d76bd5832079c7468da639bb5b"} Dec 09 09:04:09 crc kubenswrapper[4786]: I1209 09:04:09.886809 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"01efacfa-e002-4e0d-aa6b-91217baa22ca","Type":"ContainerStarted","Data":"707b5d59710fe68771b66a3ff78dabeef8338b89f750f1e932482d67d5771632"} Dec 09 09:04:10 crc kubenswrapper[4786]: I1209 09:04:10.898983 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"8066cc20-76cd-4a47-a662-fb77cd5cbe3b","Type":"ContainerStarted","Data":"d57aaab81dbb3633e62c515a1e05c3f7cadaecf2a9261bb55156292380d66535"} Dec 09 09:04:10 crc kubenswrapper[4786]: I1209 09:04:10.903897 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5949c6db59-hgk6z" event={"ID":"0cdd4a6b-af46-4028-b5de-b67767db09fc","Type":"ContainerStarted","Data":"35718d49bcc62f77f28871154c62e6adc6f32b2ec08f9860cc55b448535f827e"} Dec 09 09:04:10 crc kubenswrapper[4786]: I1209 09:04:10.904352 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5949c6db59-hgk6z" Dec 09 09:04:10 crc kubenswrapper[4786]: I1209 09:04:10.906468 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"673b3525-c496-4268-b9f9-c37f5175efdc","Type":"ContainerStarted","Data":"fd3c1374bfa6e3f7c1b5343ee8b8b490d9ddae4ab3d909cc3d9d51eb199ebcb7"} Dec 09 09:04:10 crc kubenswrapper[4786]: I1209 09:04:10.909156 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"77b48dc5-f201-422e-9983-368555119d75","Type":"ContainerStarted","Data":"c03f5f4fc09910b43ddead49461363b167c0a05c404bf37fb068837fa5e89bdd"} Dec 09 09:04:10 crc kubenswrapper[4786]: I1209 09:04:10.909488 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Dec 09 09:04:11 crc kubenswrapper[4786]: I1209 09:04:11.014912 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.698222705 podStartE2EDuration="1m11.014885805s" podCreationTimestamp="2025-12-09 09:03:00 +0000 
UTC" firstStartedPulling="2025-12-09 09:03:01.59037907 +0000 UTC m=+1147.474000286" lastFinishedPulling="2025-12-09 09:04:09.90704216 +0000 UTC m=+1215.790663386" observedRunningTime="2025-12-09 09:04:11.00822852 +0000 UTC m=+1216.891849766" watchObservedRunningTime="2025-12-09 09:04:11.014885805 +0000 UTC m=+1216.898507051" Dec 09 09:04:11 crc kubenswrapper[4786]: I1209 09:04:11.050072 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5949c6db59-hgk6z" podStartSLOduration=-9223371960.80473 podStartE2EDuration="1m16.050045435s" podCreationTimestamp="2025-12-09 09:02:55 +0000 UTC" firstStartedPulling="2025-12-09 09:02:57.133798114 +0000 UTC m=+1143.017419330" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:04:11.03086812 +0000 UTC m=+1216.914489366" watchObservedRunningTime="2025-12-09 09:04:11.050045435 +0000 UTC m=+1216.933666701" Dec 09 09:04:11 crc kubenswrapper[4786]: I1209 09:04:11.429786 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Dec 09 09:04:11 crc kubenswrapper[4786]: I1209 09:04:11.479704 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-66bd9c5465-lmxcr" Dec 09 09:04:11 crc kubenswrapper[4786]: I1209 09:04:11.561118 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5949c6db59-hgk6z"] Dec 09 09:04:11 crc kubenswrapper[4786]: I1209 09:04:11.772844 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cdb9ff747-rqmsj"] Dec 09 09:04:11 crc kubenswrapper[4786]: I1209 09:04:11.774755 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cdb9ff747-rqmsj" Dec 09 09:04:11 crc kubenswrapper[4786]: I1209 09:04:11.780047 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Dec 09 09:04:11 crc kubenswrapper[4786]: I1209 09:04:11.784416 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cdb9ff747-rqmsj"] Dec 09 09:04:11 crc kubenswrapper[4786]: I1209 09:04:11.918720 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vv7k4" event={"ID":"7cad500b-e392-4774-a524-02587da67379","Type":"ContainerStarted","Data":"4cd0a8aa757ae985b231b21dead603c4581e0a84c5caaceaa7377f624ebbbabc"} Dec 09 09:04:11 crc kubenswrapper[4786]: I1209 09:04:11.936023 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6b6dc383-3ee2-4b70-b43c-7b862f235163-ovsdbserver-sb\") pod \"dnsmasq-dns-7cdb9ff747-rqmsj\" (UID: \"6b6dc383-3ee2-4b70-b43c-7b862f235163\") " pod="openstack/dnsmasq-dns-7cdb9ff747-rqmsj" Dec 09 09:04:11 crc kubenswrapper[4786]: I1209 09:04:11.936118 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b6dc383-3ee2-4b70-b43c-7b862f235163-config\") pod \"dnsmasq-dns-7cdb9ff747-rqmsj\" (UID: \"6b6dc383-3ee2-4b70-b43c-7b862f235163\") " pod="openstack/dnsmasq-dns-7cdb9ff747-rqmsj" Dec 09 09:04:11 crc kubenswrapper[4786]: I1209 09:04:11.936346 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6b6dc383-3ee2-4b70-b43c-7b862f235163-ovsdbserver-nb\") pod \"dnsmasq-dns-7cdb9ff747-rqmsj\" (UID: \"6b6dc383-3ee2-4b70-b43c-7b862f235163\") " pod="openstack/dnsmasq-dns-7cdb9ff747-rqmsj" Dec 09 09:04:11 crc kubenswrapper[4786]: I1209 09:04:11.936586 4786 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccp9q\" (UniqueName: \"kubernetes.io/projected/6b6dc383-3ee2-4b70-b43c-7b862f235163-kube-api-access-ccp9q\") pod \"dnsmasq-dns-7cdb9ff747-rqmsj\" (UID: \"6b6dc383-3ee2-4b70-b43c-7b862f235163\") " pod="openstack/dnsmasq-dns-7cdb9ff747-rqmsj" Dec 09 09:04:11 crc kubenswrapper[4786]: I1209 09:04:11.936628 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b6dc383-3ee2-4b70-b43c-7b862f235163-dns-svc\") pod \"dnsmasq-dns-7cdb9ff747-rqmsj\" (UID: \"6b6dc383-3ee2-4b70-b43c-7b862f235163\") " pod="openstack/dnsmasq-dns-7cdb9ff747-rqmsj" Dec 09 09:04:11 crc kubenswrapper[4786]: I1209 09:04:11.939871 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-vv7k4" podStartSLOduration=3.978985266 podStartE2EDuration="1m4.939846485s" podCreationTimestamp="2025-12-09 09:03:07 +0000 UTC" firstStartedPulling="2025-12-09 09:03:09.371466041 +0000 UTC m=+1155.255087267" lastFinishedPulling="2025-12-09 09:04:10.33232727 +0000 UTC m=+1216.215948486" observedRunningTime="2025-12-09 09:04:11.935515878 +0000 UTC m=+1217.819137114" watchObservedRunningTime="2025-12-09 09:04:11.939846485 +0000 UTC m=+1217.823467711" Dec 09 09:04:12 crc kubenswrapper[4786]: I1209 09:04:12.038008 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6b6dc383-3ee2-4b70-b43c-7b862f235163-ovsdbserver-sb\") pod \"dnsmasq-dns-7cdb9ff747-rqmsj\" (UID: \"6b6dc383-3ee2-4b70-b43c-7b862f235163\") " pod="openstack/dnsmasq-dns-7cdb9ff747-rqmsj" Dec 09 09:04:12 crc kubenswrapper[4786]: I1209 09:04:12.038164 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b6dc383-3ee2-4b70-b43c-7b862f235163-config\") 
pod \"dnsmasq-dns-7cdb9ff747-rqmsj\" (UID: \"6b6dc383-3ee2-4b70-b43c-7b862f235163\") " pod="openstack/dnsmasq-dns-7cdb9ff747-rqmsj" Dec 09 09:04:12 crc kubenswrapper[4786]: I1209 09:04:12.038217 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6b6dc383-3ee2-4b70-b43c-7b862f235163-ovsdbserver-nb\") pod \"dnsmasq-dns-7cdb9ff747-rqmsj\" (UID: \"6b6dc383-3ee2-4b70-b43c-7b862f235163\") " pod="openstack/dnsmasq-dns-7cdb9ff747-rqmsj" Dec 09 09:04:12 crc kubenswrapper[4786]: I1209 09:04:12.038330 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccp9q\" (UniqueName: \"kubernetes.io/projected/6b6dc383-3ee2-4b70-b43c-7b862f235163-kube-api-access-ccp9q\") pod \"dnsmasq-dns-7cdb9ff747-rqmsj\" (UID: \"6b6dc383-3ee2-4b70-b43c-7b862f235163\") " pod="openstack/dnsmasq-dns-7cdb9ff747-rqmsj" Dec 09 09:04:12 crc kubenswrapper[4786]: I1209 09:04:12.038354 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b6dc383-3ee2-4b70-b43c-7b862f235163-dns-svc\") pod \"dnsmasq-dns-7cdb9ff747-rqmsj\" (UID: \"6b6dc383-3ee2-4b70-b43c-7b862f235163\") " pod="openstack/dnsmasq-dns-7cdb9ff747-rqmsj" Dec 09 09:04:12 crc kubenswrapper[4786]: I1209 09:04:12.039086 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b6dc383-3ee2-4b70-b43c-7b862f235163-dns-svc\") pod \"dnsmasq-dns-7cdb9ff747-rqmsj\" (UID: \"6b6dc383-3ee2-4b70-b43c-7b862f235163\") " pod="openstack/dnsmasq-dns-7cdb9ff747-rqmsj" Dec 09 09:04:12 crc kubenswrapper[4786]: I1209 09:04:12.039168 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6b6dc383-3ee2-4b70-b43c-7b862f235163-ovsdbserver-sb\") pod \"dnsmasq-dns-7cdb9ff747-rqmsj\" (UID: \"6b6dc383-3ee2-4b70-b43c-7b862f235163\") 
" pod="openstack/dnsmasq-dns-7cdb9ff747-rqmsj" Dec 09 09:04:12 crc kubenswrapper[4786]: I1209 09:04:12.039881 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6b6dc383-3ee2-4b70-b43c-7b862f235163-ovsdbserver-nb\") pod \"dnsmasq-dns-7cdb9ff747-rqmsj\" (UID: \"6b6dc383-3ee2-4b70-b43c-7b862f235163\") " pod="openstack/dnsmasq-dns-7cdb9ff747-rqmsj" Dec 09 09:04:12 crc kubenswrapper[4786]: I1209 09:04:12.040559 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b6dc383-3ee2-4b70-b43c-7b862f235163-config\") pod \"dnsmasq-dns-7cdb9ff747-rqmsj\" (UID: \"6b6dc383-3ee2-4b70-b43c-7b862f235163\") " pod="openstack/dnsmasq-dns-7cdb9ff747-rqmsj" Dec 09 09:04:12 crc kubenswrapper[4786]: I1209 09:04:12.061242 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccp9q\" (UniqueName: \"kubernetes.io/projected/6b6dc383-3ee2-4b70-b43c-7b862f235163-kube-api-access-ccp9q\") pod \"dnsmasq-dns-7cdb9ff747-rqmsj\" (UID: \"6b6dc383-3ee2-4b70-b43c-7b862f235163\") " pod="openstack/dnsmasq-dns-7cdb9ff747-rqmsj" Dec 09 09:04:12 crc kubenswrapper[4786]: I1209 09:04:12.140696 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cdb9ff747-rqmsj" Dec 09 09:04:12 crc kubenswrapper[4786]: I1209 09:04:12.639215 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-vv7k4" Dec 09 09:04:12 crc kubenswrapper[4786]: I1209 09:04:12.663587 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cdb9ff747-rqmsj"] Dec 09 09:04:12 crc kubenswrapper[4786]: I1209 09:04:12.931253 4786 generic.go:334] "Generic (PLEG): container finished" podID="6b6dc383-3ee2-4b70-b43c-7b862f235163" containerID="553f3cf085c45d3fb7f0468456139608d403a6e17604320137458ba8e39ceeba" exitCode=0 Dec 09 09:04:12 crc kubenswrapper[4786]: I1209 09:04:12.931438 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cdb9ff747-rqmsj" event={"ID":"6b6dc383-3ee2-4b70-b43c-7b862f235163","Type":"ContainerDied","Data":"553f3cf085c45d3fb7f0468456139608d403a6e17604320137458ba8e39ceeba"} Dec 09 09:04:12 crc kubenswrapper[4786]: I1209 09:04:12.931526 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cdb9ff747-rqmsj" event={"ID":"6b6dc383-3ee2-4b70-b43c-7b862f235163","Type":"ContainerStarted","Data":"e68d0ea2ae657cc107b35c742332a0a5bca4233da1240ba900d20c5d7788315b"} Dec 09 09:04:12 crc kubenswrapper[4786]: I1209 09:04:12.933798 4786 generic.go:334] "Generic (PLEG): container finished" podID="6c7ace95-341e-4733-af87-8c256ae0d9e6" containerID="2054da1a162171dcb6211e2d02c2495d39c9f333dabfa9adf2ca237eb1bb505e" exitCode=0 Dec 09 09:04:12 crc kubenswrapper[4786]: I1209 09:04:12.933887 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6c7ace95-341e-4733-af87-8c256ae0d9e6","Type":"ContainerDied","Data":"2054da1a162171dcb6211e2d02c2495d39c9f333dabfa9adf2ca237eb1bb505e"} Dec 09 09:04:12 crc kubenswrapper[4786]: I1209 09:04:12.934206 4786 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-5949c6db59-hgk6z" podUID="0cdd4a6b-af46-4028-b5de-b67767db09fc" containerName="dnsmasq-dns" containerID="cri-o://35718d49bcc62f77f28871154c62e6adc6f32b2ec08f9860cc55b448535f827e" gracePeriod=10 Dec 09 09:04:13 crc kubenswrapper[4786]: I1209 09:04:13.483817 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5949c6db59-hgk6z" Dec 09 09:04:13 crc kubenswrapper[4786]: I1209 09:04:13.574267 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0cdd4a6b-af46-4028-b5de-b67767db09fc-dns-svc\") pod \"0cdd4a6b-af46-4028-b5de-b67767db09fc\" (UID: \"0cdd4a6b-af46-4028-b5de-b67767db09fc\") " Dec 09 09:04:13 crc kubenswrapper[4786]: I1209 09:04:13.574771 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v78km\" (UniqueName: \"kubernetes.io/projected/0cdd4a6b-af46-4028-b5de-b67767db09fc-kube-api-access-v78km\") pod \"0cdd4a6b-af46-4028-b5de-b67767db09fc\" (UID: \"0cdd4a6b-af46-4028-b5de-b67767db09fc\") " Dec 09 09:04:13 crc kubenswrapper[4786]: I1209 09:04:13.574809 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0cdd4a6b-af46-4028-b5de-b67767db09fc-config\") pod \"0cdd4a6b-af46-4028-b5de-b67767db09fc\" (UID: \"0cdd4a6b-af46-4028-b5de-b67767db09fc\") " Dec 09 09:04:13 crc kubenswrapper[4786]: I1209 09:04:13.581848 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cdd4a6b-af46-4028-b5de-b67767db09fc-kube-api-access-v78km" (OuterVolumeSpecName: "kube-api-access-v78km") pod "0cdd4a6b-af46-4028-b5de-b67767db09fc" (UID: "0cdd4a6b-af46-4028-b5de-b67767db09fc"). InnerVolumeSpecName "kube-api-access-v78km". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:04:13 crc kubenswrapper[4786]: I1209 09:04:13.619515 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0cdd4a6b-af46-4028-b5de-b67767db09fc-config" (OuterVolumeSpecName: "config") pod "0cdd4a6b-af46-4028-b5de-b67767db09fc" (UID: "0cdd4a6b-af46-4028-b5de-b67767db09fc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:04:13 crc kubenswrapper[4786]: I1209 09:04:13.623026 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0cdd4a6b-af46-4028-b5de-b67767db09fc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0cdd4a6b-af46-4028-b5de-b67767db09fc" (UID: "0cdd4a6b-af46-4028-b5de-b67767db09fc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:04:13 crc kubenswrapper[4786]: I1209 09:04:13.677524 4786 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0cdd4a6b-af46-4028-b5de-b67767db09fc-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 09:04:13 crc kubenswrapper[4786]: I1209 09:04:13.677561 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v78km\" (UniqueName: \"kubernetes.io/projected/0cdd4a6b-af46-4028-b5de-b67767db09fc-kube-api-access-v78km\") on node \"crc\" DevicePath \"\"" Dec 09 09:04:13 crc kubenswrapper[4786]: I1209 09:04:13.677571 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0cdd4a6b-af46-4028-b5de-b67767db09fc-config\") on node \"crc\" DevicePath \"\"" Dec 09 09:04:13 crc kubenswrapper[4786]: I1209 09:04:13.948470 4786 generic.go:334] "Generic (PLEG): container finished" podID="0cdd4a6b-af46-4028-b5de-b67767db09fc" containerID="35718d49bcc62f77f28871154c62e6adc6f32b2ec08f9860cc55b448535f827e" exitCode=0 Dec 09 09:04:13 crc kubenswrapper[4786]: I1209 
09:04:13.948556 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5949c6db59-hgk6z" Dec 09 09:04:13 crc kubenswrapper[4786]: I1209 09:04:13.948619 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5949c6db59-hgk6z" event={"ID":"0cdd4a6b-af46-4028-b5de-b67767db09fc","Type":"ContainerDied","Data":"35718d49bcc62f77f28871154c62e6adc6f32b2ec08f9860cc55b448535f827e"} Dec 09 09:04:13 crc kubenswrapper[4786]: I1209 09:04:13.948724 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5949c6db59-hgk6z" event={"ID":"0cdd4a6b-af46-4028-b5de-b67767db09fc","Type":"ContainerDied","Data":"cd4ade2787a7261f97a0a58b1579c837bec0efc6856f597877a7ab5e424c0866"} Dec 09 09:04:13 crc kubenswrapper[4786]: I1209 09:04:13.948811 4786 scope.go:117] "RemoveContainer" containerID="35718d49bcc62f77f28871154c62e6adc6f32b2ec08f9860cc55b448535f827e" Dec 09 09:04:13 crc kubenswrapper[4786]: I1209 09:04:13.951182 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cdb9ff747-rqmsj" event={"ID":"6b6dc383-3ee2-4b70-b43c-7b862f235163","Type":"ContainerStarted","Data":"cef57bdf5f7f6e1f86768251f3f0ba436f25d7fca6658345981f3623f162af8d"} Dec 09 09:04:13 crc kubenswrapper[4786]: I1209 09:04:13.951321 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7cdb9ff747-rqmsj" Dec 09 09:04:13 crc kubenswrapper[4786]: I1209 09:04:13.979289 4786 scope.go:117] "RemoveContainer" containerID="6d58f2ddf0ea55e7a36763e475a4f28122f417d76bd5832079c7468da639bb5b" Dec 09 09:04:14 crc kubenswrapper[4786]: I1209 09:04:14.000090 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7cdb9ff747-rqmsj" podStartSLOduration=3.000029637 podStartE2EDuration="3.000029637s" podCreationTimestamp="2025-12-09 09:04:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:04:13.974382732 +0000 UTC m=+1219.858003968" watchObservedRunningTime="2025-12-09 09:04:14.000029637 +0000 UTC m=+1219.883650873" Dec 09 09:04:14 crc kubenswrapper[4786]: I1209 09:04:14.026520 4786 scope.go:117] "RemoveContainer" containerID="35718d49bcc62f77f28871154c62e6adc6f32b2ec08f9860cc55b448535f827e" Dec 09 09:04:14 crc kubenswrapper[4786]: E1209 09:04:14.027832 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35718d49bcc62f77f28871154c62e6adc6f32b2ec08f9860cc55b448535f827e\": container with ID starting with 35718d49bcc62f77f28871154c62e6adc6f32b2ec08f9860cc55b448535f827e not found: ID does not exist" containerID="35718d49bcc62f77f28871154c62e6adc6f32b2ec08f9860cc55b448535f827e" Dec 09 09:04:14 crc kubenswrapper[4786]: I1209 09:04:14.027882 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35718d49bcc62f77f28871154c62e6adc6f32b2ec08f9860cc55b448535f827e"} err="failed to get container status \"35718d49bcc62f77f28871154c62e6adc6f32b2ec08f9860cc55b448535f827e\": rpc error: code = NotFound desc = could not find container \"35718d49bcc62f77f28871154c62e6adc6f32b2ec08f9860cc55b448535f827e\": container with ID starting with 35718d49bcc62f77f28871154c62e6adc6f32b2ec08f9860cc55b448535f827e not found: ID does not exist" Dec 09 09:04:14 crc kubenswrapper[4786]: I1209 09:04:14.027912 4786 scope.go:117] "RemoveContainer" containerID="6d58f2ddf0ea55e7a36763e475a4f28122f417d76bd5832079c7468da639bb5b" Dec 09 09:04:14 crc kubenswrapper[4786]: E1209 09:04:14.028929 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d58f2ddf0ea55e7a36763e475a4f28122f417d76bd5832079c7468da639bb5b\": container with ID starting with 6d58f2ddf0ea55e7a36763e475a4f28122f417d76bd5832079c7468da639bb5b not found: 
ID does not exist" containerID="6d58f2ddf0ea55e7a36763e475a4f28122f417d76bd5832079c7468da639bb5b" Dec 09 09:04:14 crc kubenswrapper[4786]: I1209 09:04:14.029001 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d58f2ddf0ea55e7a36763e475a4f28122f417d76bd5832079c7468da639bb5b"} err="failed to get container status \"6d58f2ddf0ea55e7a36763e475a4f28122f417d76bd5832079c7468da639bb5b\": rpc error: code = NotFound desc = could not find container \"6d58f2ddf0ea55e7a36763e475a4f28122f417d76bd5832079c7468da639bb5b\": container with ID starting with 6d58f2ddf0ea55e7a36763e475a4f28122f417d76bd5832079c7468da639bb5b not found: ID does not exist" Dec 09 09:04:14 crc kubenswrapper[4786]: I1209 09:04:14.029838 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5949c6db59-hgk6z"] Dec 09 09:04:14 crc kubenswrapper[4786]: I1209 09:04:14.044595 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5949c6db59-hgk6z"] Dec 09 09:04:15 crc kubenswrapper[4786]: I1209 09:04:15.200381 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cdd4a6b-af46-4028-b5de-b67767db09fc" path="/var/lib/kubelet/pods/0cdd4a6b-af46-4028-b5de-b67767db09fc/volumes" Dec 09 09:04:15 crc kubenswrapper[4786]: I1209 09:04:15.555458 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Dec 09 09:04:15 crc kubenswrapper[4786]: I1209 09:04:15.972913 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-s6vc4" event={"ID":"283c4e6d-aae7-4c99-97dd-9da311e7efd3","Type":"ContainerStarted","Data":"fc3b8f026268f5dddaf07961371915cd0fd834a2da0c04e49d0c39b315e695b9"} Dec 09 09:04:15 crc kubenswrapper[4786]: I1209 09:04:15.993306 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-s6vc4" podStartSLOduration=-9223371969.861496 
podStartE2EDuration="1m6.993279363s" podCreationTimestamp="2025-12-09 09:03:09 +0000 UTC" firstStartedPulling="2025-12-09 09:03:38.210358439 +0000 UTC m=+1184.093979665" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:04:15.992905814 +0000 UTC m=+1221.876527080" watchObservedRunningTime="2025-12-09 09:04:15.993279363 +0000 UTC m=+1221.876900589" Dec 09 09:04:19 crc kubenswrapper[4786]: I1209 09:04:19.006868 4786 generic.go:334] "Generic (PLEG): container finished" podID="673b3525-c496-4268-b9f9-c37f5175efdc" containerID="fd3c1374bfa6e3f7c1b5343ee8b8b490d9ddae4ab3d909cc3d9d51eb199ebcb7" exitCode=0 Dec 09 09:04:19 crc kubenswrapper[4786]: I1209 09:04:19.006891 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"673b3525-c496-4268-b9f9-c37f5175efdc","Type":"ContainerDied","Data":"fd3c1374bfa6e3f7c1b5343ee8b8b490d9ddae4ab3d909cc3d9d51eb199ebcb7"} Dec 09 09:04:19 crc kubenswrapper[4786]: I1209 09:04:19.009554 4786 generic.go:334] "Generic (PLEG): container finished" podID="8066cc20-76cd-4a47-a662-fb77cd5cbe3b" containerID="d57aaab81dbb3633e62c515a1e05c3f7cadaecf2a9261bb55156292380d66535" exitCode=0 Dec 09 09:04:19 crc kubenswrapper[4786]: I1209 09:04:19.009590 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"8066cc20-76cd-4a47-a662-fb77cd5cbe3b","Type":"ContainerDied","Data":"d57aaab81dbb3633e62c515a1e05c3f7cadaecf2a9261bb55156292380d66535"} Dec 09 09:04:22 crc kubenswrapper[4786]: I1209 09:04:22.142891 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7cdb9ff747-rqmsj" Dec 09 09:04:22 crc kubenswrapper[4786]: I1209 09:04:22.200891 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66bd9c5465-lmxcr"] Dec 09 09:04:22 crc kubenswrapper[4786]: I1209 09:04:22.201207 4786 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-66bd9c5465-lmxcr" podUID="fd5cc458-6470-474a-b93a-0f2e7a193380" containerName="dnsmasq-dns" containerID="cri-o://e60e06e578767a9c248401d2dfe33b99150ef19d7f3797287dd75a27e0ce3410" gracePeriod=10 Dec 09 09:04:22 crc kubenswrapper[4786]: I1209 09:04:22.843693 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-775897b69-78frg"] Dec 09 09:04:22 crc kubenswrapper[4786]: E1209 09:04:22.844308 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cdd4a6b-af46-4028-b5de-b67767db09fc" containerName="init" Dec 09 09:04:22 crc kubenswrapper[4786]: I1209 09:04:22.844345 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cdd4a6b-af46-4028-b5de-b67767db09fc" containerName="init" Dec 09 09:04:22 crc kubenswrapper[4786]: E1209 09:04:22.844370 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cdd4a6b-af46-4028-b5de-b67767db09fc" containerName="dnsmasq-dns" Dec 09 09:04:22 crc kubenswrapper[4786]: I1209 09:04:22.844379 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cdd4a6b-af46-4028-b5de-b67767db09fc" containerName="dnsmasq-dns" Dec 09 09:04:22 crc kubenswrapper[4786]: I1209 09:04:22.844641 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cdd4a6b-af46-4028-b5de-b67767db09fc" containerName="dnsmasq-dns" Dec 09 09:04:22 crc kubenswrapper[4786]: I1209 09:04:22.845964 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-775897b69-78frg" Dec 09 09:04:22 crc kubenswrapper[4786]: I1209 09:04:22.872591 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-775897b69-78frg"] Dec 09 09:04:22 crc kubenswrapper[4786]: I1209 09:04:22.889238 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8cb6939-c479-4114-8191-73a18f5f733a-config\") pod \"dnsmasq-dns-775897b69-78frg\" (UID: \"d8cb6939-c479-4114-8191-73a18f5f733a\") " pod="openstack/dnsmasq-dns-775897b69-78frg" Dec 09 09:04:22 crc kubenswrapper[4786]: I1209 09:04:22.889335 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89sjq\" (UniqueName: \"kubernetes.io/projected/d8cb6939-c479-4114-8191-73a18f5f733a-kube-api-access-89sjq\") pod \"dnsmasq-dns-775897b69-78frg\" (UID: \"d8cb6939-c479-4114-8191-73a18f5f733a\") " pod="openstack/dnsmasq-dns-775897b69-78frg" Dec 09 09:04:22 crc kubenswrapper[4786]: I1209 09:04:22.889372 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d8cb6939-c479-4114-8191-73a18f5f733a-ovsdbserver-nb\") pod \"dnsmasq-dns-775897b69-78frg\" (UID: \"d8cb6939-c479-4114-8191-73a18f5f733a\") " pod="openstack/dnsmasq-dns-775897b69-78frg" Dec 09 09:04:22 crc kubenswrapper[4786]: I1209 09:04:22.889441 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d8cb6939-c479-4114-8191-73a18f5f733a-ovsdbserver-sb\") pod \"dnsmasq-dns-775897b69-78frg\" (UID: \"d8cb6939-c479-4114-8191-73a18f5f733a\") " pod="openstack/dnsmasq-dns-775897b69-78frg" Dec 09 09:04:22 crc kubenswrapper[4786]: I1209 09:04:22.889477 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8cb6939-c479-4114-8191-73a18f5f733a-dns-svc\") pod \"dnsmasq-dns-775897b69-78frg\" (UID: \"d8cb6939-c479-4114-8191-73a18f5f733a\") " pod="openstack/dnsmasq-dns-775897b69-78frg" Dec 09 09:04:22 crc kubenswrapper[4786]: I1209 09:04:22.991559 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8cb6939-c479-4114-8191-73a18f5f733a-config\") pod \"dnsmasq-dns-775897b69-78frg\" (UID: \"d8cb6939-c479-4114-8191-73a18f5f733a\") " pod="openstack/dnsmasq-dns-775897b69-78frg" Dec 09 09:04:22 crc kubenswrapper[4786]: I1209 09:04:22.991671 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89sjq\" (UniqueName: \"kubernetes.io/projected/d8cb6939-c479-4114-8191-73a18f5f733a-kube-api-access-89sjq\") pod \"dnsmasq-dns-775897b69-78frg\" (UID: \"d8cb6939-c479-4114-8191-73a18f5f733a\") " pod="openstack/dnsmasq-dns-775897b69-78frg" Dec 09 09:04:22 crc kubenswrapper[4786]: I1209 09:04:22.991718 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d8cb6939-c479-4114-8191-73a18f5f733a-ovsdbserver-nb\") pod \"dnsmasq-dns-775897b69-78frg\" (UID: \"d8cb6939-c479-4114-8191-73a18f5f733a\") " pod="openstack/dnsmasq-dns-775897b69-78frg" Dec 09 09:04:22 crc kubenswrapper[4786]: I1209 09:04:22.991786 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d8cb6939-c479-4114-8191-73a18f5f733a-ovsdbserver-sb\") pod \"dnsmasq-dns-775897b69-78frg\" (UID: \"d8cb6939-c479-4114-8191-73a18f5f733a\") " pod="openstack/dnsmasq-dns-775897b69-78frg" Dec 09 09:04:22 crc kubenswrapper[4786]: I1209 09:04:22.991825 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/d8cb6939-c479-4114-8191-73a18f5f733a-dns-svc\") pod \"dnsmasq-dns-775897b69-78frg\" (UID: \"d8cb6939-c479-4114-8191-73a18f5f733a\") " pod="openstack/dnsmasq-dns-775897b69-78frg" Dec 09 09:04:22 crc kubenswrapper[4786]: I1209 09:04:22.992666 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8cb6939-c479-4114-8191-73a18f5f733a-config\") pod \"dnsmasq-dns-775897b69-78frg\" (UID: \"d8cb6939-c479-4114-8191-73a18f5f733a\") " pod="openstack/dnsmasq-dns-775897b69-78frg" Dec 09 09:04:22 crc kubenswrapper[4786]: I1209 09:04:22.992865 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d8cb6939-c479-4114-8191-73a18f5f733a-ovsdbserver-nb\") pod \"dnsmasq-dns-775897b69-78frg\" (UID: \"d8cb6939-c479-4114-8191-73a18f5f733a\") " pod="openstack/dnsmasq-dns-775897b69-78frg" Dec 09 09:04:22 crc kubenswrapper[4786]: I1209 09:04:22.992902 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d8cb6939-c479-4114-8191-73a18f5f733a-ovsdbserver-sb\") pod \"dnsmasq-dns-775897b69-78frg\" (UID: \"d8cb6939-c479-4114-8191-73a18f5f733a\") " pod="openstack/dnsmasq-dns-775897b69-78frg" Dec 09 09:04:22 crc kubenswrapper[4786]: I1209 09:04:22.992915 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8cb6939-c479-4114-8191-73a18f5f733a-dns-svc\") pod \"dnsmasq-dns-775897b69-78frg\" (UID: \"d8cb6939-c479-4114-8191-73a18f5f733a\") " pod="openstack/dnsmasq-dns-775897b69-78frg" Dec 09 09:04:23 crc kubenswrapper[4786]: I1209 09:04:23.016922 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89sjq\" (UniqueName: \"kubernetes.io/projected/d8cb6939-c479-4114-8191-73a18f5f733a-kube-api-access-89sjq\") pod \"dnsmasq-dns-775897b69-78frg\" (UID: 
\"d8cb6939-c479-4114-8191-73a18f5f733a\") " pod="openstack/dnsmasq-dns-775897b69-78frg" Dec 09 09:04:23 crc kubenswrapper[4786]: I1209 09:04:23.175757 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-775897b69-78frg" Dec 09 09:04:23 crc kubenswrapper[4786]: I1209 09:04:23.979991 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Dec 09 09:04:23 crc kubenswrapper[4786]: I1209 09:04:23.990382 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Dec 09 09:04:23 crc kubenswrapper[4786]: I1209 09:04:23.995068 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Dec 09 09:04:23 crc kubenswrapper[4786]: I1209 09:04:23.995324 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Dec 09 09:04:23 crc kubenswrapper[4786]: I1209 09:04:23.995456 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Dec 09 09:04:23 crc kubenswrapper[4786]: I1209 09:04:23.995658 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-6rrvk" Dec 09 09:04:24 crc kubenswrapper[4786]: I1209 09:04:24.015455 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 09 09:04:24 crc kubenswrapper[4786]: I1209 09:04:24.108358 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ed655e06-206a-407f-8651-56e042d74cd1-etc-swift\") pod \"swift-storage-0\" (UID: \"ed655e06-206a-407f-8651-56e042d74cd1\") " pod="openstack/swift-storage-0" Dec 09 09:04:24 crc kubenswrapper[4786]: I1209 09:04:24.108405 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: 
\"kubernetes.io/empty-dir/ed655e06-206a-407f-8651-56e042d74cd1-lock\") pod \"swift-storage-0\" (UID: \"ed655e06-206a-407f-8651-56e042d74cd1\") " pod="openstack/swift-storage-0" Dec 09 09:04:24 crc kubenswrapper[4786]: I1209 09:04:24.108482 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p48p2\" (UniqueName: \"kubernetes.io/projected/ed655e06-206a-407f-8651-56e042d74cd1-kube-api-access-p48p2\") pod \"swift-storage-0\" (UID: \"ed655e06-206a-407f-8651-56e042d74cd1\") " pod="openstack/swift-storage-0" Dec 09 09:04:24 crc kubenswrapper[4786]: I1209 09:04:24.108542 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"ed655e06-206a-407f-8651-56e042d74cd1\") " pod="openstack/swift-storage-0" Dec 09 09:04:24 crc kubenswrapper[4786]: I1209 09:04:24.108597 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ed655e06-206a-407f-8651-56e042d74cd1-cache\") pod \"swift-storage-0\" (UID: \"ed655e06-206a-407f-8651-56e042d74cd1\") " pod="openstack/swift-storage-0" Dec 09 09:04:24 crc kubenswrapper[4786]: I1209 09:04:24.210990 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p48p2\" (UniqueName: \"kubernetes.io/projected/ed655e06-206a-407f-8651-56e042d74cd1-kube-api-access-p48p2\") pod \"swift-storage-0\" (UID: \"ed655e06-206a-407f-8651-56e042d74cd1\") " pod="openstack/swift-storage-0" Dec 09 09:04:24 crc kubenswrapper[4786]: I1209 09:04:24.211063 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"ed655e06-206a-407f-8651-56e042d74cd1\") " 
pod="openstack/swift-storage-0" Dec 09 09:04:24 crc kubenswrapper[4786]: I1209 09:04:24.211133 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ed655e06-206a-407f-8651-56e042d74cd1-cache\") pod \"swift-storage-0\" (UID: \"ed655e06-206a-407f-8651-56e042d74cd1\") " pod="openstack/swift-storage-0" Dec 09 09:04:24 crc kubenswrapper[4786]: I1209 09:04:24.211193 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ed655e06-206a-407f-8651-56e042d74cd1-etc-swift\") pod \"swift-storage-0\" (UID: \"ed655e06-206a-407f-8651-56e042d74cd1\") " pod="openstack/swift-storage-0" Dec 09 09:04:24 crc kubenswrapper[4786]: I1209 09:04:24.211217 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ed655e06-206a-407f-8651-56e042d74cd1-lock\") pod \"swift-storage-0\" (UID: \"ed655e06-206a-407f-8651-56e042d74cd1\") " pod="openstack/swift-storage-0" Dec 09 09:04:24 crc kubenswrapper[4786]: I1209 09:04:24.211808 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/ed655e06-206a-407f-8651-56e042d74cd1-lock\") pod \"swift-storage-0\" (UID: \"ed655e06-206a-407f-8651-56e042d74cd1\") " pod="openstack/swift-storage-0" Dec 09 09:04:24 crc kubenswrapper[4786]: E1209 09:04:24.212785 4786 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 09 09:04:24 crc kubenswrapper[4786]: E1209 09:04:24.212828 4786 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 09 09:04:24 crc kubenswrapper[4786]: I1209 09:04:24.212853 4786 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"ed655e06-206a-407f-8651-56e042d74cd1\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/swift-storage-0" Dec 09 09:04:24 crc kubenswrapper[4786]: E1209 09:04:24.212941 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed655e06-206a-407f-8651-56e042d74cd1-etc-swift podName:ed655e06-206a-407f-8651-56e042d74cd1 nodeName:}" failed. No retries permitted until 2025-12-09 09:04:24.71289942 +0000 UTC m=+1230.596520836 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ed655e06-206a-407f-8651-56e042d74cd1-etc-swift") pod "swift-storage-0" (UID: "ed655e06-206a-407f-8651-56e042d74cd1") : configmap "swift-ring-files" not found Dec 09 09:04:24 crc kubenswrapper[4786]: I1209 09:04:24.212978 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/ed655e06-206a-407f-8651-56e042d74cd1-cache\") pod \"swift-storage-0\" (UID: \"ed655e06-206a-407f-8651-56e042d74cd1\") " pod="openstack/swift-storage-0" Dec 09 09:04:24 crc kubenswrapper[4786]: I1209 09:04:24.231312 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p48p2\" (UniqueName: \"kubernetes.io/projected/ed655e06-206a-407f-8651-56e042d74cd1-kube-api-access-p48p2\") pod \"swift-storage-0\" (UID: \"ed655e06-206a-407f-8651-56e042d74cd1\") " pod="openstack/swift-storage-0" Dec 09 09:04:24 crc kubenswrapper[4786]: I1209 09:04:24.251894 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"ed655e06-206a-407f-8651-56e042d74cd1\") " pod="openstack/swift-storage-0" Dec 09 09:04:24 crc kubenswrapper[4786]: I1209 09:04:24.532543 4786 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/swift-ring-rebalance-ph8nr"] Dec 09 09:04:24 crc kubenswrapper[4786]: I1209 09:04:24.533628 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-ph8nr" Dec 09 09:04:24 crc kubenswrapper[4786]: I1209 09:04:24.535923 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Dec 09 09:04:24 crc kubenswrapper[4786]: I1209 09:04:24.535956 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 09 09:04:24 crc kubenswrapper[4786]: I1209 09:04:24.536335 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Dec 09 09:04:24 crc kubenswrapper[4786]: I1209 09:04:24.549696 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-ph8nr"] Dec 09 09:04:24 crc kubenswrapper[4786]: I1209 09:04:24.724383 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/47aa56da-70ed-4ee4-a83c-35c8116a0ec3-ring-data-devices\") pod \"swift-ring-rebalance-ph8nr\" (UID: \"47aa56da-70ed-4ee4-a83c-35c8116a0ec3\") " pod="openstack/swift-ring-rebalance-ph8nr" Dec 09 09:04:24 crc kubenswrapper[4786]: I1209 09:04:24.724516 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/47aa56da-70ed-4ee4-a83c-35c8116a0ec3-dispersionconf\") pod \"swift-ring-rebalance-ph8nr\" (UID: \"47aa56da-70ed-4ee4-a83c-35c8116a0ec3\") " pod="openstack/swift-ring-rebalance-ph8nr" Dec 09 09:04:24 crc kubenswrapper[4786]: I1209 09:04:24.724542 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5dkd\" (UniqueName: \"kubernetes.io/projected/47aa56da-70ed-4ee4-a83c-35c8116a0ec3-kube-api-access-w5dkd\") pod 
\"swift-ring-rebalance-ph8nr\" (UID: \"47aa56da-70ed-4ee4-a83c-35c8116a0ec3\") " pod="openstack/swift-ring-rebalance-ph8nr" Dec 09 09:04:24 crc kubenswrapper[4786]: I1209 09:04:24.724582 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ed655e06-206a-407f-8651-56e042d74cd1-etc-swift\") pod \"swift-storage-0\" (UID: \"ed655e06-206a-407f-8651-56e042d74cd1\") " pod="openstack/swift-storage-0" Dec 09 09:04:24 crc kubenswrapper[4786]: I1209 09:04:24.724615 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47aa56da-70ed-4ee4-a83c-35c8116a0ec3-combined-ca-bundle\") pod \"swift-ring-rebalance-ph8nr\" (UID: \"47aa56da-70ed-4ee4-a83c-35c8116a0ec3\") " pod="openstack/swift-ring-rebalance-ph8nr" Dec 09 09:04:24 crc kubenswrapper[4786]: I1209 09:04:24.724639 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/47aa56da-70ed-4ee4-a83c-35c8116a0ec3-etc-swift\") pod \"swift-ring-rebalance-ph8nr\" (UID: \"47aa56da-70ed-4ee4-a83c-35c8116a0ec3\") " pod="openstack/swift-ring-rebalance-ph8nr" Dec 09 09:04:24 crc kubenswrapper[4786]: I1209 09:04:24.724677 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/47aa56da-70ed-4ee4-a83c-35c8116a0ec3-swiftconf\") pod \"swift-ring-rebalance-ph8nr\" (UID: \"47aa56da-70ed-4ee4-a83c-35c8116a0ec3\") " pod="openstack/swift-ring-rebalance-ph8nr" Dec 09 09:04:24 crc kubenswrapper[4786]: I1209 09:04:24.724704 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/47aa56da-70ed-4ee4-a83c-35c8116a0ec3-scripts\") pod \"swift-ring-rebalance-ph8nr\" (UID: 
\"47aa56da-70ed-4ee4-a83c-35c8116a0ec3\") " pod="openstack/swift-ring-rebalance-ph8nr" Dec 09 09:04:24 crc kubenswrapper[4786]: E1209 09:04:24.724874 4786 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 09 09:04:24 crc kubenswrapper[4786]: E1209 09:04:24.724889 4786 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 09 09:04:24 crc kubenswrapper[4786]: E1209 09:04:24.724930 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed655e06-206a-407f-8651-56e042d74cd1-etc-swift podName:ed655e06-206a-407f-8651-56e042d74cd1 nodeName:}" failed. No retries permitted until 2025-12-09 09:04:25.724913855 +0000 UTC m=+1231.608535081 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ed655e06-206a-407f-8651-56e042d74cd1-etc-swift") pod "swift-storage-0" (UID: "ed655e06-206a-407f-8651-56e042d74cd1") : configmap "swift-ring-files" not found Dec 09 09:04:24 crc kubenswrapper[4786]: I1209 09:04:24.826633 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47aa56da-70ed-4ee4-a83c-35c8116a0ec3-combined-ca-bundle\") pod \"swift-ring-rebalance-ph8nr\" (UID: \"47aa56da-70ed-4ee4-a83c-35c8116a0ec3\") " pod="openstack/swift-ring-rebalance-ph8nr" Dec 09 09:04:24 crc kubenswrapper[4786]: I1209 09:04:24.826699 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/47aa56da-70ed-4ee4-a83c-35c8116a0ec3-etc-swift\") pod \"swift-ring-rebalance-ph8nr\" (UID: \"47aa56da-70ed-4ee4-a83c-35c8116a0ec3\") " pod="openstack/swift-ring-rebalance-ph8nr" Dec 09 09:04:24 crc kubenswrapper[4786]: I1209 09:04:24.826744 4786 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/47aa56da-70ed-4ee4-a83c-35c8116a0ec3-swiftconf\") pod \"swift-ring-rebalance-ph8nr\" (UID: \"47aa56da-70ed-4ee4-a83c-35c8116a0ec3\") " pod="openstack/swift-ring-rebalance-ph8nr" Dec 09 09:04:24 crc kubenswrapper[4786]: I1209 09:04:24.826774 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/47aa56da-70ed-4ee4-a83c-35c8116a0ec3-scripts\") pod \"swift-ring-rebalance-ph8nr\" (UID: \"47aa56da-70ed-4ee4-a83c-35c8116a0ec3\") " pod="openstack/swift-ring-rebalance-ph8nr" Dec 09 09:04:24 crc kubenswrapper[4786]: I1209 09:04:24.827641 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/47aa56da-70ed-4ee4-a83c-35c8116a0ec3-etc-swift\") pod \"swift-ring-rebalance-ph8nr\" (UID: \"47aa56da-70ed-4ee4-a83c-35c8116a0ec3\") " pod="openstack/swift-ring-rebalance-ph8nr" Dec 09 09:04:24 crc kubenswrapper[4786]: I1209 09:04:24.827668 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/47aa56da-70ed-4ee4-a83c-35c8116a0ec3-ring-data-devices\") pod \"swift-ring-rebalance-ph8nr\" (UID: \"47aa56da-70ed-4ee4-a83c-35c8116a0ec3\") " pod="openstack/swift-ring-rebalance-ph8nr" Dec 09 09:04:24 crc kubenswrapper[4786]: I1209 09:04:24.827934 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/47aa56da-70ed-4ee4-a83c-35c8116a0ec3-scripts\") pod \"swift-ring-rebalance-ph8nr\" (UID: \"47aa56da-70ed-4ee4-a83c-35c8116a0ec3\") " pod="openstack/swift-ring-rebalance-ph8nr" Dec 09 09:04:24 crc kubenswrapper[4786]: I1209 09:04:24.827948 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/47aa56da-70ed-4ee4-a83c-35c8116a0ec3-dispersionconf\") pod 
\"swift-ring-rebalance-ph8nr\" (UID: \"47aa56da-70ed-4ee4-a83c-35c8116a0ec3\") " pod="openstack/swift-ring-rebalance-ph8nr" Dec 09 09:04:24 crc kubenswrapper[4786]: I1209 09:04:24.828021 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5dkd\" (UniqueName: \"kubernetes.io/projected/47aa56da-70ed-4ee4-a83c-35c8116a0ec3-kube-api-access-w5dkd\") pod \"swift-ring-rebalance-ph8nr\" (UID: \"47aa56da-70ed-4ee4-a83c-35c8116a0ec3\") " pod="openstack/swift-ring-rebalance-ph8nr" Dec 09 09:04:24 crc kubenswrapper[4786]: I1209 09:04:24.828115 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/47aa56da-70ed-4ee4-a83c-35c8116a0ec3-ring-data-devices\") pod \"swift-ring-rebalance-ph8nr\" (UID: \"47aa56da-70ed-4ee4-a83c-35c8116a0ec3\") " pod="openstack/swift-ring-rebalance-ph8nr" Dec 09 09:04:24 crc kubenswrapper[4786]: I1209 09:04:24.834292 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47aa56da-70ed-4ee4-a83c-35c8116a0ec3-combined-ca-bundle\") pod \"swift-ring-rebalance-ph8nr\" (UID: \"47aa56da-70ed-4ee4-a83c-35c8116a0ec3\") " pod="openstack/swift-ring-rebalance-ph8nr" Dec 09 09:04:24 crc kubenswrapper[4786]: I1209 09:04:24.835796 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/47aa56da-70ed-4ee4-a83c-35c8116a0ec3-dispersionconf\") pod \"swift-ring-rebalance-ph8nr\" (UID: \"47aa56da-70ed-4ee4-a83c-35c8116a0ec3\") " pod="openstack/swift-ring-rebalance-ph8nr" Dec 09 09:04:24 crc kubenswrapper[4786]: I1209 09:04:24.847794 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/47aa56da-70ed-4ee4-a83c-35c8116a0ec3-swiftconf\") pod \"swift-ring-rebalance-ph8nr\" (UID: \"47aa56da-70ed-4ee4-a83c-35c8116a0ec3\") " 
pod="openstack/swift-ring-rebalance-ph8nr" Dec 09 09:04:24 crc kubenswrapper[4786]: I1209 09:04:24.863256 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5dkd\" (UniqueName: \"kubernetes.io/projected/47aa56da-70ed-4ee4-a83c-35c8116a0ec3-kube-api-access-w5dkd\") pod \"swift-ring-rebalance-ph8nr\" (UID: \"47aa56da-70ed-4ee4-a83c-35c8116a0ec3\") " pod="openstack/swift-ring-rebalance-ph8nr" Dec 09 09:04:24 crc kubenswrapper[4786]: I1209 09:04:24.988492 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 09:04:24 crc kubenswrapper[4786]: I1209 09:04:24.988828 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 09:04:25 crc kubenswrapper[4786]: I1209 09:04:25.150333 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-ph8nr" Dec 09 09:04:25 crc kubenswrapper[4786]: I1209 09:04:25.741844 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ed655e06-206a-407f-8651-56e042d74cd1-etc-swift\") pod \"swift-storage-0\" (UID: \"ed655e06-206a-407f-8651-56e042d74cd1\") " pod="openstack/swift-storage-0" Dec 09 09:04:25 crc kubenswrapper[4786]: E1209 09:04:25.742038 4786 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 09 09:04:25 crc kubenswrapper[4786]: E1209 09:04:25.742069 4786 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 09 09:04:25 crc kubenswrapper[4786]: E1209 09:04:25.742165 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed655e06-206a-407f-8651-56e042d74cd1-etc-swift podName:ed655e06-206a-407f-8651-56e042d74cd1 nodeName:}" failed. No retries permitted until 2025-12-09 09:04:27.742140148 +0000 UTC m=+1233.625761384 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ed655e06-206a-407f-8651-56e042d74cd1-etc-swift") pod "swift-storage-0" (UID: "ed655e06-206a-407f-8651-56e042d74cd1") : configmap "swift-ring-files" not found Dec 09 09:04:26 crc kubenswrapper[4786]: I1209 09:04:26.478016 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-66bd9c5465-lmxcr" podUID="fd5cc458-6470-474a-b93a-0f2e7a193380" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.118:5353: connect: connection refused" Dec 09 09:04:27 crc kubenswrapper[4786]: I1209 09:04:27.804882 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ed655e06-206a-407f-8651-56e042d74cd1-etc-swift\") pod \"swift-storage-0\" (UID: \"ed655e06-206a-407f-8651-56e042d74cd1\") " pod="openstack/swift-storage-0" Dec 09 09:04:27 crc kubenswrapper[4786]: E1209 09:04:27.805483 4786 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 09 09:04:27 crc kubenswrapper[4786]: E1209 09:04:27.806325 4786 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 09 09:04:27 crc kubenswrapper[4786]: E1209 09:04:27.806497 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed655e06-206a-407f-8651-56e042d74cd1-etc-swift podName:ed655e06-206a-407f-8651-56e042d74cd1 nodeName:}" failed. No retries permitted until 2025-12-09 09:04:31.806475252 +0000 UTC m=+1237.690096478 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ed655e06-206a-407f-8651-56e042d74cd1-etc-swift") pod "swift-storage-0" (UID: "ed655e06-206a-407f-8651-56e042d74cd1") : configmap "swift-ring-files" not found Dec 09 09:04:31 crc kubenswrapper[4786]: I1209 09:04:31.478314 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-66bd9c5465-lmxcr" podUID="fd5cc458-6470-474a-b93a-0f2e7a193380" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.118:5353: connect: connection refused" Dec 09 09:04:31 crc kubenswrapper[4786]: I1209 09:04:31.880881 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ed655e06-206a-407f-8651-56e042d74cd1-etc-swift\") pod \"swift-storage-0\" (UID: \"ed655e06-206a-407f-8651-56e042d74cd1\") " pod="openstack/swift-storage-0" Dec 09 09:04:31 crc kubenswrapper[4786]: E1209 09:04:31.881100 4786 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 09 09:04:31 crc kubenswrapper[4786]: E1209 09:04:31.881119 4786 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 09 09:04:31 crc kubenswrapper[4786]: E1209 09:04:31.881168 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed655e06-206a-407f-8651-56e042d74cd1-etc-swift podName:ed655e06-206a-407f-8651-56e042d74cd1 nodeName:}" failed. No retries permitted until 2025-12-09 09:04:39.881153056 +0000 UTC m=+1245.764774282 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ed655e06-206a-407f-8651-56e042d74cd1-etc-swift") pod "swift-storage-0" (UID: "ed655e06-206a-407f-8651-56e042d74cd1") : configmap "swift-ring-files" not found Dec 09 09:04:32 crc kubenswrapper[4786]: E1209 09:04:32.746222 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:17ea20be390a94ab39f5cdd7f0cbc2498046eebcf77fe3dec9aa288d5c2cf46b" Dec 09 09:04:32 crc kubenswrapper[4786]: E1209 09:04:32.747447 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus,Image:registry.redhat.io/cluster-observability-operator/prometheus-rhel9@sha256:17ea20be390a94ab39f5cdd7f0cbc2498046eebcf77fe3dec9aa288d5c2cf46b,Command:[],Args:[--config.file=/etc/prometheus/config_out/prometheus.env.yaml --web.enable-lifecycle --web.enable-remote-write-receiver --web.route-prefix=/ --storage.tsdb.retention.time=24h --storage.tsdb.path=/prometheus --web.config.file=/etc/prometheus/web_config/web-config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:web,HostPort:0,ContainerPort:9090,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-out,ReadOnly:true,MountPath:/etc/prometheus/config_out,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tls-assets,ReadOnly:true,MountPath:/etc/prometheus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-db,ReadOnly:false,MountPath:/prometheus,SubPath:prometheus-db,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-metric-storage-rulefiles-0,ReadOnly:false,MountPath:/etc/prometheus/rules/prometheus-metric-storage-rulefiles-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:web-config,ReadOnly:true,MountPath:/etc/prometheus/web_config/web-config.yaml,SubPath:web-config.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p8vks,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/healthy,Port:{1 0 web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/ready,Port:{1 0 
web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/-/ready,Port:{1 0 web},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:15,SuccessThreshold:1,FailureThreshold:60,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-metric-storage-0_openstack(6c7ace95-341e-4733-af87-8c256ae0d9e6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 09 09:04:33 crc kubenswrapper[4786]: I1209 09:04:33.154259 4786 generic.go:334] "Generic (PLEG): container finished" podID="fd5cc458-6470-474a-b93a-0f2e7a193380" containerID="e60e06e578767a9c248401d2dfe33b99150ef19d7f3797287dd75a27e0ce3410" exitCode=0 Dec 09 09:04:33 crc kubenswrapper[4786]: I1209 09:04:33.154311 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66bd9c5465-lmxcr" event={"ID":"fd5cc458-6470-474a-b93a-0f2e7a193380","Type":"ContainerDied","Data":"e60e06e578767a9c248401d2dfe33b99150ef19d7f3797287dd75a27e0ce3410"} Dec 09 09:04:33 crc kubenswrapper[4786]: I1209 09:04:33.243627 4786 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-775897b69-78frg"] Dec 09 09:04:33 crc kubenswrapper[4786]: I1209 09:04:33.250360 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-ph8nr"] Dec 09 09:04:33 crc kubenswrapper[4786]: W1209 09:04:33.312704 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8cb6939_c479_4114_8191_73a18f5f733a.slice/crio-655a668f0c6f33cc80dc67dabd87896abe456859cce7ecdf883b2a65c6dfb28a WatchSource:0}: Error finding container 655a668f0c6f33cc80dc67dabd87896abe456859cce7ecdf883b2a65c6dfb28a: Status 404 returned error can't find the container with id 655a668f0c6f33cc80dc67dabd87896abe456859cce7ecdf883b2a65c6dfb28a Dec 09 09:04:33 crc kubenswrapper[4786]: W1209 09:04:33.314879 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47aa56da_70ed_4ee4_a83c_35c8116a0ec3.slice/crio-1fae312e622d2fa43190db601a8968abe9a4439a6de070575182ad5262bdd248 WatchSource:0}: Error finding container 1fae312e622d2fa43190db601a8968abe9a4439a6de070575182ad5262bdd248: Status 404 returned error can't find the container with id 1fae312e622d2fa43190db601a8968abe9a4439a6de070575182ad5262bdd248 Dec 09 09:04:33 crc kubenswrapper[4786]: I1209 09:04:33.423602 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66bd9c5465-lmxcr" Dec 09 09:04:33 crc kubenswrapper[4786]: I1209 09:04:33.518289 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd5cc458-6470-474a-b93a-0f2e7a193380-dns-svc\") pod \"fd5cc458-6470-474a-b93a-0f2e7a193380\" (UID: \"fd5cc458-6470-474a-b93a-0f2e7a193380\") " Dec 09 09:04:33 crc kubenswrapper[4786]: I1209 09:04:33.519008 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd5cc458-6470-474a-b93a-0f2e7a193380-config\") pod \"fd5cc458-6470-474a-b93a-0f2e7a193380\" (UID: \"fd5cc458-6470-474a-b93a-0f2e7a193380\") " Dec 09 09:04:33 crc kubenswrapper[4786]: I1209 09:04:33.519125 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fd5cc458-6470-474a-b93a-0f2e7a193380-ovsdbserver-sb\") pod \"fd5cc458-6470-474a-b93a-0f2e7a193380\" (UID: \"fd5cc458-6470-474a-b93a-0f2e7a193380\") " Dec 09 09:04:33 crc kubenswrapper[4786]: I1209 09:04:33.519245 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpcqw\" (UniqueName: \"kubernetes.io/projected/fd5cc458-6470-474a-b93a-0f2e7a193380-kube-api-access-fpcqw\") pod \"fd5cc458-6470-474a-b93a-0f2e7a193380\" (UID: \"fd5cc458-6470-474a-b93a-0f2e7a193380\") " Dec 09 09:04:33 crc kubenswrapper[4786]: I1209 09:04:33.548912 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd5cc458-6470-474a-b93a-0f2e7a193380-kube-api-access-fpcqw" (OuterVolumeSpecName: "kube-api-access-fpcqw") pod "fd5cc458-6470-474a-b93a-0f2e7a193380" (UID: "fd5cc458-6470-474a-b93a-0f2e7a193380"). InnerVolumeSpecName "kube-api-access-fpcqw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:04:33 crc kubenswrapper[4786]: I1209 09:04:33.621442 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpcqw\" (UniqueName: \"kubernetes.io/projected/fd5cc458-6470-474a-b93a-0f2e7a193380-kube-api-access-fpcqw\") on node \"crc\" DevicePath \"\"" Dec 09 09:04:33 crc kubenswrapper[4786]: I1209 09:04:33.665341 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd5cc458-6470-474a-b93a-0f2e7a193380-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fd5cc458-6470-474a-b93a-0f2e7a193380" (UID: "fd5cc458-6470-474a-b93a-0f2e7a193380"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:04:33 crc kubenswrapper[4786]: I1209 09:04:33.732261 4786 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fd5cc458-6470-474a-b93a-0f2e7a193380-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 09:04:34 crc kubenswrapper[4786]: I1209 09:04:34.087647 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd5cc458-6470-474a-b93a-0f2e7a193380-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fd5cc458-6470-474a-b93a-0f2e7a193380" (UID: "fd5cc458-6470-474a-b93a-0f2e7a193380"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:04:34 crc kubenswrapper[4786]: I1209 09:04:34.088628 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd5cc458-6470-474a-b93a-0f2e7a193380-config" (OuterVolumeSpecName: "config") pod "fd5cc458-6470-474a-b93a-0f2e7a193380" (UID: "fd5cc458-6470-474a-b93a-0f2e7a193380"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:04:34 crc kubenswrapper[4786]: I1209 09:04:34.140000 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd5cc458-6470-474a-b93a-0f2e7a193380-config\") on node \"crc\" DevicePath \"\"" Dec 09 09:04:34 crc kubenswrapper[4786]: I1209 09:04:34.140033 4786 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fd5cc458-6470-474a-b93a-0f2e7a193380-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 09:04:34 crc kubenswrapper[4786]: I1209 09:04:34.186205 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66bd9c5465-lmxcr" Dec 09 09:04:34 crc kubenswrapper[4786]: I1209 09:04:34.186192 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66bd9c5465-lmxcr" event={"ID":"fd5cc458-6470-474a-b93a-0f2e7a193380","Type":"ContainerDied","Data":"c7ea330615ccb34e24a6268b2ee0a782e054f9de20feaadced2f252e87bc54a3"} Dec 09 09:04:34 crc kubenswrapper[4786]: I1209 09:04:34.186284 4786 scope.go:117] "RemoveContainer" containerID="e60e06e578767a9c248401d2dfe33b99150ef19d7f3797287dd75a27e0ce3410" Dec 09 09:04:34 crc kubenswrapper[4786]: I1209 09:04:34.207750 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-775897b69-78frg" event={"ID":"d8cb6939-c479-4114-8191-73a18f5f733a","Type":"ContainerStarted","Data":"655a668f0c6f33cc80dc67dabd87896abe456859cce7ecdf883b2a65c6dfb28a"} Dec 09 09:04:34 crc kubenswrapper[4786]: I1209 09:04:34.215758 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-ph8nr" event={"ID":"47aa56da-70ed-4ee4-a83c-35c8116a0ec3","Type":"ContainerStarted","Data":"1fae312e622d2fa43190db601a8968abe9a4439a6de070575182ad5262bdd248"} Dec 09 09:04:34 crc kubenswrapper[4786]: I1209 09:04:34.274176 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-66bd9c5465-lmxcr"] Dec 09 09:04:34 crc kubenswrapper[4786]: I1209 09:04:34.281781 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-66bd9c5465-lmxcr"] Dec 09 09:04:34 crc kubenswrapper[4786]: I1209 09:04:34.972943 4786 scope.go:117] "RemoveContainer" containerID="e5b4749f98a29b2d608c976b30d7541bf24bed4b9e1b050646d5d8fe56686156" Dec 09 09:04:35 crc kubenswrapper[4786]: I1209 09:04:35.210781 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd5cc458-6470-474a-b93a-0f2e7a193380" path="/var/lib/kubelet/pods/fd5cc458-6470-474a-b93a-0f2e7a193380/volumes" Dec 09 09:04:35 crc kubenswrapper[4786]: I1209 09:04:35.243724 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"8066cc20-76cd-4a47-a662-fb77cd5cbe3b","Type":"ContainerStarted","Data":"1515f1f3c49c0b4e6c63549508bdc1927e5cec459918c168e264eb81781454d2"} Dec 09 09:04:35 crc kubenswrapper[4786]: I1209 09:04:35.247947 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"dd1f4f6b-6ed9-46bc-bf88-bd17fcc6069f","Type":"ContainerStarted","Data":"8b915e9a5b73fd29ba68488529188183fd5c6c1f8dd37d261b0264319f55f4e5"} Dec 09 09:04:35 crc kubenswrapper[4786]: I1209 09:04:35.271014 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"22607f78-204a-4fb8-82d6-53d3f878a984","Type":"ContainerStarted","Data":"e367c4470435f062f9220bd83bbf97be508c50fc872341875618a322a82e7af3"} Dec 09 09:04:35 crc kubenswrapper[4786]: I1209 09:04:35.272265 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 09 09:04:35 crc kubenswrapper[4786]: I1209 09:04:35.275645 4786 generic.go:334] "Generic (PLEG): container finished" podID="d8cb6939-c479-4114-8191-73a18f5f733a" containerID="327e0cc65b68f51849a67437515017e4928964338e1b7d8a12c90a865d56d73b" exitCode=0 Dec 09 
09:04:35 crc kubenswrapper[4786]: I1209 09:04:35.275728 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-775897b69-78frg" event={"ID":"d8cb6939-c479-4114-8191-73a18f5f733a","Type":"ContainerDied","Data":"327e0cc65b68f51849a67437515017e4928964338e1b7d8a12c90a865d56d73b"} Dec 09 09:04:35 crc kubenswrapper[4786]: I1209 09:04:35.278604 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"673b3525-c496-4268-b9f9-c37f5175efdc","Type":"ContainerStarted","Data":"afd339a4450c37251636ced237a0d35c364490deedc2595d38c46fd850d347a3"} Dec 09 09:04:35 crc kubenswrapper[4786]: I1209 09:04:35.280217 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=28.592592403 podStartE2EDuration="1m37.280196987s" podCreationTimestamp="2025-12-09 09:02:58 +0000 UTC" firstStartedPulling="2025-12-09 09:03:01.216735189 +0000 UTC m=+1147.100356425" lastFinishedPulling="2025-12-09 09:04:09.904339763 +0000 UTC m=+1215.787961009" observedRunningTime="2025-12-09 09:04:35.272482087 +0000 UTC m=+1241.156103323" watchObservedRunningTime="2025-12-09 09:04:35.280196987 +0000 UTC m=+1241.163818213" Dec 09 09:04:35 crc kubenswrapper[4786]: I1209 09:04:35.294342 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=4.337210865 podStartE2EDuration="1m33.294313426s" podCreationTimestamp="2025-12-09 09:03:02 +0000 UTC" firstStartedPulling="2025-12-09 09:03:04.40787521 +0000 UTC m=+1150.291496436" lastFinishedPulling="2025-12-09 09:04:33.364977771 +0000 UTC m=+1239.248598997" observedRunningTime="2025-12-09 09:04:35.289542089 +0000 UTC m=+1241.173163335" watchObservedRunningTime="2025-12-09 09:04:35.294313426 +0000 UTC m=+1241.177934652" Dec 09 09:04:35 crc kubenswrapper[4786]: I1209 09:04:35.355038 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/ovsdbserver-sb-0" podStartSLOduration=5.580442717 podStartE2EDuration="1m28.355019499s" podCreationTimestamp="2025-12-09 09:03:07 +0000 UTC" firstStartedPulling="2025-12-09 09:03:10.60821343 +0000 UTC m=+1156.491834656" lastFinishedPulling="2025-12-09 09:04:33.382790212 +0000 UTC m=+1239.266411438" observedRunningTime="2025-12-09 09:04:35.351039 +0000 UTC m=+1241.234660226" watchObservedRunningTime="2025-12-09 09:04:35.355019499 +0000 UTC m=+1241.238640725" Dec 09 09:04:35 crc kubenswrapper[4786]: I1209 09:04:35.400784 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371938.454023 podStartE2EDuration="1m38.400751879s" podCreationTimestamp="2025-12-09 09:02:57 +0000 UTC" firstStartedPulling="2025-12-09 09:03:00.359614749 +0000 UTC m=+1146.243235975" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:04:35.384702773 +0000 UTC m=+1241.268323999" watchObservedRunningTime="2025-12-09 09:04:35.400751879 +0000 UTC m=+1241.284373115" Dec 09 09:04:35 crc kubenswrapper[4786]: I1209 09:04:35.576370 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Dec 09 09:04:36 crc kubenswrapper[4786]: I1209 09:04:36.290761 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6c7ace95-341e-4733-af87-8c256ae0d9e6","Type":"ContainerStarted","Data":"28d67d7a43536dae7361dcd0c49349bdda9667f27b048b12ba5b6e34640537de"} Dec 09 09:04:37 crc kubenswrapper[4786]: I1209 09:04:37.303981 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-775897b69-78frg" event={"ID":"d8cb6939-c479-4114-8191-73a18f5f733a","Type":"ContainerStarted","Data":"e26fbebb10a95541bdbab40554e40b8d675a5da9c0225ad41a987510a846080d"} Dec 09 09:04:37 crc kubenswrapper[4786]: I1209 09:04:37.304523 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/dnsmasq-dns-775897b69-78frg" Dec 09 09:04:37 crc kubenswrapper[4786]: I1209 09:04:37.306703 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-ph8nr" event={"ID":"47aa56da-70ed-4ee4-a83c-35c8116a0ec3","Type":"ContainerStarted","Data":"5db3aa08a00c86907de91eb37c09a4708252194d5d243a74c856a5e5649c2ed2"} Dec 09 09:04:37 crc kubenswrapper[4786]: I1209 09:04:37.334790 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-775897b69-78frg" podStartSLOduration=15.334769181 podStartE2EDuration="15.334769181s" podCreationTimestamp="2025-12-09 09:04:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:04:37.324849985 +0000 UTC m=+1243.208471221" watchObservedRunningTime="2025-12-09 09:04:37.334769181 +0000 UTC m=+1243.218390407" Dec 09 09:04:37 crc kubenswrapper[4786]: I1209 09:04:37.352763 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-ph8nr" podStartSLOduration=10.228427929 podStartE2EDuration="13.352743895s" podCreationTimestamp="2025-12-09 09:04:24 +0000 UTC" firstStartedPulling="2025-12-09 09:04:33.357771203 +0000 UTC m=+1239.241392429" lastFinishedPulling="2025-12-09 09:04:36.482087169 +0000 UTC m=+1242.365708395" observedRunningTime="2025-12-09 09:04:37.342975943 +0000 UTC m=+1243.226597199" watchObservedRunningTime="2025-12-09 09:04:37.352743895 +0000 UTC m=+1243.236365121" Dec 09 09:04:38 crc kubenswrapper[4786]: I1209 09:04:37.987026 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-lqv95" Dec 09 09:04:38 crc kubenswrapper[4786]: I1209 09:04:38.031133 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-lqv95" Dec 09 09:04:38 crc kubenswrapper[4786]: I1209 09:04:38.316041 4786 
generic.go:334] "Generic (PLEG): container finished" podID="b0a91d0e-2d71-4fdc-8d68-953a12dc7f59" containerID="0183621f323d271a1e61e5f6fcb16986cb78a47043bbcb9cb6b8ac0a03aa11b0" exitCode=0 Dec 09 09:04:38 crc kubenswrapper[4786]: I1209 09:04:38.316086 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59","Type":"ContainerDied","Data":"0183621f323d271a1e61e5f6fcb16986cb78a47043bbcb9cb6b8ac0a03aa11b0"} Dec 09 09:04:38 crc kubenswrapper[4786]: I1209 09:04:38.576290 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Dec 09 09:04:38 crc kubenswrapper[4786]: I1209 09:04:38.611749 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Dec 09 09:04:38 crc kubenswrapper[4786]: I1209 09:04:38.823036 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Dec 09 09:04:38 crc kubenswrapper[4786]: I1209 09:04:38.823114 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Dec 09 09:04:39 crc kubenswrapper[4786]: I1209 09:04:39.427906 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Dec 09 09:04:39 crc kubenswrapper[4786]: I1209 09:04:39.612637 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Dec 09 09:04:39 crc kubenswrapper[4786]: E1209 09:04:39.613005 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd5cc458-6470-474a-b93a-0f2e7a193380" containerName="dnsmasq-dns" Dec 09 09:04:39 crc kubenswrapper[4786]: I1209 09:04:39.613025 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd5cc458-6470-474a-b93a-0f2e7a193380" containerName="dnsmasq-dns" Dec 09 09:04:39 crc kubenswrapper[4786]: E1209 09:04:39.613038 4786 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="fd5cc458-6470-474a-b93a-0f2e7a193380" containerName="init" Dec 09 09:04:39 crc kubenswrapper[4786]: I1209 09:04:39.613044 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd5cc458-6470-474a-b93a-0f2e7a193380" containerName="init" Dec 09 09:04:39 crc kubenswrapper[4786]: I1209 09:04:39.613231 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd5cc458-6470-474a-b93a-0f2e7a193380" containerName="dnsmasq-dns" Dec 09 09:04:39 crc kubenswrapper[4786]: I1209 09:04:39.614162 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Dec 09 09:04:39 crc kubenswrapper[4786]: I1209 09:04:39.616592 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Dec 09 09:04:39 crc kubenswrapper[4786]: I1209 09:04:39.616824 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Dec 09 09:04:39 crc kubenswrapper[4786]: I1209 09:04:39.617030 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-nxtc4" Dec 09 09:04:39 crc kubenswrapper[4786]: I1209 09:04:39.618066 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Dec 09 09:04:39 crc kubenswrapper[4786]: I1209 09:04:39.640404 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 09 09:04:39 crc kubenswrapper[4786]: I1209 09:04:39.664297 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-vv7k4-config-b9tnx"] Dec 09 09:04:39 crc kubenswrapper[4786]: I1209 09:04:39.697360 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-vv7k4-config-b9tnx" Dec 09 09:04:39 crc kubenswrapper[4786]: I1209 09:04:39.699887 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-vv7k4-config-b9tnx"] Dec 09 09:04:39 crc kubenswrapper[4786]: I1209 09:04:39.701209 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Dec 09 09:04:39 crc kubenswrapper[4786]: I1209 09:04:39.716793 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f79a7868-1d59-4d6f-ac51-528c634a9b4f-scripts\") pod \"ovn-northd-0\" (UID: \"f79a7868-1d59-4d6f-ac51-528c634a9b4f\") " pod="openstack/ovn-northd-0" Dec 09 09:04:39 crc kubenswrapper[4786]: I1209 09:04:39.716887 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnpbw\" (UniqueName: \"kubernetes.io/projected/f79a7868-1d59-4d6f-ac51-528c634a9b4f-kube-api-access-gnpbw\") pod \"ovn-northd-0\" (UID: \"f79a7868-1d59-4d6f-ac51-528c634a9b4f\") " pod="openstack/ovn-northd-0" Dec 09 09:04:39 crc kubenswrapper[4786]: I1209 09:04:39.716921 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/f79a7868-1d59-4d6f-ac51-528c634a9b4f-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"f79a7868-1d59-4d6f-ac51-528c634a9b4f\") " pod="openstack/ovn-northd-0" Dec 09 09:04:39 crc kubenswrapper[4786]: I1209 09:04:39.717034 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f79a7868-1d59-4d6f-ac51-528c634a9b4f-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f79a7868-1d59-4d6f-ac51-528c634a9b4f\") " pod="openstack/ovn-northd-0" Dec 09 09:04:39 crc kubenswrapper[4786]: I1209 09:04:39.717105 4786 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f79a7868-1d59-4d6f-ac51-528c634a9b4f-config\") pod \"ovn-northd-0\" (UID: \"f79a7868-1d59-4d6f-ac51-528c634a9b4f\") " pod="openstack/ovn-northd-0" Dec 09 09:04:39 crc kubenswrapper[4786]: I1209 09:04:39.717163 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f79a7868-1d59-4d6f-ac51-528c634a9b4f-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f79a7868-1d59-4d6f-ac51-528c634a9b4f\") " pod="openstack/ovn-northd-0" Dec 09 09:04:39 crc kubenswrapper[4786]: I1209 09:04:39.717207 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f79a7868-1d59-4d6f-ac51-528c634a9b4f-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"f79a7868-1d59-4d6f-ac51-528c634a9b4f\") " pod="openstack/ovn-northd-0" Dec 09 09:04:39 crc kubenswrapper[4786]: I1209 09:04:39.818259 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/04fcfde6-82b8-4e94-9eb2-1e730a774382-var-run-ovn\") pod \"ovn-controller-vv7k4-config-b9tnx\" (UID: \"04fcfde6-82b8-4e94-9eb2-1e730a774382\") " pod="openstack/ovn-controller-vv7k4-config-b9tnx" Dec 09 09:04:39 crc kubenswrapper[4786]: I1209 09:04:39.818325 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s76b\" (UniqueName: \"kubernetes.io/projected/04fcfde6-82b8-4e94-9eb2-1e730a774382-kube-api-access-5s76b\") pod \"ovn-controller-vv7k4-config-b9tnx\" (UID: \"04fcfde6-82b8-4e94-9eb2-1e730a774382\") " pod="openstack/ovn-controller-vv7k4-config-b9tnx" Dec 09 09:04:39 crc kubenswrapper[4786]: I1209 09:04:39.818366 4786 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f79a7868-1d59-4d6f-ac51-528c634a9b4f-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f79a7868-1d59-4d6f-ac51-528c634a9b4f\") " pod="openstack/ovn-northd-0" Dec 09 09:04:39 crc kubenswrapper[4786]: I1209 09:04:39.818408 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/04fcfde6-82b8-4e94-9eb2-1e730a774382-var-log-ovn\") pod \"ovn-controller-vv7k4-config-b9tnx\" (UID: \"04fcfde6-82b8-4e94-9eb2-1e730a774382\") " pod="openstack/ovn-controller-vv7k4-config-b9tnx" Dec 09 09:04:39 crc kubenswrapper[4786]: I1209 09:04:39.818479 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f79a7868-1d59-4d6f-ac51-528c634a9b4f-config\") pod \"ovn-northd-0\" (UID: \"f79a7868-1d59-4d6f-ac51-528c634a9b4f\") " pod="openstack/ovn-northd-0" Dec 09 09:04:39 crc kubenswrapper[4786]: I1209 09:04:39.818500 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/04fcfde6-82b8-4e94-9eb2-1e730a774382-var-run\") pod \"ovn-controller-vv7k4-config-b9tnx\" (UID: \"04fcfde6-82b8-4e94-9eb2-1e730a774382\") " pod="openstack/ovn-controller-vv7k4-config-b9tnx" Dec 09 09:04:39 crc kubenswrapper[4786]: I1209 09:04:39.818555 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/04fcfde6-82b8-4e94-9eb2-1e730a774382-additional-scripts\") pod \"ovn-controller-vv7k4-config-b9tnx\" (UID: \"04fcfde6-82b8-4e94-9eb2-1e730a774382\") " pod="openstack/ovn-controller-vv7k4-config-b9tnx" Dec 09 09:04:39 crc kubenswrapper[4786]: I1209 09:04:39.818584 4786 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f79a7868-1d59-4d6f-ac51-528c634a9b4f-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f79a7868-1d59-4d6f-ac51-528c634a9b4f\") " pod="openstack/ovn-northd-0" Dec 09 09:04:39 crc kubenswrapper[4786]: I1209 09:04:39.818632 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f79a7868-1d59-4d6f-ac51-528c634a9b4f-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"f79a7868-1d59-4d6f-ac51-528c634a9b4f\") " pod="openstack/ovn-northd-0" Dec 09 09:04:39 crc kubenswrapper[4786]: I1209 09:04:39.818661 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f79a7868-1d59-4d6f-ac51-528c634a9b4f-scripts\") pod \"ovn-northd-0\" (UID: \"f79a7868-1d59-4d6f-ac51-528c634a9b4f\") " pod="openstack/ovn-northd-0" Dec 09 09:04:39 crc kubenswrapper[4786]: I1209 09:04:39.818699 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/04fcfde6-82b8-4e94-9eb2-1e730a774382-scripts\") pod \"ovn-controller-vv7k4-config-b9tnx\" (UID: \"04fcfde6-82b8-4e94-9eb2-1e730a774382\") " pod="openstack/ovn-controller-vv7k4-config-b9tnx" Dec 09 09:04:39 crc kubenswrapper[4786]: I1209 09:04:39.818728 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnpbw\" (UniqueName: \"kubernetes.io/projected/f79a7868-1d59-4d6f-ac51-528c634a9b4f-kube-api-access-gnpbw\") pod \"ovn-northd-0\" (UID: \"f79a7868-1d59-4d6f-ac51-528c634a9b4f\") " pod="openstack/ovn-northd-0" Dec 09 09:04:39 crc kubenswrapper[4786]: I1209 09:04:39.818759 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/f79a7868-1d59-4d6f-ac51-528c634a9b4f-ovn-northd-tls-certs\") pod 
\"ovn-northd-0\" (UID: \"f79a7868-1d59-4d6f-ac51-528c634a9b4f\") " pod="openstack/ovn-northd-0" Dec 09 09:04:39 crc kubenswrapper[4786]: I1209 09:04:39.818827 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/f79a7868-1d59-4d6f-ac51-528c634a9b4f-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"f79a7868-1d59-4d6f-ac51-528c634a9b4f\") " pod="openstack/ovn-northd-0" Dec 09 09:04:39 crc kubenswrapper[4786]: I1209 09:04:39.820223 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f79a7868-1d59-4d6f-ac51-528c634a9b4f-config\") pod \"ovn-northd-0\" (UID: \"f79a7868-1d59-4d6f-ac51-528c634a9b4f\") " pod="openstack/ovn-northd-0" Dec 09 09:04:39 crc kubenswrapper[4786]: I1209 09:04:39.820344 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f79a7868-1d59-4d6f-ac51-528c634a9b4f-scripts\") pod \"ovn-northd-0\" (UID: \"f79a7868-1d59-4d6f-ac51-528c634a9b4f\") " pod="openstack/ovn-northd-0" Dec 09 09:04:39 crc kubenswrapper[4786]: I1209 09:04:39.823262 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/f79a7868-1d59-4d6f-ac51-528c634a9b4f-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"f79a7868-1d59-4d6f-ac51-528c634a9b4f\") " pod="openstack/ovn-northd-0" Dec 09 09:04:39 crc kubenswrapper[4786]: I1209 09:04:39.823848 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f79a7868-1d59-4d6f-ac51-528c634a9b4f-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"f79a7868-1d59-4d6f-ac51-528c634a9b4f\") " pod="openstack/ovn-northd-0" Dec 09 09:04:39 crc kubenswrapper[4786]: I1209 09:04:39.824702 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/f79a7868-1d59-4d6f-ac51-528c634a9b4f-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"f79a7868-1d59-4d6f-ac51-528c634a9b4f\") " pod="openstack/ovn-northd-0" Dec 09 09:04:39 crc kubenswrapper[4786]: I1209 09:04:39.840804 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnpbw\" (UniqueName: \"kubernetes.io/projected/f79a7868-1d59-4d6f-ac51-528c634a9b4f-kube-api-access-gnpbw\") pod \"ovn-northd-0\" (UID: \"f79a7868-1d59-4d6f-ac51-528c634a9b4f\") " pod="openstack/ovn-northd-0" Dec 09 09:04:39 crc kubenswrapper[4786]: I1209 09:04:39.920962 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/04fcfde6-82b8-4e94-9eb2-1e730a774382-additional-scripts\") pod \"ovn-controller-vv7k4-config-b9tnx\" (UID: \"04fcfde6-82b8-4e94-9eb2-1e730a774382\") " pod="openstack/ovn-controller-vv7k4-config-b9tnx" Dec 09 09:04:39 crc kubenswrapper[4786]: I1209 09:04:39.921069 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/04fcfde6-82b8-4e94-9eb2-1e730a774382-scripts\") pod \"ovn-controller-vv7k4-config-b9tnx\" (UID: \"04fcfde6-82b8-4e94-9eb2-1e730a774382\") " pod="openstack/ovn-controller-vv7k4-config-b9tnx" Dec 09 09:04:39 crc kubenswrapper[4786]: I1209 09:04:39.921120 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/04fcfde6-82b8-4e94-9eb2-1e730a774382-var-run-ovn\") pod \"ovn-controller-vv7k4-config-b9tnx\" (UID: \"04fcfde6-82b8-4e94-9eb2-1e730a774382\") " pod="openstack/ovn-controller-vv7k4-config-b9tnx" Dec 09 09:04:39 crc kubenswrapper[4786]: I1209 09:04:39.921144 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s76b\" (UniqueName: 
\"kubernetes.io/projected/04fcfde6-82b8-4e94-9eb2-1e730a774382-kube-api-access-5s76b\") pod \"ovn-controller-vv7k4-config-b9tnx\" (UID: \"04fcfde6-82b8-4e94-9eb2-1e730a774382\") " pod="openstack/ovn-controller-vv7k4-config-b9tnx" Dec 09 09:04:39 crc kubenswrapper[4786]: I1209 09:04:39.921182 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ed655e06-206a-407f-8651-56e042d74cd1-etc-swift\") pod \"swift-storage-0\" (UID: \"ed655e06-206a-407f-8651-56e042d74cd1\") " pod="openstack/swift-storage-0" Dec 09 09:04:39 crc kubenswrapper[4786]: I1209 09:04:39.921207 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/04fcfde6-82b8-4e94-9eb2-1e730a774382-var-log-ovn\") pod \"ovn-controller-vv7k4-config-b9tnx\" (UID: \"04fcfde6-82b8-4e94-9eb2-1e730a774382\") " pod="openstack/ovn-controller-vv7k4-config-b9tnx" Dec 09 09:04:39 crc kubenswrapper[4786]: I1209 09:04:39.921246 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/04fcfde6-82b8-4e94-9eb2-1e730a774382-var-run\") pod \"ovn-controller-vv7k4-config-b9tnx\" (UID: \"04fcfde6-82b8-4e94-9eb2-1e730a774382\") " pod="openstack/ovn-controller-vv7k4-config-b9tnx" Dec 09 09:04:39 crc kubenswrapper[4786]: I1209 09:04:39.921453 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/04fcfde6-82b8-4e94-9eb2-1e730a774382-var-run-ovn\") pod \"ovn-controller-vv7k4-config-b9tnx\" (UID: \"04fcfde6-82b8-4e94-9eb2-1e730a774382\") " pod="openstack/ovn-controller-vv7k4-config-b9tnx" Dec 09 09:04:39 crc kubenswrapper[4786]: I1209 09:04:39.921456 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/04fcfde6-82b8-4e94-9eb2-1e730a774382-var-run\") pod 
\"ovn-controller-vv7k4-config-b9tnx\" (UID: \"04fcfde6-82b8-4e94-9eb2-1e730a774382\") " pod="openstack/ovn-controller-vv7k4-config-b9tnx" Dec 09 09:04:39 crc kubenswrapper[4786]: I1209 09:04:39.921619 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/04fcfde6-82b8-4e94-9eb2-1e730a774382-var-log-ovn\") pod \"ovn-controller-vv7k4-config-b9tnx\" (UID: \"04fcfde6-82b8-4e94-9eb2-1e730a774382\") " pod="openstack/ovn-controller-vv7k4-config-b9tnx" Dec 09 09:04:39 crc kubenswrapper[4786]: E1209 09:04:39.921824 4786 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 09 09:04:39 crc kubenswrapper[4786]: E1209 09:04:39.921916 4786 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 09 09:04:39 crc kubenswrapper[4786]: E1209 09:04:39.922056 4786 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ed655e06-206a-407f-8651-56e042d74cd1-etc-swift podName:ed655e06-206a-407f-8651-56e042d74cd1 nodeName:}" failed. No retries permitted until 2025-12-09 09:04:55.922031551 +0000 UTC m=+1261.805652777 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/ed655e06-206a-407f-8651-56e042d74cd1-etc-swift") pod "swift-storage-0" (UID: "ed655e06-206a-407f-8651-56e042d74cd1") : configmap "swift-ring-files" not found Dec 09 09:04:39 crc kubenswrapper[4786]: I1209 09:04:39.922123 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/04fcfde6-82b8-4e94-9eb2-1e730a774382-additional-scripts\") pod \"ovn-controller-vv7k4-config-b9tnx\" (UID: \"04fcfde6-82b8-4e94-9eb2-1e730a774382\") " pod="openstack/ovn-controller-vv7k4-config-b9tnx" Dec 09 09:04:39 crc kubenswrapper[4786]: I1209 09:04:39.924073 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/04fcfde6-82b8-4e94-9eb2-1e730a774382-scripts\") pod \"ovn-controller-vv7k4-config-b9tnx\" (UID: \"04fcfde6-82b8-4e94-9eb2-1e730a774382\") " pod="openstack/ovn-controller-vv7k4-config-b9tnx" Dec 09 09:04:39 crc kubenswrapper[4786]: I1209 09:04:39.935611 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Dec 09 09:04:39 crc kubenswrapper[4786]: I1209 09:04:39.935717 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Dec 09 09:04:39 crc kubenswrapper[4786]: I1209 09:04:39.955390 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Dec 09 09:04:39 crc kubenswrapper[4786]: I1209 09:04:39.962969 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s76b\" (UniqueName: \"kubernetes.io/projected/04fcfde6-82b8-4e94-9eb2-1e730a774382-kube-api-access-5s76b\") pod \"ovn-controller-vv7k4-config-b9tnx\" (UID: \"04fcfde6-82b8-4e94-9eb2-1e730a774382\") " pod="openstack/ovn-controller-vv7k4-config-b9tnx" Dec 09 09:04:40 crc kubenswrapper[4786]: I1209 09:04:40.029469 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-vv7k4-config-b9tnx" Dec 09 09:04:40 crc kubenswrapper[4786]: I1209 09:04:40.352050 4786 generic.go:334] "Generic (PLEG): container finished" podID="6c5ae8f0-bfa8-4fe2-81c3-289021674179" containerID="4b7b81266399ea17629bd265894be29c468feb5d1b87d3c08d064b32abc1e0c8" exitCode=0 Dec 09 09:04:40 crc kubenswrapper[4786]: I1209 09:04:40.353091 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"6c5ae8f0-bfa8-4fe2-81c3-289021674179","Type":"ContainerDied","Data":"4b7b81266399ea17629bd265894be29c468feb5d1b87d3c08d064b32abc1e0c8"} Dec 09 09:04:40 crc kubenswrapper[4786]: I1209 09:04:40.358665 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59","Type":"ContainerStarted","Data":"5c6885e3c95dc84e7dc0bf6c439805aa49aed326abee7f1899d4b79f619ace9d"} Dec 09 09:04:40 crc kubenswrapper[4786]: I1209 09:04:40.359997 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 09 09:04:40 crc kubenswrapper[4786]: I1209 09:04:40.398604 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 09 09:04:40 crc kubenswrapper[4786]: I1209 09:04:40.780747 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/rabbitmq-server-0" podStartSLOduration=42.053557672 podStartE2EDuration="1m46.780719192s" podCreationTimestamp="2025-12-09 09:02:54 +0000 UTC" firstStartedPulling="2025-12-09 09:02:57.809290296 +0000 UTC m=+1143.692911562" lastFinishedPulling="2025-12-09 09:04:02.536451846 +0000 UTC m=+1208.420073082" observedRunningTime="2025-12-09 09:04:40.43015878 +0000 UTC m=+1246.313780006" watchObservedRunningTime="2025-12-09 09:04:40.780719192 +0000 UTC m=+1246.664340418" Dec 09 09:04:40 crc kubenswrapper[4786]: I1209 09:04:40.788264 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-vv7k4-config-b9tnx"] Dec 09 09:04:41 crc kubenswrapper[4786]: I1209 09:04:41.285122 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Dec 09 09:04:41 crc kubenswrapper[4786]: I1209 09:04:41.426798 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Dec 09 09:04:42 crc kubenswrapper[4786]: I1209 09:04:42.390241 4786 generic.go:334] "Generic (PLEG): container finished" podID="01efacfa-e002-4e0d-aa6b-91217baa22ca" containerID="707b5d59710fe68771b66a3ff78dabeef8338b89f750f1e932482d67d5771632" exitCode=0 Dec 09 09:04:42 crc kubenswrapper[4786]: I1209 09:04:42.390627 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"01efacfa-e002-4e0d-aa6b-91217baa22ca","Type":"ContainerDied","Data":"707b5d59710fe68771b66a3ff78dabeef8338b89f750f1e932482d67d5771632"} Dec 09 09:04:42 crc kubenswrapper[4786]: I1209 09:04:42.399842 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"f79a7868-1d59-4d6f-ac51-528c634a9b4f","Type":"ContainerStarted","Data":"2048f0f891ba86f1239ee7ebcd4ed9bbbaebbb443598237a854c0580f5115654"} Dec 09 09:04:42 crc kubenswrapper[4786]: I1209 09:04:42.406456 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-controller-vv7k4-config-b9tnx" event={"ID":"04fcfde6-82b8-4e94-9eb2-1e730a774382","Type":"ContainerStarted","Data":"19eb7d50b4d7fafedc19597ad7db416534a8a7e5b24bb2749da5ce5dc13610c5"} Dec 09 09:04:42 crc kubenswrapper[4786]: I1209 09:04:42.715542 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-vv7k4" podUID="7cad500b-e392-4774-a524-02587da67379" containerName="ovn-controller" probeResult="failure" output=< Dec 09 09:04:42 crc kubenswrapper[4786]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Dec 09 09:04:42 crc kubenswrapper[4786]: > Dec 09 09:04:42 crc kubenswrapper[4786]: E1209 09:04:42.774132 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/prometheus-metric-storage-0" podUID="6c7ace95-341e-4733-af87-8c256ae0d9e6" Dec 09 09:04:43 crc kubenswrapper[4786]: I1209 09:04:43.046966 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 09 09:04:43 crc kubenswrapper[4786]: I1209 09:04:43.179963 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-775897b69-78frg" Dec 09 09:04:43 crc kubenswrapper[4786]: I1209 09:04:43.217318 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Dec 09 09:04:43 crc kubenswrapper[4786]: I1209 09:04:43.333208 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cdb9ff747-rqmsj"] Dec 09 09:04:43 crc kubenswrapper[4786]: I1209 09:04:43.333647 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7cdb9ff747-rqmsj" podUID="6b6dc383-3ee2-4b70-b43c-7b862f235163" containerName="dnsmasq-dns" 
containerID="cri-o://cef57bdf5f7f6e1f86768251f3f0ba436f25d7fca6658345981f3623f162af8d" gracePeriod=10 Dec 09 09:04:43 crc kubenswrapper[4786]: I1209 09:04:43.615632 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Dec 09 09:04:43 crc kubenswrapper[4786]: I1209 09:04:43.634708 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"6c5ae8f0-bfa8-4fe2-81c3-289021674179","Type":"ContainerStarted","Data":"e868868e27c22d558e81a4153e458990528dc5cfed65612797563d92628a6784"} Dec 09 09:04:43 crc kubenswrapper[4786]: I1209 09:04:43.635896 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-notifications-server-0" Dec 09 09:04:43 crc kubenswrapper[4786]: I1209 09:04:43.639448 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vv7k4-config-b9tnx" event={"ID":"04fcfde6-82b8-4e94-9eb2-1e730a774382","Type":"ContainerStarted","Data":"7a33c297eb318071eec7c91c8d62a9d40ac60dbb6223966d28a5508d876c643d"} Dec 09 09:04:43 crc kubenswrapper[4786]: I1209 09:04:43.661020 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6c7ace95-341e-4733-af87-8c256ae0d9e6","Type":"ContainerStarted","Data":"1beada5bd002c662c115458b43b270b4583f2c75223b3101c5a07fd95d24a37c"} Dec 09 09:04:43 crc kubenswrapper[4786]: I1209 09:04:43.677260 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"01efacfa-e002-4e0d-aa6b-91217baa22ca","Type":"ContainerStarted","Data":"4366b1674ffd020712dba1a21ba84648563b9bbd92a30327e54ce48d6a0fef93"} Dec 09 09:04:43 crc kubenswrapper[4786]: I1209 09:04:43.678452 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:04:43 crc kubenswrapper[4786]: I1209 09:04:43.681309 4786 generic.go:334] "Generic (PLEG): container 
finished" podID="6b6dc383-3ee2-4b70-b43c-7b862f235163" containerID="cef57bdf5f7f6e1f86768251f3f0ba436f25d7fca6658345981f3623f162af8d" exitCode=0 Dec 09 09:04:43 crc kubenswrapper[4786]: I1209 09:04:43.681646 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cdb9ff747-rqmsj" event={"ID":"6b6dc383-3ee2-4b70-b43c-7b862f235163","Type":"ContainerDied","Data":"cef57bdf5f7f6e1f86768251f3f0ba436f25d7fca6658345981f3623f162af8d"} Dec 09 09:04:43 crc kubenswrapper[4786]: I1209 09:04:43.728023 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-notifications-server-0" podStartSLOduration=43.588164199 podStartE2EDuration="1m48.727994138s" podCreationTimestamp="2025-12-09 09:02:55 +0000 UTC" firstStartedPulling="2025-12-09 09:02:59.175634005 +0000 UTC m=+1145.059255231" lastFinishedPulling="2025-12-09 09:04:04.315463934 +0000 UTC m=+1210.199085170" observedRunningTime="2025-12-09 09:04:43.709182352 +0000 UTC m=+1249.592803608" watchObservedRunningTime="2025-12-09 09:04:43.727994138 +0000 UTC m=+1249.611615364" Dec 09 09:04:43 crc kubenswrapper[4786]: I1209 09:04:43.760839 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-vv7k4-config-b9tnx" podStartSLOduration=4.760801189 podStartE2EDuration="4.760801189s" podCreationTimestamp="2025-12-09 09:04:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:04:43.737932374 +0000 UTC m=+1249.621553610" watchObservedRunningTime="2025-12-09 09:04:43.760801189 +0000 UTC m=+1249.644422415" Dec 09 09:04:43 crc kubenswrapper[4786]: I1209 09:04:43.827996 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=-9223371928.026802 podStartE2EDuration="1m48.827974621s" podCreationTimestamp="2025-12-09 09:02:55 +0000 UTC" 
firstStartedPulling="2025-12-09 09:02:58.317140562 +0000 UTC m=+1144.200761788" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:04:43.822040755 +0000 UTC m=+1249.705661981" watchObservedRunningTime="2025-12-09 09:04:43.827974621 +0000 UTC m=+1249.711595847" Dec 09 09:04:44 crc kubenswrapper[4786]: I1209 09:04:44.234126 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cdb9ff747-rqmsj" Dec 09 09:04:44 crc kubenswrapper[4786]: I1209 09:04:44.334016 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b6dc383-3ee2-4b70-b43c-7b862f235163-dns-svc\") pod \"6b6dc383-3ee2-4b70-b43c-7b862f235163\" (UID: \"6b6dc383-3ee2-4b70-b43c-7b862f235163\") " Dec 09 09:04:44 crc kubenswrapper[4786]: I1209 09:04:44.334108 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b6dc383-3ee2-4b70-b43c-7b862f235163-config\") pod \"6b6dc383-3ee2-4b70-b43c-7b862f235163\" (UID: \"6b6dc383-3ee2-4b70-b43c-7b862f235163\") " Dec 09 09:04:44 crc kubenswrapper[4786]: I1209 09:04:44.334220 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccp9q\" (UniqueName: \"kubernetes.io/projected/6b6dc383-3ee2-4b70-b43c-7b862f235163-kube-api-access-ccp9q\") pod \"6b6dc383-3ee2-4b70-b43c-7b862f235163\" (UID: \"6b6dc383-3ee2-4b70-b43c-7b862f235163\") " Dec 09 09:04:44 crc kubenswrapper[4786]: I1209 09:04:44.334284 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6b6dc383-3ee2-4b70-b43c-7b862f235163-ovsdbserver-sb\") pod \"6b6dc383-3ee2-4b70-b43c-7b862f235163\" (UID: \"6b6dc383-3ee2-4b70-b43c-7b862f235163\") " Dec 09 09:04:44 crc kubenswrapper[4786]: I1209 09:04:44.334342 4786 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6b6dc383-3ee2-4b70-b43c-7b862f235163-ovsdbserver-nb\") pod \"6b6dc383-3ee2-4b70-b43c-7b862f235163\" (UID: \"6b6dc383-3ee2-4b70-b43c-7b862f235163\") " Dec 09 09:04:44 crc kubenswrapper[4786]: I1209 09:04:44.349563 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b6dc383-3ee2-4b70-b43c-7b862f235163-kube-api-access-ccp9q" (OuterVolumeSpecName: "kube-api-access-ccp9q") pod "6b6dc383-3ee2-4b70-b43c-7b862f235163" (UID: "6b6dc383-3ee2-4b70-b43c-7b862f235163"). InnerVolumeSpecName "kube-api-access-ccp9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:04:44 crc kubenswrapper[4786]: I1209 09:04:44.393533 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b6dc383-3ee2-4b70-b43c-7b862f235163-config" (OuterVolumeSpecName: "config") pod "6b6dc383-3ee2-4b70-b43c-7b862f235163" (UID: "6b6dc383-3ee2-4b70-b43c-7b862f235163"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:04:44 crc kubenswrapper[4786]: I1209 09:04:44.397296 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b6dc383-3ee2-4b70-b43c-7b862f235163-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6b6dc383-3ee2-4b70-b43c-7b862f235163" (UID: "6b6dc383-3ee2-4b70-b43c-7b862f235163"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:04:44 crc kubenswrapper[4786]: I1209 09:04:44.423249 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b6dc383-3ee2-4b70-b43c-7b862f235163-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6b6dc383-3ee2-4b70-b43c-7b862f235163" (UID: "6b6dc383-3ee2-4b70-b43c-7b862f235163"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 09:04:44 crc kubenswrapper[4786]: I1209 09:04:44.431285 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b6dc383-3ee2-4b70-b43c-7b862f235163-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6b6dc383-3ee2-4b70-b43c-7b862f235163" (UID: "6b6dc383-3ee2-4b70-b43c-7b862f235163"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 09:04:44 crc kubenswrapper[4786]: I1209 09:04:44.437256 4786 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6b6dc383-3ee2-4b70-b43c-7b862f235163-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 09 09:04:44 crc kubenswrapper[4786]: I1209 09:04:44.437302 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b6dc383-3ee2-4b70-b43c-7b862f235163-config\") on node \"crc\" DevicePath \"\""
Dec 09 09:04:44 crc kubenswrapper[4786]: I1209 09:04:44.437317 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ccp9q\" (UniqueName: \"kubernetes.io/projected/6b6dc383-3ee2-4b70-b43c-7b862f235163-kube-api-access-ccp9q\") on node \"crc\" DevicePath \"\""
Dec 09 09:04:44 crc kubenswrapper[4786]: I1209 09:04:44.437333 4786 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6b6dc383-3ee2-4b70-b43c-7b862f235163-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Dec 09 09:04:44 crc kubenswrapper[4786]: I1209 09:04:44.437343 4786 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6b6dc383-3ee2-4b70-b43c-7b862f235163-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 09 09:04:44 crc kubenswrapper[4786]: I1209 09:04:44.707400 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"f79a7868-1d59-4d6f-ac51-528c634a9b4f","Type":"ContainerStarted","Data":"4c607a7e5dd751229460dc4ebb46c995e1238a53aa93ee48685a11916cedb674"}
Dec 09 09:04:44 crc kubenswrapper[4786]: I1209 09:04:44.710240 4786 generic.go:334] "Generic (PLEG): container finished" podID="04fcfde6-82b8-4e94-9eb2-1e730a774382" containerID="7a33c297eb318071eec7c91c8d62a9d40ac60dbb6223966d28a5508d876c643d" exitCode=0
Dec 09 09:04:44 crc kubenswrapper[4786]: I1209 09:04:44.710311 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vv7k4-config-b9tnx" event={"ID":"04fcfde6-82b8-4e94-9eb2-1e730a774382","Type":"ContainerDied","Data":"7a33c297eb318071eec7c91c8d62a9d40ac60dbb6223966d28a5508d876c643d"}
Dec 09 09:04:44 crc kubenswrapper[4786]: I1209 09:04:44.723500 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cdb9ff747-rqmsj"
Dec 09 09:04:44 crc kubenswrapper[4786]: I1209 09:04:44.723972 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cdb9ff747-rqmsj" event={"ID":"6b6dc383-3ee2-4b70-b43c-7b862f235163","Type":"ContainerDied","Data":"e68d0ea2ae657cc107b35c742332a0a5bca4233da1240ba900d20c5d7788315b"}
Dec 09 09:04:44 crc kubenswrapper[4786]: I1209 09:04:44.724034 4786 scope.go:117] "RemoveContainer" containerID="cef57bdf5f7f6e1f86768251f3f0ba436f25d7fca6658345981f3623f162af8d"
Dec 09 09:04:44 crc kubenswrapper[4786]: I1209 09:04:44.752887 4786 scope.go:117] "RemoveContainer" containerID="553f3cf085c45d3fb7f0468456139608d403a6e17604320137458ba8e39ceeba"
Dec 09 09:04:44 crc kubenswrapper[4786]: I1209 09:04:44.772615 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cdb9ff747-rqmsj"]
Dec 09 09:04:44 crc kubenswrapper[4786]: I1209 09:04:44.789991 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7cdb9ff747-rqmsj"]
Dec 09 09:04:45 crc kubenswrapper[4786]: I1209 09:04:45.198632 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b6dc383-3ee2-4b70-b43c-7b862f235163" path="/var/lib/kubelet/pods/6b6dc383-3ee2-4b70-b43c-7b862f235163/volumes"
Dec 09 09:04:45 crc kubenswrapper[4786]: I1209 09:04:45.733921 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"f79a7868-1d59-4d6f-ac51-528c634a9b4f","Type":"ContainerStarted","Data":"59ebaff6cc876c2d9f2881ae855f0384463577f17ce58d4c240944a2a5752a4a"}
Dec 09 09:04:46 crc kubenswrapper[4786]: I1209 09:04:46.484005 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-vv7k4-config-b9tnx"
Dec 09 09:04:46 crc kubenswrapper[4786]: I1209 09:04:46.598579 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/04fcfde6-82b8-4e94-9eb2-1e730a774382-var-log-ovn\") pod \"04fcfde6-82b8-4e94-9eb2-1e730a774382\" (UID: \"04fcfde6-82b8-4e94-9eb2-1e730a774382\") "
Dec 09 09:04:46 crc kubenswrapper[4786]: I1209 09:04:46.598698 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5s76b\" (UniqueName: \"kubernetes.io/projected/04fcfde6-82b8-4e94-9eb2-1e730a774382-kube-api-access-5s76b\") pod \"04fcfde6-82b8-4e94-9eb2-1e730a774382\" (UID: \"04fcfde6-82b8-4e94-9eb2-1e730a774382\") "
Dec 09 09:04:46 crc kubenswrapper[4786]: I1209 09:04:46.598717 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04fcfde6-82b8-4e94-9eb2-1e730a774382-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "04fcfde6-82b8-4e94-9eb2-1e730a774382" (UID: "04fcfde6-82b8-4e94-9eb2-1e730a774382"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 09 09:04:46 crc kubenswrapper[4786]: I1209 09:04:46.598753 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/04fcfde6-82b8-4e94-9eb2-1e730a774382-scripts\") pod \"04fcfde6-82b8-4e94-9eb2-1e730a774382\" (UID: \"04fcfde6-82b8-4e94-9eb2-1e730a774382\") "
Dec 09 09:04:46 crc kubenswrapper[4786]: I1209 09:04:46.599134 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/04fcfde6-82b8-4e94-9eb2-1e730a774382-var-run-ovn\") pod \"04fcfde6-82b8-4e94-9eb2-1e730a774382\" (UID: \"04fcfde6-82b8-4e94-9eb2-1e730a774382\") "
Dec 09 09:04:46 crc kubenswrapper[4786]: I1209 09:04:46.599285 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/04fcfde6-82b8-4e94-9eb2-1e730a774382-additional-scripts\") pod \"04fcfde6-82b8-4e94-9eb2-1e730a774382\" (UID: \"04fcfde6-82b8-4e94-9eb2-1e730a774382\") "
Dec 09 09:04:46 crc kubenswrapper[4786]: I1209 09:04:46.599332 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/04fcfde6-82b8-4e94-9eb2-1e730a774382-var-run\") pod \"04fcfde6-82b8-4e94-9eb2-1e730a774382\" (UID: \"04fcfde6-82b8-4e94-9eb2-1e730a774382\") "
Dec 09 09:04:46 crc kubenswrapper[4786]: I1209 09:04:46.599581 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04fcfde6-82b8-4e94-9eb2-1e730a774382-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "04fcfde6-82b8-4e94-9eb2-1e730a774382" (UID: "04fcfde6-82b8-4e94-9eb2-1e730a774382"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 09 09:04:46 crc kubenswrapper[4786]: I1209 09:04:46.600022 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04fcfde6-82b8-4e94-9eb2-1e730a774382-scripts" (OuterVolumeSpecName: "scripts") pod "04fcfde6-82b8-4e94-9eb2-1e730a774382" (UID: "04fcfde6-82b8-4e94-9eb2-1e730a774382"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 09:04:46 crc kubenswrapper[4786]: I1209 09:04:46.600074 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04fcfde6-82b8-4e94-9eb2-1e730a774382-var-run" (OuterVolumeSpecName: "var-run") pod "04fcfde6-82b8-4e94-9eb2-1e730a774382" (UID: "04fcfde6-82b8-4e94-9eb2-1e730a774382"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 09 09:04:46 crc kubenswrapper[4786]: I1209 09:04:46.600030 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04fcfde6-82b8-4e94-9eb2-1e730a774382-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "04fcfde6-82b8-4e94-9eb2-1e730a774382" (UID: "04fcfde6-82b8-4e94-9eb2-1e730a774382"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 09:04:46 crc kubenswrapper[4786]: I1209 09:04:46.600318 4786 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/04fcfde6-82b8-4e94-9eb2-1e730a774382-additional-scripts\") on node \"crc\" DevicePath \"\""
Dec 09 09:04:46 crc kubenswrapper[4786]: I1209 09:04:46.600348 4786 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/04fcfde6-82b8-4e94-9eb2-1e730a774382-var-run\") on node \"crc\" DevicePath \"\""
Dec 09 09:04:46 crc kubenswrapper[4786]: I1209 09:04:46.600366 4786 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/04fcfde6-82b8-4e94-9eb2-1e730a774382-var-log-ovn\") on node \"crc\" DevicePath \"\""
Dec 09 09:04:46 crc kubenswrapper[4786]: I1209 09:04:46.600387 4786 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/04fcfde6-82b8-4e94-9eb2-1e730a774382-scripts\") on node \"crc\" DevicePath \"\""
Dec 09 09:04:46 crc kubenswrapper[4786]: I1209 09:04:46.600399 4786 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/04fcfde6-82b8-4e94-9eb2-1e730a774382-var-run-ovn\") on node \"crc\" DevicePath \"\""
Dec 09 09:04:46 crc kubenswrapper[4786]: I1209 09:04:46.603071 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04fcfde6-82b8-4e94-9eb2-1e730a774382-kube-api-access-5s76b" (OuterVolumeSpecName: "kube-api-access-5s76b") pod "04fcfde6-82b8-4e94-9eb2-1e730a774382" (UID: "04fcfde6-82b8-4e94-9eb2-1e730a774382"). InnerVolumeSpecName "kube-api-access-5s76b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 09:04:46 crc kubenswrapper[4786]: I1209 09:04:46.702906 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5s76b\" (UniqueName: \"kubernetes.io/projected/04fcfde6-82b8-4e94-9eb2-1e730a774382-kube-api-access-5s76b\") on node \"crc\" DevicePath \"\""
Dec 09 09:04:46 crc kubenswrapper[4786]: I1209 09:04:46.744812 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-vv7k4-config-b9tnx" event={"ID":"04fcfde6-82b8-4e94-9eb2-1e730a774382","Type":"ContainerDied","Data":"19eb7d50b4d7fafedc19597ad7db416534a8a7e5b24bb2749da5ce5dc13610c5"}
Dec 09 09:04:46 crc kubenswrapper[4786]: I1209 09:04:46.744848 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-vv7k4-config-b9tnx"
Dec 09 09:04:46 crc kubenswrapper[4786]: I1209 09:04:46.744870 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19eb7d50b4d7fafedc19597ad7db416534a8a7e5b24bb2749da5ce5dc13610c5"
Dec 09 09:04:46 crc kubenswrapper[4786]: I1209 09:04:46.747746 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6c7ace95-341e-4733-af87-8c256ae0d9e6","Type":"ContainerStarted","Data":"35269a2b95415aa9bd906f17685722e9132b9949ef9b0a71eba0e666d2b32040"}
Dec 09 09:04:46 crc kubenswrapper[4786]: I1209 09:04:46.747992 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Dec 09 09:04:46 crc kubenswrapper[4786]: I1209 09:04:46.779098 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=5.873509634 podStartE2EDuration="7.779071922s" podCreationTimestamp="2025-12-09 09:04:39 +0000 UTC" firstStartedPulling="2025-12-09 09:04:42.085536659 +0000 UTC m=+1247.969157885" lastFinishedPulling="2025-12-09 09:04:43.991098947 +0000 UTC m=+1249.874720173" observedRunningTime="2025-12-09 09:04:46.771252358 +0000 UTC m=+1252.654873604" watchObservedRunningTime="2025-12-09 09:04:46.779071922 +0000 UTC m=+1252.662693148"
Dec 09 09:04:46 crc kubenswrapper[4786]: I1209 09:04:46.813773 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=4.492363121 podStartE2EDuration="1m43.8137465s" podCreationTimestamp="2025-12-09 09:03:03 +0000 UTC" firstStartedPulling="2025-12-09 09:03:06.978726305 +0000 UTC m=+1152.862347521" lastFinishedPulling="2025-12-09 09:04:46.300109674 +0000 UTC m=+1252.183730900" observedRunningTime="2025-12-09 09:04:46.807774401 +0000 UTC m=+1252.691395627" watchObservedRunningTime="2025-12-09 09:04:46.8137465 +0000 UTC m=+1252.697367736"
Dec 09 09:04:47 crc kubenswrapper[4786]: I1209 09:04:47.637978 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-vv7k4-config-b9tnx"]
Dec 09 09:04:47 crc kubenswrapper[4786]: I1209 09:04:47.654300 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-vv7k4-config-b9tnx"]
Dec 09 09:04:47 crc kubenswrapper[4786]: I1209 09:04:47.693027 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-vv7k4"
Dec 09 09:04:49 crc kubenswrapper[4786]: I1209 09:04:49.199889 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04fcfde6-82b8-4e94-9eb2-1e730a774382" path="/var/lib/kubelet/pods/04fcfde6-82b8-4e94-9eb2-1e730a774382/volumes"
Dec 09 09:04:49 crc kubenswrapper[4786]: I1209 09:04:49.457282 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Dec 09 09:04:49 crc kubenswrapper[4786]: I1209 09:04:49.457342 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0"
Dec 09 09:04:49 crc kubenswrapper[4786]: I1209 09:04:49.462528 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0"
Dec 09 09:04:49 crc kubenswrapper[4786]: I1209 09:04:49.772946 4786 generic.go:334] "Generic (PLEG): container finished" podID="47aa56da-70ed-4ee4-a83c-35c8116a0ec3" containerID="5db3aa08a00c86907de91eb37c09a4708252194d5d243a74c856a5e5649c2ed2" exitCode=0
Dec 09 09:04:49 crc kubenswrapper[4786]: I1209 09:04:49.773043 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-ph8nr" event={"ID":"47aa56da-70ed-4ee4-a83c-35c8116a0ec3","Type":"ContainerDied","Data":"5db3aa08a00c86907de91eb37c09a4708252194d5d243a74c856a5e5649c2ed2"}
Dec 09 09:04:49 crc kubenswrapper[4786]: I1209 09:04:49.774400 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Dec 09 09:04:50 crc kubenswrapper[4786]: I1209 09:04:50.244974 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-ncqnq"]
Dec 09 09:04:50 crc kubenswrapper[4786]: E1209 09:04:50.245474 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04fcfde6-82b8-4e94-9eb2-1e730a774382" containerName="ovn-config"
Dec 09 09:04:50 crc kubenswrapper[4786]: I1209 09:04:50.245493 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="04fcfde6-82b8-4e94-9eb2-1e730a774382" containerName="ovn-config"
Dec 09 09:04:50 crc kubenswrapper[4786]: E1209 09:04:50.245508 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b6dc383-3ee2-4b70-b43c-7b862f235163" containerName="init"
Dec 09 09:04:50 crc kubenswrapper[4786]: I1209 09:04:50.245516 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b6dc383-3ee2-4b70-b43c-7b862f235163" containerName="init"
Dec 09 09:04:50 crc kubenswrapper[4786]: E1209 09:04:50.245535 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b6dc383-3ee2-4b70-b43c-7b862f235163" containerName="dnsmasq-dns"
Dec 09 09:04:50 crc kubenswrapper[4786]: I1209 09:04:50.245543 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b6dc383-3ee2-4b70-b43c-7b862f235163" containerName="dnsmasq-dns"
Dec 09 09:04:50 crc kubenswrapper[4786]: I1209 09:04:50.245896 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b6dc383-3ee2-4b70-b43c-7b862f235163" containerName="dnsmasq-dns"
Dec 09 09:04:50 crc kubenswrapper[4786]: I1209 09:04:50.245960 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="04fcfde6-82b8-4e94-9eb2-1e730a774382" containerName="ovn-config"
Dec 09 09:04:50 crc kubenswrapper[4786]: I1209 09:04:50.246887 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-ncqnq"
Dec 09 09:04:50 crc kubenswrapper[4786]: I1209 09:04:50.265456 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-ncqnq"]
Dec 09 09:04:50 crc kubenswrapper[4786]: I1209 09:04:50.394855 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-229mk\" (UniqueName: \"kubernetes.io/projected/3d297c4a-7ce4-4c61-9e72-d4503808c184-kube-api-access-229mk\") pod \"keystone-db-create-ncqnq\" (UID: \"3d297c4a-7ce4-4c61-9e72-d4503808c184\") " pod="openstack/keystone-db-create-ncqnq"
Dec 09 09:04:50 crc kubenswrapper[4786]: I1209 09:04:50.488868 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-w76vw"]
Dec 09 09:04:50 crc kubenswrapper[4786]: I1209 09:04:50.491935 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-w76vw"
Dec 09 09:04:50 crc kubenswrapper[4786]: I1209 09:04:50.497305 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-229mk\" (UniqueName: \"kubernetes.io/projected/3d297c4a-7ce4-4c61-9e72-d4503808c184-kube-api-access-229mk\") pod \"keystone-db-create-ncqnq\" (UID: \"3d297c4a-7ce4-4c61-9e72-d4503808c184\") " pod="openstack/keystone-db-create-ncqnq"
Dec 09 09:04:50 crc kubenswrapper[4786]: I1209 09:04:50.499258 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-w76vw"]
Dec 09 09:04:50 crc kubenswrapper[4786]: I1209 09:04:50.517189 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-229mk\" (UniqueName: \"kubernetes.io/projected/3d297c4a-7ce4-4c61-9e72-d4503808c184-kube-api-access-229mk\") pod \"keystone-db-create-ncqnq\" (UID: \"3d297c4a-7ce4-4c61-9e72-d4503808c184\") " pod="openstack/keystone-db-create-ncqnq"
Dec 09 09:04:50 crc kubenswrapper[4786]: I1209 09:04:50.569297 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-ncqnq"
Dec 09 09:04:50 crc kubenswrapper[4786]: I1209 09:04:50.598676 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfv4k\" (UniqueName: \"kubernetes.io/projected/fdb0b7af-fae9-45a5-924b-a3507c046c28-kube-api-access-zfv4k\") pod \"placement-db-create-w76vw\" (UID: \"fdb0b7af-fae9-45a5-924b-a3507c046c28\") " pod="openstack/placement-db-create-w76vw"
Dec 09 09:04:50 crc kubenswrapper[4786]: I1209 09:04:50.704777 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfv4k\" (UniqueName: \"kubernetes.io/projected/fdb0b7af-fae9-45a5-924b-a3507c046c28-kube-api-access-zfv4k\") pod \"placement-db-create-w76vw\" (UID: \"fdb0b7af-fae9-45a5-924b-a3507c046c28\") " pod="openstack/placement-db-create-w76vw"
Dec 09 09:04:50 crc kubenswrapper[4786]: I1209 09:04:50.755807 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfv4k\" (UniqueName: \"kubernetes.io/projected/fdb0b7af-fae9-45a5-924b-a3507c046c28-kube-api-access-zfv4k\") pod \"placement-db-create-w76vw\" (UID: \"fdb0b7af-fae9-45a5-924b-a3507c046c28\") " pod="openstack/placement-db-create-w76vw"
Dec 09 09:04:50 crc kubenswrapper[4786]: I1209 09:04:50.805588 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-w76vw"
Dec 09 09:04:51 crc kubenswrapper[4786]: I1209 09:04:51.099963 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-ncqnq"]
Dec 09 09:04:51 crc kubenswrapper[4786]: I1209 09:04:51.249016 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-w76vw"]
Dec 09 09:04:51 crc kubenswrapper[4786]: I1209 09:04:51.439414 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-ph8nr"
Dec 09 09:04:51 crc kubenswrapper[4786]: I1209 09:04:51.525585 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/47aa56da-70ed-4ee4-a83c-35c8116a0ec3-etc-swift\") pod \"47aa56da-70ed-4ee4-a83c-35c8116a0ec3\" (UID: \"47aa56da-70ed-4ee4-a83c-35c8116a0ec3\") "
Dec 09 09:04:51 crc kubenswrapper[4786]: I1209 09:04:51.525707 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/47aa56da-70ed-4ee4-a83c-35c8116a0ec3-swiftconf\") pod \"47aa56da-70ed-4ee4-a83c-35c8116a0ec3\" (UID: \"47aa56da-70ed-4ee4-a83c-35c8116a0ec3\") "
Dec 09 09:04:51 crc kubenswrapper[4786]: I1209 09:04:51.525824 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5dkd\" (UniqueName: \"kubernetes.io/projected/47aa56da-70ed-4ee4-a83c-35c8116a0ec3-kube-api-access-w5dkd\") pod \"47aa56da-70ed-4ee4-a83c-35c8116a0ec3\" (UID: \"47aa56da-70ed-4ee4-a83c-35c8116a0ec3\") "
Dec 09 09:04:51 crc kubenswrapper[4786]: I1209 09:04:51.525880 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47aa56da-70ed-4ee4-a83c-35c8116a0ec3-combined-ca-bundle\") pod \"47aa56da-70ed-4ee4-a83c-35c8116a0ec3\" (UID: \"47aa56da-70ed-4ee4-a83c-35c8116a0ec3\") "
Dec 09 09:04:51 crc kubenswrapper[4786]: I1209 09:04:51.556714 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/47aa56da-70ed-4ee4-a83c-35c8116a0ec3-scripts\") pod \"47aa56da-70ed-4ee4-a83c-35c8116a0ec3\" (UID: \"47aa56da-70ed-4ee4-a83c-35c8116a0ec3\") "
Dec 09 09:04:51 crc kubenswrapper[4786]: I1209 09:04:51.556881 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/47aa56da-70ed-4ee4-a83c-35c8116a0ec3-dispersionconf\") pod \"47aa56da-70ed-4ee4-a83c-35c8116a0ec3\" (UID: \"47aa56da-70ed-4ee4-a83c-35c8116a0ec3\") "
Dec 09 09:04:51 crc kubenswrapper[4786]: I1209 09:04:51.556927 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/47aa56da-70ed-4ee4-a83c-35c8116a0ec3-ring-data-devices\") pod \"47aa56da-70ed-4ee4-a83c-35c8116a0ec3\" (UID: \"47aa56da-70ed-4ee4-a83c-35c8116a0ec3\") "
Dec 09 09:04:51 crc kubenswrapper[4786]: I1209 09:04:51.561399 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/47aa56da-70ed-4ee4-a83c-35c8116a0ec3-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "47aa56da-70ed-4ee4-a83c-35c8116a0ec3" (UID: "47aa56da-70ed-4ee4-a83c-35c8116a0ec3"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 09:04:51 crc kubenswrapper[4786]: I1209 09:04:51.570520 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47aa56da-70ed-4ee4-a83c-35c8116a0ec3-kube-api-access-w5dkd" (OuterVolumeSpecName: "kube-api-access-w5dkd") pod "47aa56da-70ed-4ee4-a83c-35c8116a0ec3" (UID: "47aa56da-70ed-4ee4-a83c-35c8116a0ec3"). InnerVolumeSpecName "kube-api-access-w5dkd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 09:04:51 crc kubenswrapper[4786]: I1209 09:04:51.571085 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47aa56da-70ed-4ee4-a83c-35c8116a0ec3-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "47aa56da-70ed-4ee4-a83c-35c8116a0ec3" (UID: "47aa56da-70ed-4ee4-a83c-35c8116a0ec3"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 09:04:51 crc kubenswrapper[4786]: I1209 09:04:51.583768 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47aa56da-70ed-4ee4-a83c-35c8116a0ec3-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "47aa56da-70ed-4ee4-a83c-35c8116a0ec3" (UID: "47aa56da-70ed-4ee4-a83c-35c8116a0ec3"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 09:04:51 crc kubenswrapper[4786]: I1209 09:04:51.600127 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47aa56da-70ed-4ee4-a83c-35c8116a0ec3-scripts" (OuterVolumeSpecName: "scripts") pod "47aa56da-70ed-4ee4-a83c-35c8116a0ec3" (UID: "47aa56da-70ed-4ee4-a83c-35c8116a0ec3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 09:04:51 crc kubenswrapper[4786]: I1209 09:04:51.635686 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47aa56da-70ed-4ee4-a83c-35c8116a0ec3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "47aa56da-70ed-4ee4-a83c-35c8116a0ec3" (UID: "47aa56da-70ed-4ee4-a83c-35c8116a0ec3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 09:04:51 crc kubenswrapper[4786]: I1209 09:04:51.639652 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/47aa56da-70ed-4ee4-a83c-35c8116a0ec3-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "47aa56da-70ed-4ee4-a83c-35c8116a0ec3" (UID: "47aa56da-70ed-4ee4-a83c-35c8116a0ec3"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 09:04:51 crc kubenswrapper[4786]: I1209 09:04:51.662908 4786 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/47aa56da-70ed-4ee4-a83c-35c8116a0ec3-scripts\") on node \"crc\" DevicePath \"\""
Dec 09 09:04:51 crc kubenswrapper[4786]: I1209 09:04:51.662944 4786 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/47aa56da-70ed-4ee4-a83c-35c8116a0ec3-dispersionconf\") on node \"crc\" DevicePath \"\""
Dec 09 09:04:51 crc kubenswrapper[4786]: I1209 09:04:51.662956 4786 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/47aa56da-70ed-4ee4-a83c-35c8116a0ec3-ring-data-devices\") on node \"crc\" DevicePath \"\""
Dec 09 09:04:51 crc kubenswrapper[4786]: I1209 09:04:51.662966 4786 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/47aa56da-70ed-4ee4-a83c-35c8116a0ec3-etc-swift\") on node \"crc\" DevicePath \"\""
Dec 09 09:04:51 crc kubenswrapper[4786]: I1209 09:04:51.662975 4786 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/47aa56da-70ed-4ee4-a83c-35c8116a0ec3-swiftconf\") on node \"crc\" DevicePath \"\""
Dec 09 09:04:51 crc kubenswrapper[4786]: I1209 09:04:51.662985 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5dkd\" (UniqueName: \"kubernetes.io/projected/47aa56da-70ed-4ee4-a83c-35c8116a0ec3-kube-api-access-w5dkd\") on node \"crc\" DevicePath \"\""
Dec 09 09:04:51 crc kubenswrapper[4786]: I1209 09:04:51.662996 4786 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47aa56da-70ed-4ee4-a83c-35c8116a0ec3-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 09 09:04:51 crc kubenswrapper[4786]: I1209 09:04:51.808492 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-ph8nr"
Dec 09 09:04:51 crc kubenswrapper[4786]: I1209 09:04:51.808435 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-ph8nr" event={"ID":"47aa56da-70ed-4ee4-a83c-35c8116a0ec3","Type":"ContainerDied","Data":"1fae312e622d2fa43190db601a8968abe9a4439a6de070575182ad5262bdd248"}
Dec 09 09:04:51 crc kubenswrapper[4786]: I1209 09:04:51.808710 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1fae312e622d2fa43190db601a8968abe9a4439a6de070575182ad5262bdd248"
Dec 09 09:04:51 crc kubenswrapper[4786]: I1209 09:04:51.809609 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-w76vw" event={"ID":"fdb0b7af-fae9-45a5-924b-a3507c046c28","Type":"ContainerStarted","Data":"fe32d8889b5ded4886eed4d42c078bf23a659e61121573ad72e6fedfbb356f7f"}
Dec 09 09:04:51 crc kubenswrapper[4786]: I1209 09:04:51.810661 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-ncqnq" event={"ID":"3d297c4a-7ce4-4c61-9e72-d4503808c184","Type":"ContainerStarted","Data":"cc9a8e37c6c6c4185fd64fbc0e558752ef60ceedb83d31b3e368d56bf6be4286"}
Dec 09 09:04:52 crc kubenswrapper[4786]: I1209 09:04:52.862938 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Dec 09 09:04:52 crc kubenswrapper[4786]: I1209 09:04:52.866491 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="6c7ace95-341e-4733-af87-8c256ae0d9e6" containerName="config-reloader" containerID="cri-o://28d67d7a43536dae7361dcd0c49349bdda9667f27b048b12ba5b6e34640537de" gracePeriod=600
Dec 09 09:04:52 crc kubenswrapper[4786]: I1209 09:04:52.868306 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="6c7ace95-341e-4733-af87-8c256ae0d9e6" containerName="prometheus" containerID="cri-o://35269a2b95415aa9bd906f17685722e9132b9949ef9b0a71eba0e666d2b32040" gracePeriod=600
Dec 09 09:04:52 crc kubenswrapper[4786]: I1209 09:04:52.868451 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="6c7ace95-341e-4733-af87-8c256ae0d9e6" containerName="thanos-sidecar" containerID="cri-o://1beada5bd002c662c115458b43b270b4583f2c75223b3101c5a07fd95d24a37c" gracePeriod=600
Dec 09 09:04:53 crc kubenswrapper[4786]: I1209 09:04:53.070242 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-create-4pxhc"]
Dec 09 09:04:53 crc kubenswrapper[4786]: E1209 09:04:53.071011 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47aa56da-70ed-4ee4-a83c-35c8116a0ec3" containerName="swift-ring-rebalance"
Dec 09 09:04:53 crc kubenswrapper[4786]: I1209 09:04:53.071043 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="47aa56da-70ed-4ee4-a83c-35c8116a0ec3" containerName="swift-ring-rebalance"
Dec 09 09:04:53 crc kubenswrapper[4786]: I1209 09:04:53.071328 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="47aa56da-70ed-4ee4-a83c-35c8116a0ec3" containerName="swift-ring-rebalance"
Dec 09 09:04:53 crc kubenswrapper[4786]: I1209 09:04:53.074091 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-4pxhc"
Dec 09 09:04:53 crc kubenswrapper[4786]: I1209 09:04:53.115289 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-4pxhc"]
Dec 09 09:04:53 crc kubenswrapper[4786]: I1209 09:04:53.166905 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzl56\" (UniqueName: \"kubernetes.io/projected/e4ab18c7-d1f2-4819-a35d-4e268535e616-kube-api-access-mzl56\") pod \"watcher-db-create-4pxhc\" (UID: \"e4ab18c7-d1f2-4819-a35d-4e268535e616\") " pod="openstack/watcher-db-create-4pxhc"
Dec 09 09:04:53 crc kubenswrapper[4786]: I1209 09:04:53.268637 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzl56\" (UniqueName: \"kubernetes.io/projected/e4ab18c7-d1f2-4819-a35d-4e268535e616-kube-api-access-mzl56\") pod \"watcher-db-create-4pxhc\" (UID: \"e4ab18c7-d1f2-4819-a35d-4e268535e616\") " pod="openstack/watcher-db-create-4pxhc"
Dec 09 09:04:53 crc kubenswrapper[4786]: I1209 09:04:53.315093 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzl56\" (UniqueName: \"kubernetes.io/projected/e4ab18c7-d1f2-4819-a35d-4e268535e616-kube-api-access-mzl56\") pod \"watcher-db-create-4pxhc\" (UID: \"e4ab18c7-d1f2-4819-a35d-4e268535e616\") " pod="openstack/watcher-db-create-4pxhc"
Dec 09 09:04:53 crc kubenswrapper[4786]: I1209 09:04:53.462176 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-4pxhc"
Dec 09 09:04:53 crc kubenswrapper[4786]: I1209 09:04:53.896526 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-4pxhc"]
Dec 09 09:04:53 crc kubenswrapper[4786]: I1209 09:04:53.920725 4786 generic.go:334] "Generic (PLEG): container finished" podID="3d297c4a-7ce4-4c61-9e72-d4503808c184" containerID="edfe652a450bcb713cd635c780b3593e0729e03e312059058d3d01c33f473c2a" exitCode=0
Dec 09 09:04:53 crc kubenswrapper[4786]: I1209 09:04:53.920824 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-ncqnq" event={"ID":"3d297c4a-7ce4-4c61-9e72-d4503808c184","Type":"ContainerDied","Data":"edfe652a450bcb713cd635c780b3593e0729e03e312059058d3d01c33f473c2a"}
Dec 09 09:04:53 crc kubenswrapper[4786]: I1209 09:04:53.950097 4786 generic.go:334] "Generic (PLEG): container finished" podID="6c7ace95-341e-4733-af87-8c256ae0d9e6" containerID="35269a2b95415aa9bd906f17685722e9132b9949ef9b0a71eba0e666d2b32040" exitCode=0
Dec 09 09:04:53 crc kubenswrapper[4786]: I1209 09:04:53.950135 4786 generic.go:334] "Generic (PLEG): container finished" podID="6c7ace95-341e-4733-af87-8c256ae0d9e6" containerID="1beada5bd002c662c115458b43b270b4583f2c75223b3101c5a07fd95d24a37c" exitCode=0
Dec 09 09:04:53 crc kubenswrapper[4786]: I1209 09:04:53.950143 4786 generic.go:334] "Generic (PLEG): container finished" podID="6c7ace95-341e-4733-af87-8c256ae0d9e6" containerID="28d67d7a43536dae7361dcd0c49349bdda9667f27b048b12ba5b6e34640537de" exitCode=0
Dec 09 09:04:53 crc kubenswrapper[4786]: I1209 09:04:53.950193 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6c7ace95-341e-4733-af87-8c256ae0d9e6","Type":"ContainerDied","Data":"35269a2b95415aa9bd906f17685722e9132b9949ef9b0a71eba0e666d2b32040"}
Dec 09 09:04:53 crc kubenswrapper[4786]: I1209 09:04:53.950265 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6c7ace95-341e-4733-af87-8c256ae0d9e6","Type":"ContainerDied","Data":"1beada5bd002c662c115458b43b270b4583f2c75223b3101c5a07fd95d24a37c"}
Dec 09 09:04:53 crc kubenswrapper[4786]: I1209 09:04:53.950285 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6c7ace95-341e-4733-af87-8c256ae0d9e6","Type":"ContainerDied","Data":"28d67d7a43536dae7361dcd0c49349bdda9667f27b048b12ba5b6e34640537de"}
Dec 09 09:04:53 crc kubenswrapper[4786]: W1209 09:04:53.957918 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode4ab18c7_d1f2_4819_a35d_4e268535e616.slice/crio-7d43e7c2661dc6c59493c1eed60b5748898c24822a1a55dfea22dad5968f9cfd WatchSource:0}: Error finding container 7d43e7c2661dc6c59493c1eed60b5748898c24822a1a55dfea22dad5968f9cfd: Status 404 returned error can't find the container with id 7d43e7c2661dc6c59493c1eed60b5748898c24822a1a55dfea22dad5968f9cfd
Dec 09 09:04:53 crc kubenswrapper[4786]: I1209 09:04:53.970269 4786 generic.go:334] "Generic (PLEG): container finished" podID="fdb0b7af-fae9-45a5-924b-a3507c046c28" containerID="d2c2e20b1347eb5d6e31645fefa42e09577123db609cca223caddfd24a79f4f6" exitCode=0
Dec 09 09:04:53 crc kubenswrapper[4786]: I1209 09:04:53.970337 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-w76vw" event={"ID":"fdb0b7af-fae9-45a5-924b-a3507c046c28","Type":"ContainerDied","Data":"d2c2e20b1347eb5d6e31645fefa42e09577123db609cca223caddfd24a79f4f6"}
Dec 09 09:04:54 crc kubenswrapper[4786]: I1209 09:04:54.301440 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Dec 09 09:04:54 crc kubenswrapper[4786]: I1209 09:04:54.330117 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6c7ace95-341e-4733-af87-8c256ae0d9e6-tls-assets\") pod \"6c7ace95-341e-4733-af87-8c256ae0d9e6\" (UID: \"6c7ace95-341e-4733-af87-8c256ae0d9e6\") "
Dec 09 09:04:54 crc kubenswrapper[4786]: I1209 09:04:54.330242 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8vks\" (UniqueName: \"kubernetes.io/projected/6c7ace95-341e-4733-af87-8c256ae0d9e6-kube-api-access-p8vks\") pod \"6c7ace95-341e-4733-af87-8c256ae0d9e6\" (UID: \"6c7ace95-341e-4733-af87-8c256ae0d9e6\") "
Dec 09 09:04:54 crc kubenswrapper[4786]: I1209 09:04:54.330375 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6c7ace95-341e-4733-af87-8c256ae0d9e6-prometheus-metric-storage-rulefiles-0\") pod \"6c7ace95-341e-4733-af87-8c256ae0d9e6\" (UID: \"6c7ace95-341e-4733-af87-8c256ae0d9e6\") "
Dec 09 09:04:54 crc kubenswrapper[4786]: I1209 09:04:54.330712 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-28b31f81-4b46-4930-847d-98cf3cf77e89\") pod \"6c7ace95-341e-4733-af87-8c256ae0d9e6\" (UID: \"6c7ace95-341e-4733-af87-8c256ae0d9e6\") "
Dec 09 09:04:54 crc kubenswrapper[4786]: I1209 09:04:54.330887 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6c7ace95-341e-4733-af87-8c256ae0d9e6-web-config\") pod \"6c7ace95-341e-4733-af87-8c256ae0d9e6\" (UID: \"6c7ace95-341e-4733-af87-8c256ae0d9e6\") "
Dec 09 09:04:54 crc kubenswrapper[4786]: I1209 09:04:54.330945 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6c7ace95-341e-4733-af87-8c256ae0d9e6-thanos-prometheus-http-client-file\") pod \"6c7ace95-341e-4733-af87-8c256ae0d9e6\" (UID: \"6c7ace95-341e-4733-af87-8c256ae0d9e6\") "
Dec 09 09:04:54 crc kubenswrapper[4786]: I1209 09:04:54.331003 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6c7ace95-341e-4733-af87-8c256ae0d9e6-config-out\") pod \"6c7ace95-341e-4733-af87-8c256ae0d9e6\" (UID: \"6c7ace95-341e-4733-af87-8c256ae0d9e6\") "
Dec 09 09:04:54 crc kubenswrapper[4786]: I1209 09:04:54.331110 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6c7ace95-341e-4733-af87-8c256ae0d9e6-config\") pod \"6c7ace95-341e-4733-af87-8c256ae0d9e6\" (UID: \"6c7ace95-341e-4733-af87-8c256ae0d9e6\") "
Dec 09 09:04:54 crc kubenswrapper[4786]: I1209 09:04:54.334954 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c7ace95-341e-4733-af87-8c256ae0d9e6-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "6c7ace95-341e-4733-af87-8c256ae0d9e6" (UID: "6c7ace95-341e-4733-af87-8c256ae0d9e6"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 09:04:54 crc kubenswrapper[4786]: I1209 09:04:54.351503 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c7ace95-341e-4733-af87-8c256ae0d9e6-config" (OuterVolumeSpecName: "config") pod "6c7ace95-341e-4733-af87-8c256ae0d9e6" (UID: "6c7ace95-341e-4733-af87-8c256ae0d9e6"). InnerVolumeSpecName "config".
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:04:54 crc kubenswrapper[4786]: I1209 09:04:54.353564 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c7ace95-341e-4733-af87-8c256ae0d9e6-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "6c7ace95-341e-4733-af87-8c256ae0d9e6" (UID: "6c7ace95-341e-4733-af87-8c256ae0d9e6"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:04:54 crc kubenswrapper[4786]: I1209 09:04:54.365461 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c7ace95-341e-4733-af87-8c256ae0d9e6-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "6c7ace95-341e-4733-af87-8c256ae0d9e6" (UID: "6c7ace95-341e-4733-af87-8c256ae0d9e6"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:04:54 crc kubenswrapper[4786]: I1209 09:04:54.366966 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c7ace95-341e-4733-af87-8c256ae0d9e6-kube-api-access-p8vks" (OuterVolumeSpecName: "kube-api-access-p8vks") pod "6c7ace95-341e-4733-af87-8c256ae0d9e6" (UID: "6c7ace95-341e-4733-af87-8c256ae0d9e6"). InnerVolumeSpecName "kube-api-access-p8vks". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:04:54 crc kubenswrapper[4786]: I1209 09:04:54.399800 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c7ace95-341e-4733-af87-8c256ae0d9e6-config-out" (OuterVolumeSpecName: "config-out") pod "6c7ace95-341e-4733-af87-8c256ae0d9e6" (UID: "6c7ace95-341e-4733-af87-8c256ae0d9e6"). InnerVolumeSpecName "config-out". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:04:54 crc kubenswrapper[4786]: I1209 09:04:54.415603 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-28b31f81-4b46-4930-847d-98cf3cf77e89" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "6c7ace95-341e-4733-af87-8c256ae0d9e6" (UID: "6c7ace95-341e-4733-af87-8c256ae0d9e6"). InnerVolumeSpecName "pvc-28b31f81-4b46-4930-847d-98cf3cf77e89". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 09 09:04:54 crc kubenswrapper[4786]: I1209 09:04:54.433864 4786 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/6c7ace95-341e-4733-af87-8c256ae0d9e6-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Dec 09 09:04:54 crc kubenswrapper[4786]: I1209 09:04:54.433917 4786 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6c7ace95-341e-4733-af87-8c256ae0d9e6-config-out\") on node \"crc\" DevicePath \"\"" Dec 09 09:04:54 crc kubenswrapper[4786]: I1209 09:04:54.433928 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/6c7ace95-341e-4733-af87-8c256ae0d9e6-config\") on node \"crc\" DevicePath \"\"" Dec 09 09:04:54 crc kubenswrapper[4786]: I1209 09:04:54.433937 4786 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6c7ace95-341e-4733-af87-8c256ae0d9e6-tls-assets\") on node \"crc\" DevicePath \"\"" Dec 09 09:04:54 crc kubenswrapper[4786]: I1209 09:04:54.433946 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8vks\" (UniqueName: \"kubernetes.io/projected/6c7ace95-341e-4733-af87-8c256ae0d9e6-kube-api-access-p8vks\") on node \"crc\" DevicePath \"\"" Dec 09 09:04:54 crc kubenswrapper[4786]: I1209 09:04:54.433956 4786 reconciler_common.go:293] 
"Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6c7ace95-341e-4733-af87-8c256ae0d9e6-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Dec 09 09:04:54 crc kubenswrapper[4786]: I1209 09:04:54.434059 4786 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-28b31f81-4b46-4930-847d-98cf3cf77e89\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-28b31f81-4b46-4930-847d-98cf3cf77e89\") on node \"crc\" " Dec 09 09:04:54 crc kubenswrapper[4786]: I1209 09:04:54.467723 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c7ace95-341e-4733-af87-8c256ae0d9e6-web-config" (OuterVolumeSpecName: "web-config") pod "6c7ace95-341e-4733-af87-8c256ae0d9e6" (UID: "6c7ace95-341e-4733-af87-8c256ae0d9e6"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:04:54 crc kubenswrapper[4786]: I1209 09:04:54.491779 4786 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Dec 09 09:04:54 crc kubenswrapper[4786]: I1209 09:04:54.492019 4786 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-28b31f81-4b46-4930-847d-98cf3cf77e89" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-28b31f81-4b46-4930-847d-98cf3cf77e89") on node "crc" Dec 09 09:04:54 crc kubenswrapper[4786]: I1209 09:04:54.536629 4786 reconciler_common.go:293] "Volume detached for volume \"pvc-28b31f81-4b46-4930-847d-98cf3cf77e89\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-28b31f81-4b46-4930-847d-98cf3cf77e89\") on node \"crc\" DevicePath \"\"" Dec 09 09:04:54 crc kubenswrapper[4786]: I1209 09:04:54.536727 4786 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6c7ace95-341e-4733-af87-8c256ae0d9e6-web-config\") on node \"crc\" DevicePath \"\"" Dec 09 09:04:54 crc kubenswrapper[4786]: I1209 09:04:54.984260 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"6c7ace95-341e-4733-af87-8c256ae0d9e6","Type":"ContainerDied","Data":"92391857df5eb37fb4b3103576c043075717207ae822fd33e997589e60e8e055"} Dec 09 09:04:54 crc kubenswrapper[4786]: I1209 09:04:54.984335 4786 scope.go:117] "RemoveContainer" containerID="35269a2b95415aa9bd906f17685722e9132b9949ef9b0a71eba0e666d2b32040" Dec 09 09:04:54 crc kubenswrapper[4786]: I1209 09:04:54.984334 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 09 09:04:54 crc kubenswrapper[4786]: I1209 09:04:54.987535 4786 generic.go:334] "Generic (PLEG): container finished" podID="e4ab18c7-d1f2-4819-a35d-4e268535e616" containerID="30f6998317760e7b163f42a1f80d6446132f2ea5dea186095670273af5b7770d" exitCode=0 Dec 09 09:04:54 crc kubenswrapper[4786]: I1209 09:04:54.987601 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-4pxhc" event={"ID":"e4ab18c7-d1f2-4819-a35d-4e268535e616","Type":"ContainerDied","Data":"30f6998317760e7b163f42a1f80d6446132f2ea5dea186095670273af5b7770d"} Dec 09 09:04:54 crc kubenswrapper[4786]: I1209 09:04:54.988002 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-4pxhc" event={"ID":"e4ab18c7-d1f2-4819-a35d-4e268535e616","Type":"ContainerStarted","Data":"7d43e7c2661dc6c59493c1eed60b5748898c24822a1a55dfea22dad5968f9cfd"} Dec 09 09:04:54 crc kubenswrapper[4786]: I1209 09:04:54.990052 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 09:04:54 crc kubenswrapper[4786]: I1209 09:04:54.990318 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 09:04:54 crc kubenswrapper[4786]: I1209 09:04:54.990526 4786 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" Dec 09 09:04:54 crc kubenswrapper[4786]: I1209 09:04:54.991378 4786 kuberuntime_manager.go:1027] 
"Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4cca8363f81421e3823d923dacff94c757cc32e0b6c3dae834013b3d2653acca"} pod="openshift-machine-config-operator/machine-config-daemon-86k5n" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 09:04:54 crc kubenswrapper[4786]: I1209 09:04:54.991556 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" containerID="cri-o://4cca8363f81421e3823d923dacff94c757cc32e0b6c3dae834013b3d2653acca" gracePeriod=600 Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.080825 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.089829 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.095410 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.107396 4786 scope.go:117] "RemoveContainer" containerID="1beada5bd002c662c115458b43b270b4583f2c75223b3101c5a07fd95d24a37c" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.165110 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 09 09:04:55 crc kubenswrapper[4786]: E1209 09:04:55.168968 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c7ace95-341e-4733-af87-8c256ae0d9e6" containerName="prometheus" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.168987 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c7ace95-341e-4733-af87-8c256ae0d9e6" containerName="prometheus" Dec 09 09:04:55 crc kubenswrapper[4786]: E1209 
09:04:55.169018 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c7ace95-341e-4733-af87-8c256ae0d9e6" containerName="config-reloader" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.169027 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c7ace95-341e-4733-af87-8c256ae0d9e6" containerName="config-reloader" Dec 09 09:04:55 crc kubenswrapper[4786]: E1209 09:04:55.169041 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c7ace95-341e-4733-af87-8c256ae0d9e6" containerName="init-config-reloader" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.169049 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c7ace95-341e-4733-af87-8c256ae0d9e6" containerName="init-config-reloader" Dec 09 09:04:55 crc kubenswrapper[4786]: E1209 09:04:55.169107 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c7ace95-341e-4733-af87-8c256ae0d9e6" containerName="thanos-sidecar" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.169115 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c7ace95-341e-4733-af87-8c256ae0d9e6" containerName="thanos-sidecar" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.169356 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c7ace95-341e-4733-af87-8c256ae0d9e6" containerName="config-reloader" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.169388 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c7ace95-341e-4733-af87-8c256ae0d9e6" containerName="prometheus" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.169445 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c7ace95-341e-4733-af87-8c256ae0d9e6" containerName="thanos-sidecar" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.193757 4786 scope.go:117] "RemoveContainer" containerID="28d67d7a43536dae7361dcd0c49349bdda9667f27b048b12ba5b6e34640537de" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.195719 
4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.206462 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.207187 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.207576 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.207888 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.208110 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-7szbp" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.211924 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.215761 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.235894 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c7ace95-341e-4733-af87-8c256ae0d9e6" path="/var/lib/kubelet/pods/6c7ace95-341e-4733-af87-8c256ae0d9e6/volumes" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.271002 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.283248 4786 scope.go:117] "RemoveContainer" 
containerID="2054da1a162171dcb6211e2d02c2495d39c9f333dabfa9adf2ca237eb1bb505e" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.371767 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/374a9777-590c-4ba3-afd5-9287abcd72a0-config\") pod \"prometheus-metric-storage-0\" (UID: \"374a9777-590c-4ba3-afd5-9287abcd72a0\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.371816 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/374a9777-590c-4ba3-afd5-9287abcd72a0-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"374a9777-590c-4ba3-afd5-9287abcd72a0\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.371878 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pprx5\" (UniqueName: \"kubernetes.io/projected/374a9777-590c-4ba3-afd5-9287abcd72a0-kube-api-access-pprx5\") pod \"prometheus-metric-storage-0\" (UID: \"374a9777-590c-4ba3-afd5-9287abcd72a0\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.371906 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/374a9777-590c-4ba3-afd5-9287abcd72a0-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"374a9777-590c-4ba3-afd5-9287abcd72a0\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.371942 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/374a9777-590c-4ba3-afd5-9287abcd72a0-tls-assets\") pod 
\"prometheus-metric-storage-0\" (UID: \"374a9777-590c-4ba3-afd5-9287abcd72a0\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.371975 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/374a9777-590c-4ba3-afd5-9287abcd72a0-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"374a9777-590c-4ba3-afd5-9287abcd72a0\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.372008 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/374a9777-590c-4ba3-afd5-9287abcd72a0-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"374a9777-590c-4ba3-afd5-9287abcd72a0\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.372028 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/374a9777-590c-4ba3-afd5-9287abcd72a0-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"374a9777-590c-4ba3-afd5-9287abcd72a0\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.372076 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-28b31f81-4b46-4930-847d-98cf3cf77e89\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-28b31f81-4b46-4930-847d-98cf3cf77e89\") pod \"prometheus-metric-storage-0\" (UID: \"374a9777-590c-4ba3-afd5-9287abcd72a0\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.372100 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/374a9777-590c-4ba3-afd5-9287abcd72a0-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"374a9777-590c-4ba3-afd5-9287abcd72a0\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.372138 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/374a9777-590c-4ba3-afd5-9287abcd72a0-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"374a9777-590c-4ba3-afd5-9287abcd72a0\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.473608 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/374a9777-590c-4ba3-afd5-9287abcd72a0-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"374a9777-590c-4ba3-afd5-9287abcd72a0\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.473696 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/374a9777-590c-4ba3-afd5-9287abcd72a0-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"374a9777-590c-4ba3-afd5-9287abcd72a0\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.473752 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/374a9777-590c-4ba3-afd5-9287abcd72a0-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"374a9777-590c-4ba3-afd5-9287abcd72a0\") " 
pod="openstack/prometheus-metric-storage-0" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.473782 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/374a9777-590c-4ba3-afd5-9287abcd72a0-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"374a9777-590c-4ba3-afd5-9287abcd72a0\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.473851 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-28b31f81-4b46-4930-847d-98cf3cf77e89\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-28b31f81-4b46-4930-847d-98cf3cf77e89\") pod \"prometheus-metric-storage-0\" (UID: \"374a9777-590c-4ba3-afd5-9287abcd72a0\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.473894 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/374a9777-590c-4ba3-afd5-9287abcd72a0-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"374a9777-590c-4ba3-afd5-9287abcd72a0\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.473931 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/374a9777-590c-4ba3-afd5-9287abcd72a0-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"374a9777-590c-4ba3-afd5-9287abcd72a0\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.473979 4786 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/secret/374a9777-590c-4ba3-afd5-9287abcd72a0-config\") pod \"prometheus-metric-storage-0\" (UID: \"374a9777-590c-4ba3-afd5-9287abcd72a0\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.474024 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/374a9777-590c-4ba3-afd5-9287abcd72a0-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"374a9777-590c-4ba3-afd5-9287abcd72a0\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.474088 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pprx5\" (UniqueName: \"kubernetes.io/projected/374a9777-590c-4ba3-afd5-9287abcd72a0-kube-api-access-pprx5\") pod \"prometheus-metric-storage-0\" (UID: \"374a9777-590c-4ba3-afd5-9287abcd72a0\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.474125 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/374a9777-590c-4ba3-afd5-9287abcd72a0-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"374a9777-590c-4ba3-afd5-9287abcd72a0\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.482114 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/374a9777-590c-4ba3-afd5-9287abcd72a0-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"374a9777-590c-4ba3-afd5-9287abcd72a0\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.483269 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/374a9777-590c-4ba3-afd5-9287abcd72a0-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"374a9777-590c-4ba3-afd5-9287abcd72a0\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.484100 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/374a9777-590c-4ba3-afd5-9287abcd72a0-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"374a9777-590c-4ba3-afd5-9287abcd72a0\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.488391 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/374a9777-590c-4ba3-afd5-9287abcd72a0-config\") pod \"prometheus-metric-storage-0\" (UID: \"374a9777-590c-4ba3-afd5-9287abcd72a0\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.490520 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/374a9777-590c-4ba3-afd5-9287abcd72a0-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"374a9777-590c-4ba3-afd5-9287abcd72a0\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.491069 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/374a9777-590c-4ba3-afd5-9287abcd72a0-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"374a9777-590c-4ba3-afd5-9287abcd72a0\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.491398 4786 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/374a9777-590c-4ba3-afd5-9287abcd72a0-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"374a9777-590c-4ba3-afd5-9287abcd72a0\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.492353 4786 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.492507 4786 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-28b31f81-4b46-4930-847d-98cf3cf77e89\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-28b31f81-4b46-4930-847d-98cf3cf77e89\") pod \"prometheus-metric-storage-0\" (UID: \"374a9777-590c-4ba3-afd5-9287abcd72a0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8cfbd567b829700af2d6ada5c2de407e7db840737c905f740b19ad0b115df38c/globalmount\"" pod="openstack/prometheus-metric-storage-0" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.513738 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/374a9777-590c-4ba3-afd5-9287abcd72a0-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"374a9777-590c-4ba3-afd5-9287abcd72a0\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.518893 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/374a9777-590c-4ba3-afd5-9287abcd72a0-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"374a9777-590c-4ba3-afd5-9287abcd72a0\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.525930 4786 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pprx5\" (UniqueName: \"kubernetes.io/projected/374a9777-590c-4ba3-afd5-9287abcd72a0-kube-api-access-pprx5\") pod \"prometheus-metric-storage-0\" (UID: \"374a9777-590c-4ba3-afd5-9287abcd72a0\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.532333 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-w76vw" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.630560 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-28b31f81-4b46-4930-847d-98cf3cf77e89\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-28b31f81-4b46-4930-847d-98cf3cf77e89\") pod \"prometheus-metric-storage-0\" (UID: \"374a9777-590c-4ba3-afd5-9287abcd72a0\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.678807 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfv4k\" (UniqueName: \"kubernetes.io/projected/fdb0b7af-fae9-45a5-924b-a3507c046c28-kube-api-access-zfv4k\") pod \"fdb0b7af-fae9-45a5-924b-a3507c046c28\" (UID: \"fdb0b7af-fae9-45a5-924b-a3507c046c28\") " Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.682897 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-ncqnq" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.691610 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdb0b7af-fae9-45a5-924b-a3507c046c28-kube-api-access-zfv4k" (OuterVolumeSpecName: "kube-api-access-zfv4k") pod "fdb0b7af-fae9-45a5-924b-a3507c046c28" (UID: "fdb0b7af-fae9-45a5-924b-a3507c046c28"). InnerVolumeSpecName "kube-api-access-zfv4k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.785985 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-229mk\" (UniqueName: \"kubernetes.io/projected/3d297c4a-7ce4-4c61-9e72-d4503808c184-kube-api-access-229mk\") pod \"3d297c4a-7ce4-4c61-9e72-d4503808c184\" (UID: \"3d297c4a-7ce4-4c61-9e72-d4503808c184\") " Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.786948 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfv4k\" (UniqueName: \"kubernetes.io/projected/fdb0b7af-fae9-45a5-924b-a3507c046c28-kube-api-access-zfv4k\") on node \"crc\" DevicePath \"\"" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.789513 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d297c4a-7ce4-4c61-9e72-d4503808c184-kube-api-access-229mk" (OuterVolumeSpecName: "kube-api-access-229mk") pod "3d297c4a-7ce4-4c61-9e72-d4503808c184" (UID: "3d297c4a-7ce4-4c61-9e72-d4503808c184"). InnerVolumeSpecName "kube-api-access-229mk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.830514 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.889528 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-229mk\" (UniqueName: \"kubernetes.io/projected/3d297c4a-7ce4-4c61-9e72-d4503808c184-kube-api-access-229mk\") on node \"crc\" DevicePath \"\"" Dec 09 09:04:55 crc kubenswrapper[4786]: I1209 09:04:55.994367 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ed655e06-206a-407f-8651-56e042d74cd1-etc-swift\") pod \"swift-storage-0\" (UID: \"ed655e06-206a-407f-8651-56e042d74cd1\") " pod="openstack/swift-storage-0" Dec 09 09:04:56 crc kubenswrapper[4786]: I1209 09:04:56.002074 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/ed655e06-206a-407f-8651-56e042d74cd1-etc-swift\") pod \"swift-storage-0\" (UID: \"ed655e06-206a-407f-8651-56e042d74cd1\") " pod="openstack/swift-storage-0" Dec 09 09:04:56 crc kubenswrapper[4786]: I1209 09:04:56.025685 4786 generic.go:334] "Generic (PLEG): container finished" podID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerID="4cca8363f81421e3823d923dacff94c757cc32e0b6c3dae834013b3d2653acca" exitCode=0 Dec 09 09:04:56 crc kubenswrapper[4786]: I1209 09:04:56.025760 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" event={"ID":"60c502d4-7f9e-4d39-a197-fa70dc4a56d1","Type":"ContainerDied","Data":"4cca8363f81421e3823d923dacff94c757cc32e0b6c3dae834013b3d2653acca"} Dec 09 09:04:56 crc kubenswrapper[4786]: I1209 09:04:56.025800 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" event={"ID":"60c502d4-7f9e-4d39-a197-fa70dc4a56d1","Type":"ContainerStarted","Data":"6861e8a37988a9e19bbc4cef34d4d5e8b2d44819ea8091141fe025d3c9cd2383"} Dec 09 09:04:56 crc 
kubenswrapper[4786]: I1209 09:04:56.025825 4786 scope.go:117] "RemoveContainer" containerID="23158cb54ec78bf37e1faebd995cbf384dcd1c26c5f2777f244e3c9e75de0774" Dec 09 09:04:56 crc kubenswrapper[4786]: I1209 09:04:56.033526 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-w76vw" event={"ID":"fdb0b7af-fae9-45a5-924b-a3507c046c28","Type":"ContainerDied","Data":"fe32d8889b5ded4886eed4d42c078bf23a659e61121573ad72e6fedfbb356f7f"} Dec 09 09:04:56 crc kubenswrapper[4786]: I1209 09:04:56.033560 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe32d8889b5ded4886eed4d42c078bf23a659e61121573ad72e6fedfbb356f7f" Dec 09 09:04:56 crc kubenswrapper[4786]: I1209 09:04:56.033653 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-w76vw" Dec 09 09:04:56 crc kubenswrapper[4786]: I1209 09:04:56.049601 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-ncqnq" Dec 09 09:04:56 crc kubenswrapper[4786]: I1209 09:04:56.057569 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-ncqnq" event={"ID":"3d297c4a-7ce4-4c61-9e72-d4503808c184","Type":"ContainerDied","Data":"cc9a8e37c6c6c4185fd64fbc0e558752ef60ceedb83d31b3e368d56bf6be4286"} Dec 09 09:04:56 crc kubenswrapper[4786]: I1209 09:04:56.057628 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc9a8e37c6c6c4185fd64fbc0e558752ef60ceedb83d31b3e368d56bf6be4286" Dec 09 09:04:56 crc kubenswrapper[4786]: I1209 09:04:56.111903 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Dec 09 09:04:56 crc kubenswrapper[4786]: I1209 09:04:56.272007 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 09 09:04:56 crc kubenswrapper[4786]: I1209 09:04:56.460512 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-4pxhc" Dec 09 09:04:56 crc kubenswrapper[4786]: I1209 09:04:56.520439 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mzl56\" (UniqueName: \"kubernetes.io/projected/e4ab18c7-d1f2-4819-a35d-4e268535e616-kube-api-access-mzl56\") pod \"e4ab18c7-d1f2-4819-a35d-4e268535e616\" (UID: \"e4ab18c7-d1f2-4819-a35d-4e268535e616\") " Dec 09 09:04:56 crc kubenswrapper[4786]: I1209 09:04:56.525514 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="b0a91d0e-2d71-4fdc-8d68-953a12dc7f59" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: connect: connection refused" Dec 09 09:04:56 crc kubenswrapper[4786]: I1209 09:04:56.533634 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4ab18c7-d1f2-4819-a35d-4e268535e616-kube-api-access-mzl56" (OuterVolumeSpecName: "kube-api-access-mzl56") pod "e4ab18c7-d1f2-4819-a35d-4e268535e616" (UID: "e4ab18c7-d1f2-4819-a35d-4e268535e616"). InnerVolumeSpecName "kube-api-access-mzl56". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:04:56 crc kubenswrapper[4786]: I1209 09:04:56.622338 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mzl56\" (UniqueName: \"kubernetes.io/projected/e4ab18c7-d1f2-4819-a35d-4e268535e616-kube-api-access-mzl56\") on node \"crc\" DevicePath \"\"" Dec 09 09:04:56 crc kubenswrapper[4786]: I1209 09:04:56.935048 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="01efacfa-e002-4e0d-aa6b-91217baa22ca" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.107:5671: connect: connection refused" Dec 09 09:04:56 crc kubenswrapper[4786]: I1209 09:04:56.991917 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 09 09:04:57 crc kubenswrapper[4786]: I1209 09:04:57.170243 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ed655e06-206a-407f-8651-56e042d74cd1","Type":"ContainerStarted","Data":"d77ae017f2074cd3d541aa9b44798770d3814e032a67dbd78c28767ca019c476"} Dec 09 09:04:57 crc kubenswrapper[4786]: I1209 09:04:57.174063 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-4pxhc" event={"ID":"e4ab18c7-d1f2-4819-a35d-4e268535e616","Type":"ContainerDied","Data":"7d43e7c2661dc6c59493c1eed60b5748898c24822a1a55dfea22dad5968f9cfd"} Dec 09 09:04:57 crc kubenswrapper[4786]: I1209 09:04:57.174123 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d43e7c2661dc6c59493c1eed60b5748898c24822a1a55dfea22dad5968f9cfd" Dec 09 09:04:57 crc kubenswrapper[4786]: I1209 09:04:57.174153 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-create-4pxhc" Dec 09 09:04:57 crc kubenswrapper[4786]: I1209 09:04:57.324114 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"374a9777-590c-4ba3-afd5-9287abcd72a0","Type":"ContainerStarted","Data":"5a64d1b61d664e2c57b77efa93f64c49d750de3503c6faf75c75d7bb480e4f96"} Dec 09 09:04:57 crc kubenswrapper[4786]: I1209 09:04:57.471880 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-notifications-server-0" podUID="6c5ae8f0-bfa8-4fe2-81c3-289021674179" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.108:5671: connect: connection refused" Dec 09 09:04:59 crc kubenswrapper[4786]: I1209 09:04:59.270692 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ed655e06-206a-407f-8651-56e042d74cd1","Type":"ContainerStarted","Data":"fb251cf5e8f42f61048192c951dac00f5173445c6e3b0bd3c5c0691c05944281"} Dec 09 09:04:59 crc kubenswrapper[4786]: I1209 09:04:59.273194 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ed655e06-206a-407f-8651-56e042d74cd1","Type":"ContainerStarted","Data":"49a43ad7a66826e2b0ed825261c35d5d6e457b814a305b5a5dfa97945cb70f63"} Dec 09 09:05:00 crc kubenswrapper[4786]: I1209 09:05:00.284210 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"374a9777-590c-4ba3-afd5-9287abcd72a0","Type":"ContainerStarted","Data":"4674188bcc016e82af59b634d302e4ab1ff996e7550beb31e346de4712fb78f0"} Dec 09 09:05:00 crc kubenswrapper[4786]: I1209 09:05:00.290607 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ed655e06-206a-407f-8651-56e042d74cd1","Type":"ContainerStarted","Data":"073dddd41d4f7cfe7a1312a334370161b7bcaa83ebd49f691b63a11d2892d924"} Dec 09 09:05:00 crc kubenswrapper[4786]: I1209 09:05:00.290687 4786 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ed655e06-206a-407f-8651-56e042d74cd1","Type":"ContainerStarted","Data":"6f1aa9c214011b87134517d1fbbbc920be0c0092a587a9dd66ae846c848971dd"} Dec 09 09:05:00 crc kubenswrapper[4786]: I1209 09:05:00.526741 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7bd4-account-create-kl59b"] Dec 09 09:05:00 crc kubenswrapper[4786]: E1209 09:05:00.527118 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4ab18c7-d1f2-4819-a35d-4e268535e616" containerName="mariadb-database-create" Dec 09 09:05:00 crc kubenswrapper[4786]: I1209 09:05:00.527131 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4ab18c7-d1f2-4819-a35d-4e268535e616" containerName="mariadb-database-create" Dec 09 09:05:00 crc kubenswrapper[4786]: E1209 09:05:00.527146 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdb0b7af-fae9-45a5-924b-a3507c046c28" containerName="mariadb-database-create" Dec 09 09:05:00 crc kubenswrapper[4786]: I1209 09:05:00.527152 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdb0b7af-fae9-45a5-924b-a3507c046c28" containerName="mariadb-database-create" Dec 09 09:05:00 crc kubenswrapper[4786]: E1209 09:05:00.527171 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d297c4a-7ce4-4c61-9e72-d4503808c184" containerName="mariadb-database-create" Dec 09 09:05:00 crc kubenswrapper[4786]: I1209 09:05:00.527178 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d297c4a-7ce4-4c61-9e72-d4503808c184" containerName="mariadb-database-create" Dec 09 09:05:00 crc kubenswrapper[4786]: I1209 09:05:00.527355 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdb0b7af-fae9-45a5-924b-a3507c046c28" containerName="mariadb-database-create" Dec 09 09:05:00 crc kubenswrapper[4786]: I1209 09:05:00.527382 4786 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e4ab18c7-d1f2-4819-a35d-4e268535e616" containerName="mariadb-database-create" Dec 09 09:05:00 crc kubenswrapper[4786]: I1209 09:05:00.527396 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d297c4a-7ce4-4c61-9e72-d4503808c184" containerName="mariadb-database-create" Dec 09 09:05:00 crc kubenswrapper[4786]: I1209 09:05:00.528025 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7bd4-account-create-kl59b" Dec 09 09:05:00 crc kubenswrapper[4786]: I1209 09:05:00.530980 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Dec 09 09:05:00 crc kubenswrapper[4786]: I1209 09:05:00.571882 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7bd4-account-create-kl59b"] Dec 09 09:05:00 crc kubenswrapper[4786]: I1209 09:05:00.836296 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhvd5\" (UniqueName: \"kubernetes.io/projected/36ed95ac-bc8e-4195-b0a1-a66bb48df015-kube-api-access-fhvd5\") pod \"keystone-7bd4-account-create-kl59b\" (UID: \"36ed95ac-bc8e-4195-b0a1-a66bb48df015\") " pod="openstack/keystone-7bd4-account-create-kl59b" Dec 09 09:05:00 crc kubenswrapper[4786]: I1209 09:05:00.945154 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhvd5\" (UniqueName: \"kubernetes.io/projected/36ed95ac-bc8e-4195-b0a1-a66bb48df015-kube-api-access-fhvd5\") pod \"keystone-7bd4-account-create-kl59b\" (UID: \"36ed95ac-bc8e-4195-b0a1-a66bb48df015\") " pod="openstack/keystone-7bd4-account-create-kl59b" Dec 09 09:05:00 crc kubenswrapper[4786]: I1209 09:05:00.996442 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhvd5\" (UniqueName: \"kubernetes.io/projected/36ed95ac-bc8e-4195-b0a1-a66bb48df015-kube-api-access-fhvd5\") pod \"keystone-7bd4-account-create-kl59b\" (UID: 
\"36ed95ac-bc8e-4195-b0a1-a66bb48df015\") " pod="openstack/keystone-7bd4-account-create-kl59b" Dec 09 09:05:01 crc kubenswrapper[4786]: I1209 09:05:01.149057 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7bd4-account-create-kl59b" Dec 09 09:05:03 crc kubenswrapper[4786]: W1209 09:05:03.084492 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36ed95ac_bc8e_4195_b0a1_a66bb48df015.slice/crio-10a18c12a5ca68a8e9c6260e882e2df3b04f2e8428debd7223b5b93a02297d67 WatchSource:0}: Error finding container 10a18c12a5ca68a8e9c6260e882e2df3b04f2e8428debd7223b5b93a02297d67: Status 404 returned error can't find the container with id 10a18c12a5ca68a8e9c6260e882e2df3b04f2e8428debd7223b5b93a02297d67 Dec 09 09:05:03 crc kubenswrapper[4786]: I1209 09:05:03.086081 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7bd4-account-create-kl59b"] Dec 09 09:05:03 crc kubenswrapper[4786]: I1209 09:05:03.261901 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-c1d9-account-create-x7vwp"] Dec 09 09:05:03 crc kubenswrapper[4786]: I1209 09:05:03.358309 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-c1d9-account-create-x7vwp" Dec 09 09:05:03 crc kubenswrapper[4786]: I1209 09:05:03.368862 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-db-secret" Dec 09 09:05:03 crc kubenswrapper[4786]: I1209 09:05:03.372388 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-c1d9-account-create-x7vwp"] Dec 09 09:05:03 crc kubenswrapper[4786]: I1209 09:05:03.419870 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7bd4-account-create-kl59b" event={"ID":"36ed95ac-bc8e-4195-b0a1-a66bb48df015","Type":"ContainerStarted","Data":"10a18c12a5ca68a8e9c6260e882e2df3b04f2e8428debd7223b5b93a02297d67"} Dec 09 09:05:03 crc kubenswrapper[4786]: I1209 09:05:03.443863 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ed655e06-206a-407f-8651-56e042d74cd1","Type":"ContainerStarted","Data":"1a783cd9c4467bb2558a4a3e8d94efeac66a9fc1cc013a083f4ce8171580814a"} Dec 09 09:05:03 crc kubenswrapper[4786]: I1209 09:05:03.443953 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ed655e06-206a-407f-8651-56e042d74cd1","Type":"ContainerStarted","Data":"40ae7328eb562f279f27fa06ca1e0d99a9d94e57b0da94b2520a25273a57cea2"} Dec 09 09:05:03 crc kubenswrapper[4786]: I1209 09:05:03.464146 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqqd9\" (UniqueName: \"kubernetes.io/projected/a6654fc3-1c4f-49ea-9dc4-ca16f842bead-kube-api-access-kqqd9\") pod \"watcher-c1d9-account-create-x7vwp\" (UID: \"a6654fc3-1c4f-49ea-9dc4-ca16f842bead\") " pod="openstack/watcher-c1d9-account-create-x7vwp" Dec 09 09:05:03 crc kubenswrapper[4786]: I1209 09:05:03.565904 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqqd9\" (UniqueName: 
\"kubernetes.io/projected/a6654fc3-1c4f-49ea-9dc4-ca16f842bead-kube-api-access-kqqd9\") pod \"watcher-c1d9-account-create-x7vwp\" (UID: \"a6654fc3-1c4f-49ea-9dc4-ca16f842bead\") " pod="openstack/watcher-c1d9-account-create-x7vwp" Dec 09 09:05:03 crc kubenswrapper[4786]: I1209 09:05:03.595264 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqqd9\" (UniqueName: \"kubernetes.io/projected/a6654fc3-1c4f-49ea-9dc4-ca16f842bead-kube-api-access-kqqd9\") pod \"watcher-c1d9-account-create-x7vwp\" (UID: \"a6654fc3-1c4f-49ea-9dc4-ca16f842bead\") " pod="openstack/watcher-c1d9-account-create-x7vwp" Dec 09 09:05:03 crc kubenswrapper[4786]: I1209 09:05:03.720878 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-c1d9-account-create-x7vwp" Dec 09 09:05:04 crc kubenswrapper[4786]: I1209 09:05:04.248051 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-c1d9-account-create-x7vwp"] Dec 09 09:05:04 crc kubenswrapper[4786]: I1209 09:05:04.459226 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-c1d9-account-create-x7vwp" event={"ID":"a6654fc3-1c4f-49ea-9dc4-ca16f842bead","Type":"ContainerStarted","Data":"6fb596f8627a2c2d4fe871796ee91ee4f9e3b062f354291fcd015f66bd0b0bdd"} Dec 09 09:05:04 crc kubenswrapper[4786]: I1209 09:05:04.459314 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-c1d9-account-create-x7vwp" event={"ID":"a6654fc3-1c4f-49ea-9dc4-ca16f842bead","Type":"ContainerStarted","Data":"90e2daddbf3eac9f72466e09cc592298cce5cb4a428c031d433412ad1a85b9a7"} Dec 09 09:05:04 crc kubenswrapper[4786]: I1209 09:05:04.466145 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ed655e06-206a-407f-8651-56e042d74cd1","Type":"ContainerStarted","Data":"3a38e3efd254a769c7a7468a18b30b0ba17d2dbf2505a9e9863c4eada3a11e27"} Dec 09 09:05:04 crc kubenswrapper[4786]: I1209 09:05:04.466204 
4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ed655e06-206a-407f-8651-56e042d74cd1","Type":"ContainerStarted","Data":"b6e57999b9c3a5e8810dbda0581002e060664f69c8bec581eda55b914126aa89"} Dec 09 09:05:04 crc kubenswrapper[4786]: I1209 09:05:04.468535 4786 generic.go:334] "Generic (PLEG): container finished" podID="36ed95ac-bc8e-4195-b0a1-a66bb48df015" containerID="97222087f56ad7df9af83f27397db11b72a8fae0f0474b3458204135a25c40f0" exitCode=0 Dec 09 09:05:04 crc kubenswrapper[4786]: I1209 09:05:04.468607 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7bd4-account-create-kl59b" event={"ID":"36ed95ac-bc8e-4195-b0a1-a66bb48df015","Type":"ContainerDied","Data":"97222087f56ad7df9af83f27397db11b72a8fae0f0474b3458204135a25c40f0"} Dec 09 09:05:04 crc kubenswrapper[4786]: I1209 09:05:04.494853 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-c1d9-account-create-x7vwp" podStartSLOduration=1.494823 podStartE2EDuration="1.494823s" podCreationTimestamp="2025-12-09 09:05:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:05:04.48434644 +0000 UTC m=+1270.367967676" watchObservedRunningTime="2025-12-09 09:05:04.494823 +0000 UTC m=+1270.378444226" Dec 09 09:05:05 crc kubenswrapper[4786]: I1209 09:05:05.783059 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ed655e06-206a-407f-8651-56e042d74cd1","Type":"ContainerStarted","Data":"4fee35fc110f5e986221b024c16f4bf064a33b818022b54519eb4130b0ee36e3"} Dec 09 09:05:05 crc kubenswrapper[4786]: I1209 09:05:05.784195 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ed655e06-206a-407f-8651-56e042d74cd1","Type":"ContainerStarted","Data":"41181e86655f197da262a8e826feceefbfeecbee9cfe537794687b3747a954b3"} Dec 09 09:05:05 
crc kubenswrapper[4786]: I1209 09:05:05.784219 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ed655e06-206a-407f-8651-56e042d74cd1","Type":"ContainerStarted","Data":"41f36cf5a87e41d554b80a38b5cbbdde7935946d365230239866f35fff969c84"} Dec 09 09:05:05 crc kubenswrapper[4786]: I1209 09:05:05.805128 4786 generic.go:334] "Generic (PLEG): container finished" podID="a6654fc3-1c4f-49ea-9dc4-ca16f842bead" containerID="6fb596f8627a2c2d4fe871796ee91ee4f9e3b062f354291fcd015f66bd0b0bdd" exitCode=0 Dec 09 09:05:05 crc kubenswrapper[4786]: I1209 09:05:05.806190 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-c1d9-account-create-x7vwp" event={"ID":"a6654fc3-1c4f-49ea-9dc4-ca16f842bead","Type":"ContainerDied","Data":"6fb596f8627a2c2d4fe871796ee91ee4f9e3b062f354291fcd015f66bd0b0bdd"} Dec 09 09:05:06 crc kubenswrapper[4786]: I1209 09:05:06.272296 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7bd4-account-create-kl59b" Dec 09 09:05:06 crc kubenswrapper[4786]: I1209 09:05:06.324304 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhvd5\" (UniqueName: \"kubernetes.io/projected/36ed95ac-bc8e-4195-b0a1-a66bb48df015-kube-api-access-fhvd5\") pod \"36ed95ac-bc8e-4195-b0a1-a66bb48df015\" (UID: \"36ed95ac-bc8e-4195-b0a1-a66bb48df015\") " Dec 09 09:05:06 crc kubenswrapper[4786]: I1209 09:05:06.333588 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36ed95ac-bc8e-4195-b0a1-a66bb48df015-kube-api-access-fhvd5" (OuterVolumeSpecName: "kube-api-access-fhvd5") pod "36ed95ac-bc8e-4195-b0a1-a66bb48df015" (UID: "36ed95ac-bc8e-4195-b0a1-a66bb48df015"). InnerVolumeSpecName "kube-api-access-fhvd5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:05:06 crc kubenswrapper[4786]: I1209 09:05:06.426788 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhvd5\" (UniqueName: \"kubernetes.io/projected/36ed95ac-bc8e-4195-b0a1-a66bb48df015-kube-api-access-fhvd5\") on node \"crc\" DevicePath \"\"" Dec 09 09:05:06 crc kubenswrapper[4786]: I1209 09:05:06.523761 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 09 09:05:06 crc kubenswrapper[4786]: I1209 09:05:06.921214 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ed655e06-206a-407f-8651-56e042d74cd1","Type":"ContainerStarted","Data":"a5ee4628cf349dadb2954090d4532e878b427641d76b83a80c9c42085396bbce"} Dec 09 09:05:06 crc kubenswrapper[4786]: I1209 09:05:06.921268 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ed655e06-206a-407f-8651-56e042d74cd1","Type":"ContainerStarted","Data":"067b44f3a8a1fadcb68dece9be59ebd7a0e4435c87a2e1d3fce68d5bb6627de7"} Dec 09 09:05:06 crc kubenswrapper[4786]: I1209 09:05:06.923405 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7bd4-account-create-kl59b" Dec 09 09:05:06 crc kubenswrapper[4786]: I1209 09:05:06.924258 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7bd4-account-create-kl59b" event={"ID":"36ed95ac-bc8e-4195-b0a1-a66bb48df015","Type":"ContainerDied","Data":"10a18c12a5ca68a8e9c6260e882e2df3b04f2e8428debd7223b5b93a02297d67"} Dec 09 09:05:06 crc kubenswrapper[4786]: I1209 09:05:06.924396 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10a18c12a5ca68a8e9c6260e882e2df3b04f2e8428debd7223b5b93a02297d67" Dec 09 09:05:06 crc kubenswrapper[4786]: I1209 09:05:06.933849 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="01efacfa-e002-4e0d-aa6b-91217baa22ca" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.107:5671: connect: connection refused" Dec 09 09:05:07 crc kubenswrapper[4786]: I1209 09:05:07.129416 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-84bdh"] Dec 09 09:05:07 crc kubenswrapper[4786]: E1209 09:05:07.130165 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36ed95ac-bc8e-4195-b0a1-a66bb48df015" containerName="mariadb-account-create" Dec 09 09:05:07 crc kubenswrapper[4786]: I1209 09:05:07.130188 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="36ed95ac-bc8e-4195-b0a1-a66bb48df015" containerName="mariadb-account-create" Dec 09 09:05:07 crc kubenswrapper[4786]: I1209 09:05:07.130364 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="36ed95ac-bc8e-4195-b0a1-a66bb48df015" containerName="mariadb-account-create" Dec 09 09:05:07 crc kubenswrapper[4786]: I1209 09:05:07.131028 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-84bdh" Dec 09 09:05:07 crc kubenswrapper[4786]: I1209 09:05:07.173208 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-84bdh"] Dec 09 09:05:07 crc kubenswrapper[4786]: I1209 09:05:07.188353 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8v84m\" (UniqueName: \"kubernetes.io/projected/19dcdb84-6784-4719-ba48-a8b0ec1f9af2-kube-api-access-8v84m\") pod \"barbican-db-create-84bdh\" (UID: \"19dcdb84-6784-4719-ba48-a8b0ec1f9af2\") " pod="openstack/barbican-db-create-84bdh" Dec 09 09:05:07 crc kubenswrapper[4786]: I1209 09:05:07.229568 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-4hdd8"] Dec 09 09:05:07 crc kubenswrapper[4786]: I1209 09:05:07.230854 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4hdd8" Dec 09 09:05:07 crc kubenswrapper[4786]: I1209 09:05:07.264333 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-4hdd8"] Dec 09 09:05:07 crc kubenswrapper[4786]: I1209 09:05:07.293185 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8v84m\" (UniqueName: \"kubernetes.io/projected/19dcdb84-6784-4719-ba48-a8b0ec1f9af2-kube-api-access-8v84m\") pod \"barbican-db-create-84bdh\" (UID: \"19dcdb84-6784-4719-ba48-a8b0ec1f9af2\") " pod="openstack/barbican-db-create-84bdh" Dec 09 09:05:07 crc kubenswrapper[4786]: I1209 09:05:07.293398 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m9qg\" (UniqueName: \"kubernetes.io/projected/80996405-8157-449e-b510-652bbe2f3fb7-kube-api-access-4m9qg\") pod \"cinder-db-create-4hdd8\" (UID: \"80996405-8157-449e-b510-652bbe2f3fb7\") " pod="openstack/cinder-db-create-4hdd8" Dec 09 09:05:07 crc kubenswrapper[4786]: I1209 09:05:07.331363 
4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8v84m\" (UniqueName: \"kubernetes.io/projected/19dcdb84-6784-4719-ba48-a8b0ec1f9af2-kube-api-access-8v84m\") pod \"barbican-db-create-84bdh\" (UID: \"19dcdb84-6784-4719-ba48-a8b0ec1f9af2\") " pod="openstack/barbican-db-create-84bdh" Dec 09 09:05:07 crc kubenswrapper[4786]: I1209 09:05:07.395388 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4m9qg\" (UniqueName: \"kubernetes.io/projected/80996405-8157-449e-b510-652bbe2f3fb7-kube-api-access-4m9qg\") pod \"cinder-db-create-4hdd8\" (UID: \"80996405-8157-449e-b510-652bbe2f3fb7\") " pod="openstack/cinder-db-create-4hdd8" Dec 09 09:05:07 crc kubenswrapper[4786]: I1209 09:05:07.422328 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4m9qg\" (UniqueName: \"kubernetes.io/projected/80996405-8157-449e-b510-652bbe2f3fb7-kube-api-access-4m9qg\") pod \"cinder-db-create-4hdd8\" (UID: \"80996405-8157-449e-b510-652bbe2f3fb7\") " pod="openstack/cinder-db-create-4hdd8" Dec 09 09:05:07 crc kubenswrapper[4786]: I1209 09:05:07.466318 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-c1d9-account-create-x7vwp" Dec 09 09:05:07 crc kubenswrapper[4786]: I1209 09:05:07.469581 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-notifications-server-0" podUID="6c5ae8f0-bfa8-4fe2-81c3-289021674179" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.108:5671: connect: connection refused" Dec 09 09:05:07 crc kubenswrapper[4786]: I1209 09:05:07.497327 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqqd9\" (UniqueName: \"kubernetes.io/projected/a6654fc3-1c4f-49ea-9dc4-ca16f842bead-kube-api-access-kqqd9\") pod \"a6654fc3-1c4f-49ea-9dc4-ca16f842bead\" (UID: \"a6654fc3-1c4f-49ea-9dc4-ca16f842bead\") " Dec 09 09:05:07 crc kubenswrapper[4786]: I1209 09:05:07.516626 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6654fc3-1c4f-49ea-9dc4-ca16f842bead-kube-api-access-kqqd9" (OuterVolumeSpecName: "kube-api-access-kqqd9") pod "a6654fc3-1c4f-49ea-9dc4-ca16f842bead" (UID: "a6654fc3-1c4f-49ea-9dc4-ca16f842bead"). InnerVolumeSpecName "kube-api-access-kqqd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:05:07 crc kubenswrapper[4786]: I1209 09:05:07.521843 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-84bdh" Dec 09 09:05:07 crc kubenswrapper[4786]: I1209 09:05:07.597072 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-4hdd8" Dec 09 09:05:07 crc kubenswrapper[4786]: I1209 09:05:07.599086 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqqd9\" (UniqueName: \"kubernetes.io/projected/a6654fc3-1c4f-49ea-9dc4-ca16f842bead-kube-api-access-kqqd9\") on node \"crc\" DevicePath \"\"" Dec 09 09:05:08 crc kubenswrapper[4786]: I1209 09:05:07.947056 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-c1d9-account-create-x7vwp" event={"ID":"a6654fc3-1c4f-49ea-9dc4-ca16f842bead","Type":"ContainerDied","Data":"90e2daddbf3eac9f72466e09cc592298cce5cb4a428c031d433412ad1a85b9a7"} Dec 09 09:05:08 crc kubenswrapper[4786]: I1209 09:05:07.947397 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90e2daddbf3eac9f72466e09cc592298cce5cb4a428c031d433412ad1a85b9a7" Dec 09 09:05:08 crc kubenswrapper[4786]: I1209 09:05:07.947501 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-c1d9-account-create-x7vwp" Dec 09 09:05:08 crc kubenswrapper[4786]: I1209 09:05:07.979303 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ed655e06-206a-407f-8651-56e042d74cd1","Type":"ContainerStarted","Data":"1140391e5d6c65af428c5059bc85ff6b7cf7c7a38d4f3e696a07de37f400b9e2"} Dec 09 09:05:08 crc kubenswrapper[4786]: I1209 09:05:07.979358 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"ed655e06-206a-407f-8651-56e042d74cd1","Type":"ContainerStarted","Data":"c583affca255dde0c0fba42d234f33f04a3e6119bb2849b9fd05efd8cad6d468"} Dec 09 09:05:08 crc kubenswrapper[4786]: I1209 09:05:08.040922 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=38.368712694 podStartE2EDuration="46.040892918s" podCreationTimestamp="2025-12-09 09:04:22 +0000 UTC" 
firstStartedPulling="2025-12-09 09:04:57.08760023 +0000 UTC m=+1262.971221456" lastFinishedPulling="2025-12-09 09:05:04.759780454 +0000 UTC m=+1270.643401680" observedRunningTime="2025-12-09 09:05:08.033212178 +0000 UTC m=+1273.916833404" watchObservedRunningTime="2025-12-09 09:05:08.040892918 +0000 UTC m=+1273.924514164" Dec 09 09:05:08 crc kubenswrapper[4786]: I1209 09:05:08.527596 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bd8458cdf-vnbxl"] Dec 09 09:05:08 crc kubenswrapper[4786]: E1209 09:05:08.530031 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6654fc3-1c4f-49ea-9dc4-ca16f842bead" containerName="mariadb-account-create" Dec 09 09:05:08 crc kubenswrapper[4786]: I1209 09:05:08.530050 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6654fc3-1c4f-49ea-9dc4-ca16f842bead" containerName="mariadb-account-create" Dec 09 09:05:08 crc kubenswrapper[4786]: I1209 09:05:08.530235 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6654fc3-1c4f-49ea-9dc4-ca16f842bead" containerName="mariadb-account-create" Dec 09 09:05:08 crc kubenswrapper[4786]: I1209 09:05:08.531287 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bd8458cdf-vnbxl" Dec 09 09:05:08 crc kubenswrapper[4786]: I1209 09:05:08.533732 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Dec 09 09:05:08 crc kubenswrapper[4786]: I1209 09:05:08.558183 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bd8458cdf-vnbxl"] Dec 09 09:05:08 crc kubenswrapper[4786]: I1209 09:05:08.715905 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9dbbbcd2-908b-424f-b7e0-36ae5db9f92c-dns-swift-storage-0\") pod \"dnsmasq-dns-5bd8458cdf-vnbxl\" (UID: \"9dbbbcd2-908b-424f-b7e0-36ae5db9f92c\") " pod="openstack/dnsmasq-dns-5bd8458cdf-vnbxl" Dec 09 09:05:08 crc kubenswrapper[4786]: I1209 09:05:08.716079 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwq25\" (UniqueName: \"kubernetes.io/projected/9dbbbcd2-908b-424f-b7e0-36ae5db9f92c-kube-api-access-jwq25\") pod \"dnsmasq-dns-5bd8458cdf-vnbxl\" (UID: \"9dbbbcd2-908b-424f-b7e0-36ae5db9f92c\") " pod="openstack/dnsmasq-dns-5bd8458cdf-vnbxl" Dec 09 09:05:08 crc kubenswrapper[4786]: I1209 09:05:08.716141 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dbbbcd2-908b-424f-b7e0-36ae5db9f92c-config\") pod \"dnsmasq-dns-5bd8458cdf-vnbxl\" (UID: \"9dbbbcd2-908b-424f-b7e0-36ae5db9f92c\") " pod="openstack/dnsmasq-dns-5bd8458cdf-vnbxl" Dec 09 09:05:08 crc kubenswrapper[4786]: I1209 09:05:08.716177 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9dbbbcd2-908b-424f-b7e0-36ae5db9f92c-ovsdbserver-sb\") pod \"dnsmasq-dns-5bd8458cdf-vnbxl\" (UID: \"9dbbbcd2-908b-424f-b7e0-36ae5db9f92c\") " 
pod="openstack/dnsmasq-dns-5bd8458cdf-vnbxl" Dec 09 09:05:08 crc kubenswrapper[4786]: I1209 09:05:08.716212 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9dbbbcd2-908b-424f-b7e0-36ae5db9f92c-dns-svc\") pod \"dnsmasq-dns-5bd8458cdf-vnbxl\" (UID: \"9dbbbcd2-908b-424f-b7e0-36ae5db9f92c\") " pod="openstack/dnsmasq-dns-5bd8458cdf-vnbxl" Dec 09 09:05:08 crc kubenswrapper[4786]: I1209 09:05:08.716245 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9dbbbcd2-908b-424f-b7e0-36ae5db9f92c-ovsdbserver-nb\") pod \"dnsmasq-dns-5bd8458cdf-vnbxl\" (UID: \"9dbbbcd2-908b-424f-b7e0-36ae5db9f92c\") " pod="openstack/dnsmasq-dns-5bd8458cdf-vnbxl" Dec 09 09:05:08 crc kubenswrapper[4786]: I1209 09:05:08.818262 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dbbbcd2-908b-424f-b7e0-36ae5db9f92c-config\") pod \"dnsmasq-dns-5bd8458cdf-vnbxl\" (UID: \"9dbbbcd2-908b-424f-b7e0-36ae5db9f92c\") " pod="openstack/dnsmasq-dns-5bd8458cdf-vnbxl" Dec 09 09:05:08 crc kubenswrapper[4786]: I1209 09:05:08.818325 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9dbbbcd2-908b-424f-b7e0-36ae5db9f92c-ovsdbserver-sb\") pod \"dnsmasq-dns-5bd8458cdf-vnbxl\" (UID: \"9dbbbcd2-908b-424f-b7e0-36ae5db9f92c\") " pod="openstack/dnsmasq-dns-5bd8458cdf-vnbxl" Dec 09 09:05:08 crc kubenswrapper[4786]: I1209 09:05:08.818357 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9dbbbcd2-908b-424f-b7e0-36ae5db9f92c-dns-svc\") pod \"dnsmasq-dns-5bd8458cdf-vnbxl\" (UID: \"9dbbbcd2-908b-424f-b7e0-36ae5db9f92c\") " pod="openstack/dnsmasq-dns-5bd8458cdf-vnbxl" Dec 09 09:05:08 
crc kubenswrapper[4786]: I1209 09:05:08.818381 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9dbbbcd2-908b-424f-b7e0-36ae5db9f92c-ovsdbserver-nb\") pod \"dnsmasq-dns-5bd8458cdf-vnbxl\" (UID: \"9dbbbcd2-908b-424f-b7e0-36ae5db9f92c\") " pod="openstack/dnsmasq-dns-5bd8458cdf-vnbxl" Dec 09 09:05:08 crc kubenswrapper[4786]: I1209 09:05:08.818443 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9dbbbcd2-908b-424f-b7e0-36ae5db9f92c-dns-swift-storage-0\") pod \"dnsmasq-dns-5bd8458cdf-vnbxl\" (UID: \"9dbbbcd2-908b-424f-b7e0-36ae5db9f92c\") " pod="openstack/dnsmasq-dns-5bd8458cdf-vnbxl" Dec 09 09:05:08 crc kubenswrapper[4786]: I1209 09:05:08.818553 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwq25\" (UniqueName: \"kubernetes.io/projected/9dbbbcd2-908b-424f-b7e0-36ae5db9f92c-kube-api-access-jwq25\") pod \"dnsmasq-dns-5bd8458cdf-vnbxl\" (UID: \"9dbbbcd2-908b-424f-b7e0-36ae5db9f92c\") " pod="openstack/dnsmasq-dns-5bd8458cdf-vnbxl" Dec 09 09:05:08 crc kubenswrapper[4786]: I1209 09:05:08.819257 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dbbbcd2-908b-424f-b7e0-36ae5db9f92c-config\") pod \"dnsmasq-dns-5bd8458cdf-vnbxl\" (UID: \"9dbbbcd2-908b-424f-b7e0-36ae5db9f92c\") " pod="openstack/dnsmasq-dns-5bd8458cdf-vnbxl" Dec 09 09:05:08 crc kubenswrapper[4786]: I1209 09:05:08.819590 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9dbbbcd2-908b-424f-b7e0-36ae5db9f92c-ovsdbserver-sb\") pod \"dnsmasq-dns-5bd8458cdf-vnbxl\" (UID: \"9dbbbcd2-908b-424f-b7e0-36ae5db9f92c\") " pod="openstack/dnsmasq-dns-5bd8458cdf-vnbxl" Dec 09 09:05:08 crc kubenswrapper[4786]: I1209 09:05:08.820057 4786 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9dbbbcd2-908b-424f-b7e0-36ae5db9f92c-dns-svc\") pod \"dnsmasq-dns-5bd8458cdf-vnbxl\" (UID: \"9dbbbcd2-908b-424f-b7e0-36ae5db9f92c\") " pod="openstack/dnsmasq-dns-5bd8458cdf-vnbxl" Dec 09 09:05:08 crc kubenswrapper[4786]: I1209 09:05:08.820300 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9dbbbcd2-908b-424f-b7e0-36ae5db9f92c-dns-swift-storage-0\") pod \"dnsmasq-dns-5bd8458cdf-vnbxl\" (UID: \"9dbbbcd2-908b-424f-b7e0-36ae5db9f92c\") " pod="openstack/dnsmasq-dns-5bd8458cdf-vnbxl" Dec 09 09:05:08 crc kubenswrapper[4786]: I1209 09:05:08.820656 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9dbbbcd2-908b-424f-b7e0-36ae5db9f92c-ovsdbserver-nb\") pod \"dnsmasq-dns-5bd8458cdf-vnbxl\" (UID: \"9dbbbcd2-908b-424f-b7e0-36ae5db9f92c\") " pod="openstack/dnsmasq-dns-5bd8458cdf-vnbxl" Dec 09 09:05:08 crc kubenswrapper[4786]: I1209 09:05:08.840101 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwq25\" (UniqueName: \"kubernetes.io/projected/9dbbbcd2-908b-424f-b7e0-36ae5db9f92c-kube-api-access-jwq25\") pod \"dnsmasq-dns-5bd8458cdf-vnbxl\" (UID: \"9dbbbcd2-908b-424f-b7e0-36ae5db9f92c\") " pod="openstack/dnsmasq-dns-5bd8458cdf-vnbxl" Dec 09 09:05:08 crc kubenswrapper[4786]: I1209 09:05:08.853348 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bd8458cdf-vnbxl" Dec 09 09:05:08 crc kubenswrapper[4786]: I1209 09:05:08.909790 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-84bdh"] Dec 09 09:05:08 crc kubenswrapper[4786]: W1209 09:05:08.934135 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19dcdb84_6784_4719_ba48_a8b0ec1f9af2.slice/crio-4a39a60be07273818e510b72c8797ea91bbbc714b028e144c15bc10a411f3db2 WatchSource:0}: Error finding container 4a39a60be07273818e510b72c8797ea91bbbc714b028e144c15bc10a411f3db2: Status 404 returned error can't find the container with id 4a39a60be07273818e510b72c8797ea91bbbc714b028e144c15bc10a411f3db2 Dec 09 09:05:09 crc kubenswrapper[4786]: I1209 09:05:09.028768 4786 generic.go:334] "Generic (PLEG): container finished" podID="374a9777-590c-4ba3-afd5-9287abcd72a0" containerID="4674188bcc016e82af59b634d302e4ab1ff996e7550beb31e346de4712fb78f0" exitCode=0 Dec 09 09:05:09 crc kubenswrapper[4786]: I1209 09:05:09.028875 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"374a9777-590c-4ba3-afd5-9287abcd72a0","Type":"ContainerDied","Data":"4674188bcc016e82af59b634d302e4ab1ff996e7550beb31e346de4712fb78f0"} Dec 09 09:05:09 crc kubenswrapper[4786]: I1209 09:05:09.041254 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-84bdh" event={"ID":"19dcdb84-6784-4719-ba48-a8b0ec1f9af2","Type":"ContainerStarted","Data":"4a39a60be07273818e510b72c8797ea91bbbc714b028e144c15bc10a411f3db2"} Dec 09 09:05:09 crc kubenswrapper[4786]: I1209 09:05:09.066184 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-4hdd8"] Dec 09 09:05:09 crc kubenswrapper[4786]: I1209 09:05:09.431581 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bd8458cdf-vnbxl"] Dec 09 09:05:10 crc 
kubenswrapper[4786]: I1209 09:05:10.053108 4786 generic.go:334] "Generic (PLEG): container finished" podID="9dbbbcd2-908b-424f-b7e0-36ae5db9f92c" containerID="e9d6d67964600b9ba3d57f4da1a778e11c32ed5aaff0351ef0cd44fb664649ad" exitCode=0 Dec 09 09:05:10 crc kubenswrapper[4786]: I1209 09:05:10.053217 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bd8458cdf-vnbxl" event={"ID":"9dbbbcd2-908b-424f-b7e0-36ae5db9f92c","Type":"ContainerDied","Data":"e9d6d67964600b9ba3d57f4da1a778e11c32ed5aaff0351ef0cd44fb664649ad"} Dec 09 09:05:10 crc kubenswrapper[4786]: I1209 09:05:10.053476 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bd8458cdf-vnbxl" event={"ID":"9dbbbcd2-908b-424f-b7e0-36ae5db9f92c","Type":"ContainerStarted","Data":"6cc33d4b2c934fa8e6d1f4833c7aad4dabb4699b8bbaa0e063b4d619c515c61e"} Dec 09 09:05:10 crc kubenswrapper[4786]: I1209 09:05:10.054995 4786 generic.go:334] "Generic (PLEG): container finished" podID="80996405-8157-449e-b510-652bbe2f3fb7" containerID="87bddfb1511d4583dcb2ca91339ed9dec648aa69b077d761fa9436c10bffe3a9" exitCode=0 Dec 09 09:05:10 crc kubenswrapper[4786]: I1209 09:05:10.055025 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4hdd8" event={"ID":"80996405-8157-449e-b510-652bbe2f3fb7","Type":"ContainerDied","Data":"87bddfb1511d4583dcb2ca91339ed9dec648aa69b077d761fa9436c10bffe3a9"} Dec 09 09:05:10 crc kubenswrapper[4786]: I1209 09:05:10.055059 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4hdd8" event={"ID":"80996405-8157-449e-b510-652bbe2f3fb7","Type":"ContainerStarted","Data":"9a04db12ca104a83798f90877e49394916de0c47ffb028bf9bebed213aea2250"} Dec 09 09:05:10 crc kubenswrapper[4786]: I1209 09:05:10.058321 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"374a9777-590c-4ba3-afd5-9287abcd72a0","Type":"ContainerStarted","Data":"bf539cb1568e0ba52ee652abbafe187288bc90b74658acbf805efbb7e9013167"} Dec 09 09:05:10 crc kubenswrapper[4786]: I1209 09:05:10.059924 4786 generic.go:334] "Generic (PLEG): container finished" podID="19dcdb84-6784-4719-ba48-a8b0ec1f9af2" containerID="67711bcaf37edb9c076df4e51de42f8d3c1bc3b83ffd81de6991808506e32e8f" exitCode=0 Dec 09 09:05:10 crc kubenswrapper[4786]: I1209 09:05:10.059971 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-84bdh" event={"ID":"19dcdb84-6784-4719-ba48-a8b0ec1f9af2","Type":"ContainerDied","Data":"67711bcaf37edb9c076df4e51de42f8d3c1bc3b83ffd81de6991808506e32e8f"} Dec 09 09:05:10 crc kubenswrapper[4786]: I1209 09:05:10.543052 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6af3-account-create-bx46s"] Dec 09 09:05:10 crc kubenswrapper[4786]: I1209 09:05:10.544792 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6af3-account-create-bx46s" Dec 09 09:05:10 crc kubenswrapper[4786]: I1209 09:05:10.547135 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Dec 09 09:05:10 crc kubenswrapper[4786]: I1209 09:05:10.554079 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6af3-account-create-bx46s"] Dec 09 09:05:10 crc kubenswrapper[4786]: I1209 09:05:10.554094 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj5fc\" (UniqueName: \"kubernetes.io/projected/a2c6015d-f922-417e-8b64-6d2544fd8d32-kube-api-access-mj5fc\") pod \"placement-6af3-account-create-bx46s\" (UID: \"a2c6015d-f922-417e-8b64-6d2544fd8d32\") " pod="openstack/placement-6af3-account-create-bx46s" Dec 09 09:05:10 crc kubenswrapper[4786]: I1209 09:05:10.655643 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-mj5fc\" (UniqueName: \"kubernetes.io/projected/a2c6015d-f922-417e-8b64-6d2544fd8d32-kube-api-access-mj5fc\") pod \"placement-6af3-account-create-bx46s\" (UID: \"a2c6015d-f922-417e-8b64-6d2544fd8d32\") " pod="openstack/placement-6af3-account-create-bx46s" Dec 09 09:05:10 crc kubenswrapper[4786]: I1209 09:05:10.675485 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mj5fc\" (UniqueName: \"kubernetes.io/projected/a2c6015d-f922-417e-8b64-6d2544fd8d32-kube-api-access-mj5fc\") pod \"placement-6af3-account-create-bx46s\" (UID: \"a2c6015d-f922-417e-8b64-6d2544fd8d32\") " pod="openstack/placement-6af3-account-create-bx46s" Dec 09 09:05:10 crc kubenswrapper[4786]: I1209 09:05:10.868832 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6af3-account-create-bx46s" Dec 09 09:05:11 crc kubenswrapper[4786]: I1209 09:05:11.080258 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bd8458cdf-vnbxl" event={"ID":"9dbbbcd2-908b-424f-b7e0-36ae5db9f92c","Type":"ContainerStarted","Data":"4d8633ed1b943d2e8543a48ffd3f0783b1cd8ad529d1cd89afb200d979505eb7"} Dec 09 09:05:11 crc kubenswrapper[4786]: I1209 09:05:11.080802 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bd8458cdf-vnbxl" Dec 09 09:05:11 crc kubenswrapper[4786]: I1209 09:05:11.283848 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bd8458cdf-vnbxl" podStartSLOduration=3.283814327 podStartE2EDuration="3.283814327s" podCreationTimestamp="2025-12-09 09:05:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:05:11.115961945 +0000 UTC m=+1276.999583191" watchObservedRunningTime="2025-12-09 09:05:11.283814327 +0000 UTC m=+1277.167435553" Dec 09 09:05:11 crc kubenswrapper[4786]: I1209 09:05:11.313221 
4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6af3-account-create-bx46s"] Dec 09 09:05:11 crc kubenswrapper[4786]: I1209 09:05:11.320147 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-lmgzp"] Dec 09 09:05:11 crc kubenswrapper[4786]: I1209 09:05:11.322124 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-lmgzp" Dec 09 09:05:11 crc kubenswrapper[4786]: I1209 09:05:11.328399 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-lmgzp"] Dec 09 09:05:11 crc kubenswrapper[4786]: I1209 09:05:11.334032 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 09 09:05:11 crc kubenswrapper[4786]: I1209 09:05:11.334308 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 09 09:05:11 crc kubenswrapper[4786]: I1209 09:05:11.334381 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 09 09:05:11 crc kubenswrapper[4786]: I1209 09:05:11.334980 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zrhs7" Dec 09 09:05:11 crc kubenswrapper[4786]: I1209 09:05:11.383948 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da3d31ea-e187-4e92-9553-10fcc78ce65c-combined-ca-bundle\") pod \"keystone-db-sync-lmgzp\" (UID: \"da3d31ea-e187-4e92-9553-10fcc78ce65c\") " pod="openstack/keystone-db-sync-lmgzp" Dec 09 09:05:11 crc kubenswrapper[4786]: I1209 09:05:11.384066 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm69z\" (UniqueName: \"kubernetes.io/projected/da3d31ea-e187-4e92-9553-10fcc78ce65c-kube-api-access-qm69z\") pod \"keystone-db-sync-lmgzp\" (UID: 
\"da3d31ea-e187-4e92-9553-10fcc78ce65c\") " pod="openstack/keystone-db-sync-lmgzp" Dec 09 09:05:11 crc kubenswrapper[4786]: I1209 09:05:11.384120 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da3d31ea-e187-4e92-9553-10fcc78ce65c-config-data\") pod \"keystone-db-sync-lmgzp\" (UID: \"da3d31ea-e187-4e92-9553-10fcc78ce65c\") " pod="openstack/keystone-db-sync-lmgzp" Dec 09 09:05:11 crc kubenswrapper[4786]: I1209 09:05:11.486777 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qm69z\" (UniqueName: \"kubernetes.io/projected/da3d31ea-e187-4e92-9553-10fcc78ce65c-kube-api-access-qm69z\") pod \"keystone-db-sync-lmgzp\" (UID: \"da3d31ea-e187-4e92-9553-10fcc78ce65c\") " pod="openstack/keystone-db-sync-lmgzp" Dec 09 09:05:11 crc kubenswrapper[4786]: I1209 09:05:11.493737 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da3d31ea-e187-4e92-9553-10fcc78ce65c-config-data\") pod \"keystone-db-sync-lmgzp\" (UID: \"da3d31ea-e187-4e92-9553-10fcc78ce65c\") " pod="openstack/keystone-db-sync-lmgzp" Dec 09 09:05:11 crc kubenswrapper[4786]: I1209 09:05:11.494151 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da3d31ea-e187-4e92-9553-10fcc78ce65c-combined-ca-bundle\") pod \"keystone-db-sync-lmgzp\" (UID: \"da3d31ea-e187-4e92-9553-10fcc78ce65c\") " pod="openstack/keystone-db-sync-lmgzp" Dec 09 09:05:11 crc kubenswrapper[4786]: I1209 09:05:11.510874 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qm69z\" (UniqueName: \"kubernetes.io/projected/da3d31ea-e187-4e92-9553-10fcc78ce65c-kube-api-access-qm69z\") pod \"keystone-db-sync-lmgzp\" (UID: \"da3d31ea-e187-4e92-9553-10fcc78ce65c\") " pod="openstack/keystone-db-sync-lmgzp" Dec 
09 09:05:11 crc kubenswrapper[4786]: I1209 09:05:11.589296 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da3d31ea-e187-4e92-9553-10fcc78ce65c-config-data\") pod \"keystone-db-sync-lmgzp\" (UID: \"da3d31ea-e187-4e92-9553-10fcc78ce65c\") " pod="openstack/keystone-db-sync-lmgzp" Dec 09 09:05:11 crc kubenswrapper[4786]: I1209 09:05:11.594505 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da3d31ea-e187-4e92-9553-10fcc78ce65c-combined-ca-bundle\") pod \"keystone-db-sync-lmgzp\" (UID: \"da3d31ea-e187-4e92-9553-10fcc78ce65c\") " pod="openstack/keystone-db-sync-lmgzp" Dec 09 09:05:11 crc kubenswrapper[4786]: I1209 09:05:11.716364 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-lmgzp" Dec 09 09:05:11 crc kubenswrapper[4786]: I1209 09:05:11.776642 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4hdd8" Dec 09 09:05:11 crc kubenswrapper[4786]: I1209 09:05:11.801146 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4m9qg\" (UniqueName: \"kubernetes.io/projected/80996405-8157-449e-b510-652bbe2f3fb7-kube-api-access-4m9qg\") pod \"80996405-8157-449e-b510-652bbe2f3fb7\" (UID: \"80996405-8157-449e-b510-652bbe2f3fb7\") " Dec 09 09:05:11 crc kubenswrapper[4786]: I1209 09:05:11.807816 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80996405-8157-449e-b510-652bbe2f3fb7-kube-api-access-4m9qg" (OuterVolumeSpecName: "kube-api-access-4m9qg") pod "80996405-8157-449e-b510-652bbe2f3fb7" (UID: "80996405-8157-449e-b510-652bbe2f3fb7"). InnerVolumeSpecName "kube-api-access-4m9qg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:05:11 crc kubenswrapper[4786]: I1209 09:05:11.904523 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4m9qg\" (UniqueName: \"kubernetes.io/projected/80996405-8157-449e-b510-652bbe2f3fb7-kube-api-access-4m9qg\") on node \"crc\" DevicePath \"\"" Dec 09 09:05:11 crc kubenswrapper[4786]: I1209 09:05:11.976001 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-84bdh" Dec 09 09:05:12 crc kubenswrapper[4786]: I1209 09:05:12.006450 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8v84m\" (UniqueName: \"kubernetes.io/projected/19dcdb84-6784-4719-ba48-a8b0ec1f9af2-kube-api-access-8v84m\") pod \"19dcdb84-6784-4719-ba48-a8b0ec1f9af2\" (UID: \"19dcdb84-6784-4719-ba48-a8b0ec1f9af2\") " Dec 09 09:05:12 crc kubenswrapper[4786]: I1209 09:05:12.087802 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19dcdb84-6784-4719-ba48-a8b0ec1f9af2-kube-api-access-8v84m" (OuterVolumeSpecName: "kube-api-access-8v84m") pod "19dcdb84-6784-4719-ba48-a8b0ec1f9af2" (UID: "19dcdb84-6784-4719-ba48-a8b0ec1f9af2"). InnerVolumeSpecName "kube-api-access-8v84m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:05:12 crc kubenswrapper[4786]: I1209 09:05:12.108356 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6af3-account-create-bx46s" event={"ID":"a2c6015d-f922-417e-8b64-6d2544fd8d32","Type":"ContainerStarted","Data":"c8a365f40c6a4f27f22d98566e95aa51f5968fc6687540db2d7b0e1e8f9a9902"} Dec 09 09:05:12 crc kubenswrapper[4786]: I1209 09:05:12.110521 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8v84m\" (UniqueName: \"kubernetes.io/projected/19dcdb84-6784-4719-ba48-a8b0ec1f9af2-kube-api-access-8v84m\") on node \"crc\" DevicePath \"\"" Dec 09 09:05:12 crc kubenswrapper[4786]: I1209 09:05:12.124127 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-4hdd8" event={"ID":"80996405-8157-449e-b510-652bbe2f3fb7","Type":"ContainerDied","Data":"9a04db12ca104a83798f90877e49394916de0c47ffb028bf9bebed213aea2250"} Dec 09 09:05:12 crc kubenswrapper[4786]: I1209 09:05:12.124191 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a04db12ca104a83798f90877e49394916de0c47ffb028bf9bebed213aea2250" Dec 09 09:05:12 crc kubenswrapper[4786]: I1209 09:05:12.124190 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-4hdd8" Dec 09 09:05:12 crc kubenswrapper[4786]: I1209 09:05:12.136849 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-84bdh" Dec 09 09:05:12 crc kubenswrapper[4786]: I1209 09:05:12.143374 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-84bdh" event={"ID":"19dcdb84-6784-4719-ba48-a8b0ec1f9af2","Type":"ContainerDied","Data":"4a39a60be07273818e510b72c8797ea91bbbc714b028e144c15bc10a411f3db2"} Dec 09 09:05:12 crc kubenswrapper[4786]: I1209 09:05:12.143470 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a39a60be07273818e510b72c8797ea91bbbc714b028e144c15bc10a411f3db2" Dec 09 09:05:12 crc kubenswrapper[4786]: I1209 09:05:12.585310 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-lmgzp"] Dec 09 09:05:13 crc kubenswrapper[4786]: I1209 09:05:13.150394 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"374a9777-590c-4ba3-afd5-9287abcd72a0","Type":"ContainerStarted","Data":"130033b0d8d8a0b705bc89a996c4d79473f4fe0f05973269c2cf5447598e005f"} Dec 09 09:05:13 crc kubenswrapper[4786]: I1209 09:05:13.150769 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"374a9777-590c-4ba3-afd5-9287abcd72a0","Type":"ContainerStarted","Data":"a8544eaa538426b59f7946b5b5b08e54a750a510d0ef38038a501f2725e6e6e4"} Dec 09 09:05:13 crc kubenswrapper[4786]: I1209 09:05:13.152347 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-lmgzp" event={"ID":"da3d31ea-e187-4e92-9553-10fcc78ce65c","Type":"ContainerStarted","Data":"289e841a02937e0847197a79715296c228499ce1fe1d94feaae87bcf10f338bc"} Dec 09 09:05:13 crc kubenswrapper[4786]: I1209 09:05:13.154378 4786 generic.go:334] "Generic (PLEG): container finished" podID="a2c6015d-f922-417e-8b64-6d2544fd8d32" containerID="4453fe889bd3b9ce911e0b36145312749b54e269a6f5f7685efbe3ecbafc23f5" exitCode=0 Dec 09 09:05:13 crc kubenswrapper[4786]: I1209 
09:05:13.154457 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6af3-account-create-bx46s" event={"ID":"a2c6015d-f922-417e-8b64-6d2544fd8d32","Type":"ContainerDied","Data":"4453fe889bd3b9ce911e0b36145312749b54e269a6f5f7685efbe3ecbafc23f5"} Dec 09 09:05:13 crc kubenswrapper[4786]: I1209 09:05:13.182195 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=18.182173866 podStartE2EDuration="18.182173866s" podCreationTimestamp="2025-12-09 09:04:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:05:13.178964507 +0000 UTC m=+1279.062585733" watchObservedRunningTime="2025-12-09 09:05:13.182173866 +0000 UTC m=+1279.065795102" Dec 09 09:05:15 crc kubenswrapper[4786]: I1209 09:05:15.831027 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Dec 09 09:05:16 crc kubenswrapper[4786]: I1209 09:05:16.944975 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:05:17 crc kubenswrapper[4786]: I1209 09:05:17.470664 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-notifications-server-0" Dec 09 09:05:18 crc kubenswrapper[4786]: I1209 09:05:18.154838 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6af3-account-create-bx46s" Dec 09 09:05:18 crc kubenswrapper[4786]: I1209 09:05:18.231882 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6af3-account-create-bx46s" event={"ID":"a2c6015d-f922-417e-8b64-6d2544fd8d32","Type":"ContainerDied","Data":"c8a365f40c6a4f27f22d98566e95aa51f5968fc6687540db2d7b0e1e8f9a9902"} Dec 09 09:05:18 crc kubenswrapper[4786]: I1209 09:05:18.231928 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8a365f40c6a4f27f22d98566e95aa51f5968fc6687540db2d7b0e1e8f9a9902" Dec 09 09:05:18 crc kubenswrapper[4786]: I1209 09:05:18.231995 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6af3-account-create-bx46s" Dec 09 09:05:18 crc kubenswrapper[4786]: I1209 09:05:18.318142 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mj5fc\" (UniqueName: \"kubernetes.io/projected/a2c6015d-f922-417e-8b64-6d2544fd8d32-kube-api-access-mj5fc\") pod \"a2c6015d-f922-417e-8b64-6d2544fd8d32\" (UID: \"a2c6015d-f922-417e-8b64-6d2544fd8d32\") " Dec 09 09:05:18 crc kubenswrapper[4786]: I1209 09:05:18.323081 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2c6015d-f922-417e-8b64-6d2544fd8d32-kube-api-access-mj5fc" (OuterVolumeSpecName: "kube-api-access-mj5fc") pod "a2c6015d-f922-417e-8b64-6d2544fd8d32" (UID: "a2c6015d-f922-417e-8b64-6d2544fd8d32"). InnerVolumeSpecName "kube-api-access-mj5fc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:05:18 crc kubenswrapper[4786]: I1209 09:05:18.421045 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mj5fc\" (UniqueName: \"kubernetes.io/projected/a2c6015d-f922-417e-8b64-6d2544fd8d32-kube-api-access-mj5fc\") on node \"crc\" DevicePath \"\"" Dec 09 09:05:18 crc kubenswrapper[4786]: I1209 09:05:18.856721 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5bd8458cdf-vnbxl" Dec 09 09:05:19 crc kubenswrapper[4786]: I1209 09:05:19.076732 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-775897b69-78frg"] Dec 09 09:05:19 crc kubenswrapper[4786]: I1209 09:05:19.077225 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-775897b69-78frg" podUID="d8cb6939-c479-4114-8191-73a18f5f733a" containerName="dnsmasq-dns" containerID="cri-o://e26fbebb10a95541bdbab40554e40b8d675a5da9c0225ad41a987510a846080d" gracePeriod=10 Dec 09 09:05:19 crc kubenswrapper[4786]: I1209 09:05:19.169689 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-54njt"] Dec 09 09:05:19 crc kubenswrapper[4786]: E1209 09:05:19.170441 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2c6015d-f922-417e-8b64-6d2544fd8d32" containerName="mariadb-account-create" Dec 09 09:05:19 crc kubenswrapper[4786]: I1209 09:05:19.170465 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2c6015d-f922-417e-8b64-6d2544fd8d32" containerName="mariadb-account-create" Dec 09 09:05:19 crc kubenswrapper[4786]: E1209 09:05:19.170477 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80996405-8157-449e-b510-652bbe2f3fb7" containerName="mariadb-database-create" Dec 09 09:05:19 crc kubenswrapper[4786]: I1209 09:05:19.170486 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="80996405-8157-449e-b510-652bbe2f3fb7" 
containerName="mariadb-database-create" Dec 09 09:05:19 crc kubenswrapper[4786]: E1209 09:05:19.170522 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19dcdb84-6784-4719-ba48-a8b0ec1f9af2" containerName="mariadb-database-create" Dec 09 09:05:19 crc kubenswrapper[4786]: I1209 09:05:19.170532 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="19dcdb84-6784-4719-ba48-a8b0ec1f9af2" containerName="mariadb-database-create" Dec 09 09:05:19 crc kubenswrapper[4786]: I1209 09:05:19.170860 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2c6015d-f922-417e-8b64-6d2544fd8d32" containerName="mariadb-account-create" Dec 09 09:05:19 crc kubenswrapper[4786]: I1209 09:05:19.170897 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="19dcdb84-6784-4719-ba48-a8b0ec1f9af2" containerName="mariadb-database-create" Dec 09 09:05:19 crc kubenswrapper[4786]: I1209 09:05:19.170913 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="80996405-8157-449e-b510-652bbe2f3fb7" containerName="mariadb-database-create" Dec 09 09:05:19 crc kubenswrapper[4786]: I1209 09:05:19.172227 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-54njt" Dec 09 09:05:19 crc kubenswrapper[4786]: I1209 09:05:19.276055 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-54njt"] Dec 09 09:05:19 crc kubenswrapper[4786]: I1209 09:05:19.331302 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-lmgzp" event={"ID":"da3d31ea-e187-4e92-9553-10fcc78ce65c","Type":"ContainerStarted","Data":"e16042c8da5be967c7d8fd868116ec939d2109a19e619a353ba8a6ae2a4d09e8"} Dec 09 09:05:19 crc kubenswrapper[4786]: I1209 09:05:19.362549 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6hv5\" (UniqueName: \"kubernetes.io/projected/816aabda-3e11-440c-89af-3cf36c86c997-kube-api-access-h6hv5\") pod \"glance-db-create-54njt\" (UID: \"816aabda-3e11-440c-89af-3cf36c86c997\") " pod="openstack/glance-db-create-54njt" Dec 09 09:05:19 crc kubenswrapper[4786]: I1209 09:05:19.404882 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-ndgjl"] Dec 09 09:05:19 crc kubenswrapper[4786]: I1209 09:05:19.406850 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-ndgjl" Dec 09 09:05:19 crc kubenswrapper[4786]: I1209 09:05:19.431109 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-sync-n2tp7"] Dec 09 09:05:19 crc kubenswrapper[4786]: I1209 09:05:19.433397 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-n2tp7" Dec 09 09:05:19 crc kubenswrapper[4786]: I1209 09:05:19.441843 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-w4qtx" Dec 09 09:05:19 crc kubenswrapper[4786]: I1209 09:05:19.442054 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-ndgjl"] Dec 09 09:05:19 crc kubenswrapper[4786]: I1209 09:05:19.442966 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-config-data" Dec 09 09:05:19 crc kubenswrapper[4786]: I1209 09:05:19.454416 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-n2tp7"] Dec 09 09:05:19 crc kubenswrapper[4786]: I1209 09:05:19.455612 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-lmgzp" podStartSLOduration=3.052923836 podStartE2EDuration="8.45558108s" podCreationTimestamp="2025-12-09 09:05:11 +0000 UTC" firstStartedPulling="2025-12-09 09:05:12.596862887 +0000 UTC m=+1278.480484113" lastFinishedPulling="2025-12-09 09:05:17.999520131 +0000 UTC m=+1283.883141357" observedRunningTime="2025-12-09 09:05:19.394793105 +0000 UTC m=+1285.278414341" watchObservedRunningTime="2025-12-09 09:05:19.45558108 +0000 UTC m=+1285.339202316" Dec 09 09:05:19 crc kubenswrapper[4786]: I1209 09:05:19.466377 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6hv5\" (UniqueName: \"kubernetes.io/projected/816aabda-3e11-440c-89af-3cf36c86c997-kube-api-access-h6hv5\") pod \"glance-db-create-54njt\" (UID: \"816aabda-3e11-440c-89af-3cf36c86c997\") " pod="openstack/glance-db-create-54njt" Dec 09 09:05:19 crc kubenswrapper[4786]: I1209 09:05:19.490187 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6hv5\" (UniqueName: \"kubernetes.io/projected/816aabda-3e11-440c-89af-3cf36c86c997-kube-api-access-h6hv5\") pod 
\"glance-db-create-54njt\" (UID: \"816aabda-3e11-440c-89af-3cf36c86c997\") " pod="openstack/glance-db-create-54njt" Dec 09 09:05:19 crc kubenswrapper[4786]: I1209 09:05:19.568771 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/67234e4f-66db-4e88-9da8-18841f39d886-db-sync-config-data\") pod \"watcher-db-sync-n2tp7\" (UID: \"67234e4f-66db-4e88-9da8-18841f39d886\") " pod="openstack/watcher-db-sync-n2tp7" Dec 09 09:05:19 crc kubenswrapper[4786]: I1209 09:05:19.568840 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5fb5\" (UniqueName: \"kubernetes.io/projected/67234e4f-66db-4e88-9da8-18841f39d886-kube-api-access-d5fb5\") pod \"watcher-db-sync-n2tp7\" (UID: \"67234e4f-66db-4e88-9da8-18841f39d886\") " pod="openstack/watcher-db-sync-n2tp7" Dec 09 09:05:19 crc kubenswrapper[4786]: I1209 09:05:19.569259 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67234e4f-66db-4e88-9da8-18841f39d886-combined-ca-bundle\") pod \"watcher-db-sync-n2tp7\" (UID: \"67234e4f-66db-4e88-9da8-18841f39d886\") " pod="openstack/watcher-db-sync-n2tp7" Dec 09 09:05:19 crc kubenswrapper[4786]: I1209 09:05:19.569335 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67234e4f-66db-4e88-9da8-18841f39d886-config-data\") pod \"watcher-db-sync-n2tp7\" (UID: \"67234e4f-66db-4e88-9da8-18841f39d886\") " pod="openstack/watcher-db-sync-n2tp7" Dec 09 09:05:19 crc kubenswrapper[4786]: I1209 09:05:19.569629 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pq8l7\" (UniqueName: \"kubernetes.io/projected/3cfed5d9-4bc8-4632-80b7-727dc329bcf9-kube-api-access-pq8l7\") 
pod \"neutron-db-create-ndgjl\" (UID: \"3cfed5d9-4bc8-4632-80b7-727dc329bcf9\") " pod="openstack/neutron-db-create-ndgjl" Dec 09 09:05:19 crc kubenswrapper[4786]: I1209 09:05:19.662759 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-54njt" Dec 09 09:05:19 crc kubenswrapper[4786]: I1209 09:05:19.671216 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67234e4f-66db-4e88-9da8-18841f39d886-config-data\") pod \"watcher-db-sync-n2tp7\" (UID: \"67234e4f-66db-4e88-9da8-18841f39d886\") " pod="openstack/watcher-db-sync-n2tp7" Dec 09 09:05:19 crc kubenswrapper[4786]: I1209 09:05:19.671329 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pq8l7\" (UniqueName: \"kubernetes.io/projected/3cfed5d9-4bc8-4632-80b7-727dc329bcf9-kube-api-access-pq8l7\") pod \"neutron-db-create-ndgjl\" (UID: \"3cfed5d9-4bc8-4632-80b7-727dc329bcf9\") " pod="openstack/neutron-db-create-ndgjl" Dec 09 09:05:19 crc kubenswrapper[4786]: I1209 09:05:19.671410 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/67234e4f-66db-4e88-9da8-18841f39d886-db-sync-config-data\") pod \"watcher-db-sync-n2tp7\" (UID: \"67234e4f-66db-4e88-9da8-18841f39d886\") " pod="openstack/watcher-db-sync-n2tp7" Dec 09 09:05:19 crc kubenswrapper[4786]: I1209 09:05:19.671470 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5fb5\" (UniqueName: \"kubernetes.io/projected/67234e4f-66db-4e88-9da8-18841f39d886-kube-api-access-d5fb5\") pod \"watcher-db-sync-n2tp7\" (UID: \"67234e4f-66db-4e88-9da8-18841f39d886\") " pod="openstack/watcher-db-sync-n2tp7" Dec 09 09:05:19 crc kubenswrapper[4786]: I1209 09:05:19.671533 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/67234e4f-66db-4e88-9da8-18841f39d886-combined-ca-bundle\") pod \"watcher-db-sync-n2tp7\" (UID: \"67234e4f-66db-4e88-9da8-18841f39d886\") " pod="openstack/watcher-db-sync-n2tp7" Dec 09 09:05:19 crc kubenswrapper[4786]: I1209 09:05:19.674906 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67234e4f-66db-4e88-9da8-18841f39d886-config-data\") pod \"watcher-db-sync-n2tp7\" (UID: \"67234e4f-66db-4e88-9da8-18841f39d886\") " pod="openstack/watcher-db-sync-n2tp7" Dec 09 09:05:19 crc kubenswrapper[4786]: I1209 09:05:19.676004 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/67234e4f-66db-4e88-9da8-18841f39d886-db-sync-config-data\") pod \"watcher-db-sync-n2tp7\" (UID: \"67234e4f-66db-4e88-9da8-18841f39d886\") " pod="openstack/watcher-db-sync-n2tp7" Dec 09 09:05:19 crc kubenswrapper[4786]: I1209 09:05:19.676247 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67234e4f-66db-4e88-9da8-18841f39d886-combined-ca-bundle\") pod \"watcher-db-sync-n2tp7\" (UID: \"67234e4f-66db-4e88-9da8-18841f39d886\") " pod="openstack/watcher-db-sync-n2tp7" Dec 09 09:05:19 crc kubenswrapper[4786]: I1209 09:05:19.706360 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5fb5\" (UniqueName: \"kubernetes.io/projected/67234e4f-66db-4e88-9da8-18841f39d886-kube-api-access-d5fb5\") pod \"watcher-db-sync-n2tp7\" (UID: \"67234e4f-66db-4e88-9da8-18841f39d886\") " pod="openstack/watcher-db-sync-n2tp7" Dec 09 09:05:19 crc kubenswrapper[4786]: I1209 09:05:19.721284 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pq8l7\" (UniqueName: \"kubernetes.io/projected/3cfed5d9-4bc8-4632-80b7-727dc329bcf9-kube-api-access-pq8l7\") pod \"neutron-db-create-ndgjl\" (UID: 
\"3cfed5d9-4bc8-4632-80b7-727dc329bcf9\") " pod="openstack/neutron-db-create-ndgjl" Dec 09 09:05:19 crc kubenswrapper[4786]: I1209 09:05:19.736110 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-ndgjl" Dec 09 09:05:19 crc kubenswrapper[4786]: I1209 09:05:19.752266 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-n2tp7" Dec 09 09:05:20 crc kubenswrapper[4786]: I1209 09:05:20.352122 4786 generic.go:334] "Generic (PLEG): container finished" podID="d8cb6939-c479-4114-8191-73a18f5f733a" containerID="e26fbebb10a95541bdbab40554e40b8d675a5da9c0225ad41a987510a846080d" exitCode=0 Dec 09 09:05:20 crc kubenswrapper[4786]: I1209 09:05:20.352260 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-775897b69-78frg" event={"ID":"d8cb6939-c479-4114-8191-73a18f5f733a","Type":"ContainerDied","Data":"e26fbebb10a95541bdbab40554e40b8d675a5da9c0225ad41a987510a846080d"} Dec 09 09:05:20 crc kubenswrapper[4786]: I1209 09:05:20.459411 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-775897b69-78frg" Dec 09 09:05:20 crc kubenswrapper[4786]: I1209 09:05:20.505836 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89sjq\" (UniqueName: \"kubernetes.io/projected/d8cb6939-c479-4114-8191-73a18f5f733a-kube-api-access-89sjq\") pod \"d8cb6939-c479-4114-8191-73a18f5f733a\" (UID: \"d8cb6939-c479-4114-8191-73a18f5f733a\") " Dec 09 09:05:20 crc kubenswrapper[4786]: I1209 09:05:20.506399 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d8cb6939-c479-4114-8191-73a18f5f733a-ovsdbserver-nb\") pod \"d8cb6939-c479-4114-8191-73a18f5f733a\" (UID: \"d8cb6939-c479-4114-8191-73a18f5f733a\") " Dec 09 09:05:20 crc kubenswrapper[4786]: I1209 09:05:20.506563 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8cb6939-c479-4114-8191-73a18f5f733a-config\") pod \"d8cb6939-c479-4114-8191-73a18f5f733a\" (UID: \"d8cb6939-c479-4114-8191-73a18f5f733a\") " Dec 09 09:05:20 crc kubenswrapper[4786]: I1209 09:05:20.506788 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d8cb6939-c479-4114-8191-73a18f5f733a-ovsdbserver-sb\") pod \"d8cb6939-c479-4114-8191-73a18f5f733a\" (UID: \"d8cb6939-c479-4114-8191-73a18f5f733a\") " Dec 09 09:05:20 crc kubenswrapper[4786]: I1209 09:05:20.506944 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8cb6939-c479-4114-8191-73a18f5f733a-dns-svc\") pod \"d8cb6939-c479-4114-8191-73a18f5f733a\" (UID: \"d8cb6939-c479-4114-8191-73a18f5f733a\") " Dec 09 09:05:20 crc kubenswrapper[4786]: I1209 09:05:20.517677 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/d8cb6939-c479-4114-8191-73a18f5f733a-kube-api-access-89sjq" (OuterVolumeSpecName: "kube-api-access-89sjq") pod "d8cb6939-c479-4114-8191-73a18f5f733a" (UID: "d8cb6939-c479-4114-8191-73a18f5f733a"). InnerVolumeSpecName "kube-api-access-89sjq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:05:20 crc kubenswrapper[4786]: I1209 09:05:20.582806 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-n2tp7"] Dec 09 09:05:20 crc kubenswrapper[4786]: W1209 09:05:20.633623 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67234e4f_66db_4e88_9da8_18841f39d886.slice/crio-f4f3a988e9917233202bdb44ff56ca874300c28b52bcb1c9c79cb208a4bc827c WatchSource:0}: Error finding container f4f3a988e9917233202bdb44ff56ca874300c28b52bcb1c9c79cb208a4bc827c: Status 404 returned error can't find the container with id f4f3a988e9917233202bdb44ff56ca874300c28b52bcb1c9c79cb208a4bc827c Dec 09 09:05:20 crc kubenswrapper[4786]: I1209 09:05:20.633939 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89sjq\" (UniqueName: \"kubernetes.io/projected/d8cb6939-c479-4114-8191-73a18f5f733a-kube-api-access-89sjq\") on node \"crc\" DevicePath \"\"" Dec 09 09:05:20 crc kubenswrapper[4786]: I1209 09:05:20.649007 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-ndgjl"] Dec 09 09:05:20 crc kubenswrapper[4786]: W1209 09:05:20.661307 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3cfed5d9_4bc8_4632_80b7_727dc329bcf9.slice/crio-0dfc49727bd336e15e2e961a14fea30b7f55797abc976093530b29d8cb39ab27 WatchSource:0}: Error finding container 0dfc49727bd336e15e2e961a14fea30b7f55797abc976093530b29d8cb39ab27: Status 404 returned error can't find the container with id 
0dfc49727bd336e15e2e961a14fea30b7f55797abc976093530b29d8cb39ab27 Dec 09 09:05:20 crc kubenswrapper[4786]: I1209 09:05:20.678990 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-54njt"] Dec 09 09:05:20 crc kubenswrapper[4786]: I1209 09:05:20.685850 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8cb6939-c479-4114-8191-73a18f5f733a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d8cb6939-c479-4114-8191-73a18f5f733a" (UID: "d8cb6939-c479-4114-8191-73a18f5f733a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:05:20 crc kubenswrapper[4786]: I1209 09:05:20.688949 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8cb6939-c479-4114-8191-73a18f5f733a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d8cb6939-c479-4114-8191-73a18f5f733a" (UID: "d8cb6939-c479-4114-8191-73a18f5f733a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:05:20 crc kubenswrapper[4786]: I1209 09:05:20.691267 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8cb6939-c479-4114-8191-73a18f5f733a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d8cb6939-c479-4114-8191-73a18f5f733a" (UID: "d8cb6939-c479-4114-8191-73a18f5f733a"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:05:20 crc kubenswrapper[4786]: W1209 09:05:20.707981 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod816aabda_3e11_440c_89af_3cf36c86c997.slice/crio-34dc2be93d852d2177ea351201b7701e19f7800883e5c854c6d96707311992d2 WatchSource:0}: Error finding container 34dc2be93d852d2177ea351201b7701e19f7800883e5c854c6d96707311992d2: Status 404 returned error can't find the container with id 34dc2be93d852d2177ea351201b7701e19f7800883e5c854c6d96707311992d2 Dec 09 09:05:20 crc kubenswrapper[4786]: I1209 09:05:20.714996 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d8cb6939-c479-4114-8191-73a18f5f733a-config" (OuterVolumeSpecName: "config") pod "d8cb6939-c479-4114-8191-73a18f5f733a" (UID: "d8cb6939-c479-4114-8191-73a18f5f733a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:05:20 crc kubenswrapper[4786]: I1209 09:05:20.736757 4786 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d8cb6939-c479-4114-8191-73a18f5f733a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 09:05:20 crc kubenswrapper[4786]: I1209 09:05:20.736828 4786 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d8cb6939-c479-4114-8191-73a18f5f733a-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 09:05:20 crc kubenswrapper[4786]: I1209 09:05:20.736842 4786 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d8cb6939-c479-4114-8191-73a18f5f733a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 09:05:20 crc kubenswrapper[4786]: I1209 09:05:20.736857 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d8cb6939-c479-4114-8191-73a18f5f733a-config\") on node \"crc\" DevicePath \"\"" Dec 09 09:05:21 crc kubenswrapper[4786]: I1209 09:05:21.401642 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-n2tp7" event={"ID":"67234e4f-66db-4e88-9da8-18841f39d886","Type":"ContainerStarted","Data":"f4f3a988e9917233202bdb44ff56ca874300c28b52bcb1c9c79cb208a4bc827c"} Dec 09 09:05:21 crc kubenswrapper[4786]: I1209 09:05:21.416041 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-54njt" event={"ID":"816aabda-3e11-440c-89af-3cf36c86c997","Type":"ContainerStarted","Data":"81d583c8e9403335c9d7cfa2c9f39bed4ec771d58373955f06951c04dbf067bd"} Dec 09 09:05:21 crc kubenswrapper[4786]: I1209 09:05:21.416126 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-54njt" event={"ID":"816aabda-3e11-440c-89af-3cf36c86c997","Type":"ContainerStarted","Data":"34dc2be93d852d2177ea351201b7701e19f7800883e5c854c6d96707311992d2"} Dec 09 09:05:21 crc kubenswrapper[4786]: I1209 09:05:21.420049 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-775897b69-78frg" event={"ID":"d8cb6939-c479-4114-8191-73a18f5f733a","Type":"ContainerDied","Data":"655a668f0c6f33cc80dc67dabd87896abe456859cce7ecdf883b2a65c6dfb28a"} Dec 09 09:05:21 crc kubenswrapper[4786]: I1209 09:05:21.420108 4786 scope.go:117] "RemoveContainer" containerID="e26fbebb10a95541bdbab40554e40b8d675a5da9c0225ad41a987510a846080d" Dec 09 09:05:21 crc kubenswrapper[4786]: I1209 09:05:21.420328 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-775897b69-78frg" Dec 09 09:05:21 crc kubenswrapper[4786]: I1209 09:05:21.440865 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-ndgjl" event={"ID":"3cfed5d9-4bc8-4632-80b7-727dc329bcf9","Type":"ContainerStarted","Data":"eecf2db073ed9371ff64d7ba74f2ca4c48472488f89d85c16888bd817638af7c"} Dec 09 09:05:21 crc kubenswrapper[4786]: I1209 09:05:21.441116 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-ndgjl" event={"ID":"3cfed5d9-4bc8-4632-80b7-727dc329bcf9","Type":"ContainerStarted","Data":"0dfc49727bd336e15e2e961a14fea30b7f55797abc976093530b29d8cb39ab27"} Dec 09 09:05:21 crc kubenswrapper[4786]: I1209 09:05:21.459135 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-54njt" podStartSLOduration=2.45909532 podStartE2EDuration="2.45909532s" podCreationTimestamp="2025-12-09 09:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:05:21.442566461 +0000 UTC m=+1287.326187707" watchObservedRunningTime="2025-12-09 09:05:21.45909532 +0000 UTC m=+1287.342716556" Dec 09 09:05:21 crc kubenswrapper[4786]: I1209 09:05:21.481797 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-775897b69-78frg"] Dec 09 09:05:21 crc kubenswrapper[4786]: I1209 09:05:21.505248 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-775897b69-78frg"] Dec 09 09:05:21 crc kubenswrapper[4786]: I1209 09:05:21.533361 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-ndgjl" podStartSLOduration=2.533336657 podStartE2EDuration="2.533336657s" podCreationTimestamp="2025-12-09 09:05:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-09 09:05:21.519901774 +0000 UTC m=+1287.403523000" watchObservedRunningTime="2025-12-09 09:05:21.533336657 +0000 UTC m=+1287.416957883" Dec 09 09:05:21 crc kubenswrapper[4786]: I1209 09:05:21.550805 4786 scope.go:117] "RemoveContainer" containerID="327e0cc65b68f51849a67437515017e4928964338e1b7d8a12c90a865d56d73b" Dec 09 09:05:22 crc kubenswrapper[4786]: I1209 09:05:22.455519 4786 generic.go:334] "Generic (PLEG): container finished" podID="3cfed5d9-4bc8-4632-80b7-727dc329bcf9" containerID="eecf2db073ed9371ff64d7ba74f2ca4c48472488f89d85c16888bd817638af7c" exitCode=0 Dec 09 09:05:22 crc kubenswrapper[4786]: I1209 09:05:22.455619 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-ndgjl" event={"ID":"3cfed5d9-4bc8-4632-80b7-727dc329bcf9","Type":"ContainerDied","Data":"eecf2db073ed9371ff64d7ba74f2ca4c48472488f89d85c16888bd817638af7c"} Dec 09 09:05:22 crc kubenswrapper[4786]: I1209 09:05:22.458072 4786 generic.go:334] "Generic (PLEG): container finished" podID="816aabda-3e11-440c-89af-3cf36c86c997" containerID="81d583c8e9403335c9d7cfa2c9f39bed4ec771d58373955f06951c04dbf067bd" exitCode=0 Dec 09 09:05:22 crc kubenswrapper[4786]: I1209 09:05:22.458128 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-54njt" event={"ID":"816aabda-3e11-440c-89af-3cf36c86c997","Type":"ContainerDied","Data":"81d583c8e9403335c9d7cfa2c9f39bed4ec771d58373955f06951c04dbf067bd"} Dec 09 09:05:23 crc kubenswrapper[4786]: I1209 09:05:23.209397 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8cb6939-c479-4114-8191-73a18f5f733a" path="/var/lib/kubelet/pods/d8cb6939-c479-4114-8191-73a18f5f733a/volumes" Dec 09 09:05:24 crc kubenswrapper[4786]: I1209 09:05:24.155408 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-54njt" Dec 09 09:05:24 crc kubenswrapper[4786]: I1209 09:05:24.164415 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-ndgjl" Dec 09 09:05:24 crc kubenswrapper[4786]: I1209 09:05:24.278857 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pq8l7\" (UniqueName: \"kubernetes.io/projected/3cfed5d9-4bc8-4632-80b7-727dc329bcf9-kube-api-access-pq8l7\") pod \"3cfed5d9-4bc8-4632-80b7-727dc329bcf9\" (UID: \"3cfed5d9-4bc8-4632-80b7-727dc329bcf9\") " Dec 09 09:05:24 crc kubenswrapper[4786]: I1209 09:05:24.278942 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6hv5\" (UniqueName: \"kubernetes.io/projected/816aabda-3e11-440c-89af-3cf36c86c997-kube-api-access-h6hv5\") pod \"816aabda-3e11-440c-89af-3cf36c86c997\" (UID: \"816aabda-3e11-440c-89af-3cf36c86c997\") " Dec 09 09:05:24 crc kubenswrapper[4786]: I1209 09:05:24.291066 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cfed5d9-4bc8-4632-80b7-727dc329bcf9-kube-api-access-pq8l7" (OuterVolumeSpecName: "kube-api-access-pq8l7") pod "3cfed5d9-4bc8-4632-80b7-727dc329bcf9" (UID: "3cfed5d9-4bc8-4632-80b7-727dc329bcf9"). InnerVolumeSpecName "kube-api-access-pq8l7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:05:24 crc kubenswrapper[4786]: I1209 09:05:24.300083 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/816aabda-3e11-440c-89af-3cf36c86c997-kube-api-access-h6hv5" (OuterVolumeSpecName: "kube-api-access-h6hv5") pod "816aabda-3e11-440c-89af-3cf36c86c997" (UID: "816aabda-3e11-440c-89af-3cf36c86c997"). InnerVolumeSpecName "kube-api-access-h6hv5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:05:24 crc kubenswrapper[4786]: I1209 09:05:24.382118 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pq8l7\" (UniqueName: \"kubernetes.io/projected/3cfed5d9-4bc8-4632-80b7-727dc329bcf9-kube-api-access-pq8l7\") on node \"crc\" DevicePath \"\"" Dec 09 09:05:24 crc kubenswrapper[4786]: I1209 09:05:24.382164 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6hv5\" (UniqueName: \"kubernetes.io/projected/816aabda-3e11-440c-89af-3cf36c86c997-kube-api-access-h6hv5\") on node \"crc\" DevicePath \"\"" Dec 09 09:05:24 crc kubenswrapper[4786]: I1209 09:05:24.491464 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-54njt" event={"ID":"816aabda-3e11-440c-89af-3cf36c86c997","Type":"ContainerDied","Data":"34dc2be93d852d2177ea351201b7701e19f7800883e5c854c6d96707311992d2"} Dec 09 09:05:24 crc kubenswrapper[4786]: I1209 09:05:24.491523 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34dc2be93d852d2177ea351201b7701e19f7800883e5c854c6d96707311992d2" Dec 09 09:05:24 crc kubenswrapper[4786]: I1209 09:05:24.491550 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-54njt" Dec 09 09:05:24 crc kubenswrapper[4786]: I1209 09:05:24.494695 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-ndgjl"
Dec 09 09:05:24 crc kubenswrapper[4786]: I1209 09:05:24.495601 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-ndgjl" event={"ID":"3cfed5d9-4bc8-4632-80b7-727dc329bcf9","Type":"ContainerDied","Data":"0dfc49727bd336e15e2e961a14fea30b7f55797abc976093530b29d8cb39ab27"}
Dec 09 09:05:24 crc kubenswrapper[4786]: I1209 09:05:24.495653 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0dfc49727bd336e15e2e961a14fea30b7f55797abc976093530b29d8cb39ab27"
Dec 09 09:05:25 crc kubenswrapper[4786]: I1209 09:05:25.831243 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0"
Dec 09 09:05:25 crc kubenswrapper[4786]: I1209 09:05:25.837819 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0"
Dec 09 09:05:26 crc kubenswrapper[4786]: I1209 09:05:26.533599 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Dec 09 09:05:27 crc kubenswrapper[4786]: I1209 09:05:27.554477 4786 generic.go:334] "Generic (PLEG): container finished" podID="da3d31ea-e187-4e92-9553-10fcc78ce65c" containerID="e16042c8da5be967c7d8fd868116ec939d2109a19e619a353ba8a6ae2a4d09e8" exitCode=0
Dec 09 09:05:27 crc kubenswrapper[4786]: I1209 09:05:27.916887 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-lmgzp" event={"ID":"da3d31ea-e187-4e92-9553-10fcc78ce65c","Type":"ContainerDied","Data":"e16042c8da5be967c7d8fd868116ec939d2109a19e619a353ba8a6ae2a4d09e8"}
Dec 09 09:05:27 crc kubenswrapper[4786]: I1209 09:05:27.917238 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-b672-account-create-lpjbb"]
Dec 09 09:05:27 crc kubenswrapper[4786]: E1209 09:05:27.917694 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="816aabda-3e11-440c-89af-3cf36c86c997" containerName="mariadb-database-create"
Dec 09 09:05:27 crc kubenswrapper[4786]: I1209 09:05:27.917716 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="816aabda-3e11-440c-89af-3cf36c86c997" containerName="mariadb-database-create"
Dec 09 09:05:27 crc kubenswrapper[4786]: E1209 09:05:27.917747 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8cb6939-c479-4114-8191-73a18f5f733a" containerName="init"
Dec 09 09:05:27 crc kubenswrapper[4786]: I1209 09:05:27.917760 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8cb6939-c479-4114-8191-73a18f5f733a" containerName="init"
Dec 09 09:05:27 crc kubenswrapper[4786]: E1209 09:05:27.917770 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cfed5d9-4bc8-4632-80b7-727dc329bcf9" containerName="mariadb-database-create"
Dec 09 09:05:27 crc kubenswrapper[4786]: I1209 09:05:27.917778 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cfed5d9-4bc8-4632-80b7-727dc329bcf9" containerName="mariadb-database-create"
Dec 09 09:05:27 crc kubenswrapper[4786]: E1209 09:05:27.917800 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8cb6939-c479-4114-8191-73a18f5f733a" containerName="dnsmasq-dns"
Dec 09 09:05:27 crc kubenswrapper[4786]: I1209 09:05:27.917810 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8cb6939-c479-4114-8191-73a18f5f733a" containerName="dnsmasq-dns"
Dec 09 09:05:27 crc kubenswrapper[4786]: I1209 09:05:27.918050 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cfed5d9-4bc8-4632-80b7-727dc329bcf9" containerName="mariadb-database-create"
Dec 09 09:05:27 crc kubenswrapper[4786]: I1209 09:05:27.918076 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="816aabda-3e11-440c-89af-3cf36c86c997" containerName="mariadb-database-create"
Dec 09 09:05:27 crc kubenswrapper[4786]: I1209 09:05:27.918094 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8cb6939-c479-4114-8191-73a18f5f733a" containerName="dnsmasq-dns"
Dec 09 09:05:27 crc kubenswrapper[4786]: I1209 09:05:27.918797 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b672-account-create-lpjbb"]
Dec 09 09:05:27 crc kubenswrapper[4786]: I1209 09:05:27.918823 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-7011-account-create-8bwd5"]
Dec 09 09:05:27 crc kubenswrapper[4786]: I1209 09:05:27.919141 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b672-account-create-lpjbb"
Dec 09 09:05:27 crc kubenswrapper[4786]: I1209 09:05:27.919798 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-7011-account-create-8bwd5"]
Dec 09 09:05:27 crc kubenswrapper[4786]: I1209 09:05:27.919862 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-7011-account-create-8bwd5"
Dec 09 09:05:27 crc kubenswrapper[4786]: I1209 09:05:27.925778 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Dec 09 09:05:27 crc kubenswrapper[4786]: I1209 09:05:27.925778 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Dec 09 09:05:28 crc kubenswrapper[4786]: I1209 09:05:28.023115 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kgc2\" (UniqueName: \"kubernetes.io/projected/9e6e672a-2488-41e8-8236-982d18b86886-kube-api-access-8kgc2\") pod \"barbican-7011-account-create-8bwd5\" (UID: \"9e6e672a-2488-41e8-8236-982d18b86886\") " pod="openstack/barbican-7011-account-create-8bwd5"
Dec 09 09:05:28 crc kubenswrapper[4786]: I1209 09:05:28.023237 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd6zq\" (UniqueName: \"kubernetes.io/projected/40435517-5196-4dbf-bb22-534c3fcb374f-kube-api-access-rd6zq\") pod \"cinder-b672-account-create-lpjbb\" (UID: \"40435517-5196-4dbf-bb22-534c3fcb374f\") " pod="openstack/cinder-b672-account-create-lpjbb"
Dec 09 09:05:28 crc kubenswrapper[4786]: I1209 09:05:28.124986 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rd6zq\" (UniqueName: \"kubernetes.io/projected/40435517-5196-4dbf-bb22-534c3fcb374f-kube-api-access-rd6zq\") pod \"cinder-b672-account-create-lpjbb\" (UID: \"40435517-5196-4dbf-bb22-534c3fcb374f\") " pod="openstack/cinder-b672-account-create-lpjbb"
Dec 09 09:05:28 crc kubenswrapper[4786]: I1209 09:05:28.125222 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kgc2\" (UniqueName: \"kubernetes.io/projected/9e6e672a-2488-41e8-8236-982d18b86886-kube-api-access-8kgc2\") pod \"barbican-7011-account-create-8bwd5\" (UID: \"9e6e672a-2488-41e8-8236-982d18b86886\") " pod="openstack/barbican-7011-account-create-8bwd5"
Dec 09 09:05:28 crc kubenswrapper[4786]: I1209 09:05:28.157232 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kgc2\" (UniqueName: \"kubernetes.io/projected/9e6e672a-2488-41e8-8236-982d18b86886-kube-api-access-8kgc2\") pod \"barbican-7011-account-create-8bwd5\" (UID: \"9e6e672a-2488-41e8-8236-982d18b86886\") " pod="openstack/barbican-7011-account-create-8bwd5"
Dec 09 09:05:28 crc kubenswrapper[4786]: I1209 09:05:28.168611 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rd6zq\" (UniqueName: \"kubernetes.io/projected/40435517-5196-4dbf-bb22-534c3fcb374f-kube-api-access-rd6zq\") pod \"cinder-b672-account-create-lpjbb\" (UID: \"40435517-5196-4dbf-bb22-534c3fcb374f\") " pod="openstack/cinder-b672-account-create-lpjbb"
Dec 09 09:05:28 crc kubenswrapper[4786]: I1209 09:05:28.251063 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b672-account-create-lpjbb"
Dec 09 09:05:28 crc kubenswrapper[4786]: I1209 09:05:28.280848 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-7011-account-create-8bwd5"
Dec 09 09:05:29 crc kubenswrapper[4786]: I1209 09:05:29.310778 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-f3f8-account-create-9pmhn"]
Dec 09 09:05:29 crc kubenswrapper[4786]: I1209 09:05:29.322412 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-f3f8-account-create-9pmhn"]
Dec 09 09:05:29 crc kubenswrapper[4786]: I1209 09:05:29.322581 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-f3f8-account-create-9pmhn"
Dec 09 09:05:29 crc kubenswrapper[4786]: I1209 09:05:29.326240 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Dec 09 09:05:29 crc kubenswrapper[4786]: I1209 09:05:29.388893 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsz46\" (UniqueName: \"kubernetes.io/projected/9b61e55b-a701-42f7-b2c4-de979b971c9b-kube-api-access-hsz46\") pod \"glance-f3f8-account-create-9pmhn\" (UID: \"9b61e55b-a701-42f7-b2c4-de979b971c9b\") " pod="openstack/glance-f3f8-account-create-9pmhn"
Dec 09 09:05:29 crc kubenswrapper[4786]: I1209 09:05:29.492192 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsz46\" (UniqueName: \"kubernetes.io/projected/9b61e55b-a701-42f7-b2c4-de979b971c9b-kube-api-access-hsz46\") pod \"glance-f3f8-account-create-9pmhn\" (UID: \"9b61e55b-a701-42f7-b2c4-de979b971c9b\") " pod="openstack/glance-f3f8-account-create-9pmhn"
Dec 09 09:05:29 crc kubenswrapper[4786]: I1209 09:05:29.516886 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsz46\" (UniqueName: \"kubernetes.io/projected/9b61e55b-a701-42f7-b2c4-de979b971c9b-kube-api-access-hsz46\") pod \"glance-f3f8-account-create-9pmhn\" (UID: \"9b61e55b-a701-42f7-b2c4-de979b971c9b\") " pod="openstack/glance-f3f8-account-create-9pmhn"
Dec 09 09:05:29 crc kubenswrapper[4786]: I1209 09:05:29.651356 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-f3f8-account-create-9pmhn"
Dec 09 09:05:32 crc kubenswrapper[4786]: I1209 09:05:32.224033 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-lmgzp"
Dec 09 09:05:32 crc kubenswrapper[4786]: I1209 09:05:32.366983 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qm69z\" (UniqueName: \"kubernetes.io/projected/da3d31ea-e187-4e92-9553-10fcc78ce65c-kube-api-access-qm69z\") pod \"da3d31ea-e187-4e92-9553-10fcc78ce65c\" (UID: \"da3d31ea-e187-4e92-9553-10fcc78ce65c\") "
Dec 09 09:05:32 crc kubenswrapper[4786]: I1209 09:05:32.367268 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da3d31ea-e187-4e92-9553-10fcc78ce65c-config-data\") pod \"da3d31ea-e187-4e92-9553-10fcc78ce65c\" (UID: \"da3d31ea-e187-4e92-9553-10fcc78ce65c\") "
Dec 09 09:05:32 crc kubenswrapper[4786]: I1209 09:05:32.367323 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da3d31ea-e187-4e92-9553-10fcc78ce65c-combined-ca-bundle\") pod \"da3d31ea-e187-4e92-9553-10fcc78ce65c\" (UID: \"da3d31ea-e187-4e92-9553-10fcc78ce65c\") "
Dec 09 09:05:32 crc kubenswrapper[4786]: I1209 09:05:32.400702 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da3d31ea-e187-4e92-9553-10fcc78ce65c-kube-api-access-qm69z" (OuterVolumeSpecName: "kube-api-access-qm69z") pod "da3d31ea-e187-4e92-9553-10fcc78ce65c" (UID: "da3d31ea-e187-4e92-9553-10fcc78ce65c"). InnerVolumeSpecName "kube-api-access-qm69z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 09:05:32 crc kubenswrapper[4786]: I1209 09:05:32.416823 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da3d31ea-e187-4e92-9553-10fcc78ce65c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "da3d31ea-e187-4e92-9553-10fcc78ce65c" (UID: "da3d31ea-e187-4e92-9553-10fcc78ce65c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 09:05:32 crc kubenswrapper[4786]: I1209 09:05:32.462794 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/da3d31ea-e187-4e92-9553-10fcc78ce65c-config-data" (OuterVolumeSpecName: "config-data") pod "da3d31ea-e187-4e92-9553-10fcc78ce65c" (UID: "da3d31ea-e187-4e92-9553-10fcc78ce65c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 09:05:32 crc kubenswrapper[4786]: I1209 09:05:32.469624 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/da3d31ea-e187-4e92-9553-10fcc78ce65c-config-data\") on node \"crc\" DevicePath \"\""
Dec 09 09:05:32 crc kubenswrapper[4786]: I1209 09:05:32.469680 4786 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/da3d31ea-e187-4e92-9553-10fcc78ce65c-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 09 09:05:32 crc kubenswrapper[4786]: I1209 09:05:32.469696 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qm69z\" (UniqueName: \"kubernetes.io/projected/da3d31ea-e187-4e92-9553-10fcc78ce65c-kube-api-access-qm69z\") on node \"crc\" DevicePath \"\""
Dec 09 09:05:32 crc kubenswrapper[4786]: I1209 09:05:32.610568 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-lmgzp" event={"ID":"da3d31ea-e187-4e92-9553-10fcc78ce65c","Type":"ContainerDied","Data":"289e841a02937e0847197a79715296c228499ce1fe1d94feaae87bcf10f338bc"}
Dec 09 09:05:32 crc kubenswrapper[4786]: I1209 09:05:32.610644 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="289e841a02937e0847197a79715296c228499ce1fe1d94feaae87bcf10f338bc"
Dec 09 09:05:32 crc kubenswrapper[4786]: I1209 09:05:32.610760 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-lmgzp"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.307477 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-b672-account-create-lpjbb"]
Dec 09 09:05:33 crc kubenswrapper[4786]: W1209 09:05:33.314708 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40435517_5196_4dbf_bb22_534c3fcb374f.slice/crio-c32f2f1bf24dfa399eea274501493aa5cffc8ffae16361f8fcdd6b421f59dae0 WatchSource:0}: Error finding container c32f2f1bf24dfa399eea274501493aa5cffc8ffae16361f8fcdd6b421f59dae0: Status 404 returned error can't find the container with id c32f2f1bf24dfa399eea274501493aa5cffc8ffae16361f8fcdd6b421f59dae0
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.405014 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-f3f8-account-create-9pmhn"]
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.413231 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-7011-account-create-8bwd5"]
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.550466 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-4pnnk"]
Dec 09 09:05:33 crc kubenswrapper[4786]: E1209 09:05:33.551256 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da3d31ea-e187-4e92-9553-10fcc78ce65c" containerName="keystone-db-sync"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.551276 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="da3d31ea-e187-4e92-9553-10fcc78ce65c" containerName="keystone-db-sync"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.552505 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="da3d31ea-e187-4e92-9553-10fcc78ce65c" containerName="keystone-db-sync"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.553460 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4pnnk"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.561787 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.569890 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zrhs7"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.570505 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.570878 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.576452 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5889ff575f-b8fpg"]
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.586270 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5889ff575f-b8fpg"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.606393 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-4pnnk"]
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.619836 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5889ff575f-b8fpg"]
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.665904 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f3f8-account-create-9pmhn" event={"ID":"9b61e55b-a701-42f7-b2c4-de979b971c9b","Type":"ContainerStarted","Data":"6d2a65c148505e3e51e18a857e02351c606e25b51565ba45aa558b485da36b56"}
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.677900 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7011-account-create-8bwd5" event={"ID":"9e6e672a-2488-41e8-8236-982d18b86886","Type":"ContainerStarted","Data":"171bde70333f72250cb71028d7a1983563046537158831fe2fe2f34045961802"}
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.689583 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b672-account-create-lpjbb" event={"ID":"40435517-5196-4dbf-bb22-534c3fcb374f","Type":"ContainerStarted","Data":"c32f2f1bf24dfa399eea274501493aa5cffc8ffae16361f8fcdd6b421f59dae0"}
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.715815 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40cac51c-95d7-4eb4-8445-f83c35202ed4-config-data\") pod \"keystone-bootstrap-4pnnk\" (UID: \"40cac51c-95d7-4eb4-8445-f83c35202ed4\") " pod="openstack/keystone-bootstrap-4pnnk"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.715894 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a8e0a6f1-1d12-4f4c-b4d8-4223008772e2-dns-swift-storage-0\") pod \"dnsmasq-dns-5889ff575f-b8fpg\" (UID: \"a8e0a6f1-1d12-4f4c-b4d8-4223008772e2\") " pod="openstack/dnsmasq-dns-5889ff575f-b8fpg"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.715923 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/40cac51c-95d7-4eb4-8445-f83c35202ed4-fernet-keys\") pod \"keystone-bootstrap-4pnnk\" (UID: \"40cac51c-95d7-4eb4-8445-f83c35202ed4\") " pod="openstack/keystone-bootstrap-4pnnk"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.715940 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a8e0a6f1-1d12-4f4c-b4d8-4223008772e2-dns-svc\") pod \"dnsmasq-dns-5889ff575f-b8fpg\" (UID: \"a8e0a6f1-1d12-4f4c-b4d8-4223008772e2\") " pod="openstack/dnsmasq-dns-5889ff575f-b8fpg"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.715981 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8e0a6f1-1d12-4f4c-b4d8-4223008772e2-config\") pod \"dnsmasq-dns-5889ff575f-b8fpg\" (UID: \"a8e0a6f1-1d12-4f4c-b4d8-4223008772e2\") " pod="openstack/dnsmasq-dns-5889ff575f-b8fpg"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.716021 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjc9q\" (UniqueName: \"kubernetes.io/projected/40cac51c-95d7-4eb4-8445-f83c35202ed4-kube-api-access-tjc9q\") pod \"keystone-bootstrap-4pnnk\" (UID: \"40cac51c-95d7-4eb4-8445-f83c35202ed4\") " pod="openstack/keystone-bootstrap-4pnnk"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.716040 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40cac51c-95d7-4eb4-8445-f83c35202ed4-scripts\") pod \"keystone-bootstrap-4pnnk\" (UID: \"40cac51c-95d7-4eb4-8445-f83c35202ed4\") " pod="openstack/keystone-bootstrap-4pnnk"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.716072 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/40cac51c-95d7-4eb4-8445-f83c35202ed4-credential-keys\") pod \"keystone-bootstrap-4pnnk\" (UID: \"40cac51c-95d7-4eb4-8445-f83c35202ed4\") " pod="openstack/keystone-bootstrap-4pnnk"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.716114 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktsxd\" (UniqueName: \"kubernetes.io/projected/a8e0a6f1-1d12-4f4c-b4d8-4223008772e2-kube-api-access-ktsxd\") pod \"dnsmasq-dns-5889ff575f-b8fpg\" (UID: \"a8e0a6f1-1d12-4f4c-b4d8-4223008772e2\") " pod="openstack/dnsmasq-dns-5889ff575f-b8fpg"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.716142 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a8e0a6f1-1d12-4f4c-b4d8-4223008772e2-ovsdbserver-nb\") pod \"dnsmasq-dns-5889ff575f-b8fpg\" (UID: \"a8e0a6f1-1d12-4f4c-b4d8-4223008772e2\") " pod="openstack/dnsmasq-dns-5889ff575f-b8fpg"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.716166 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a8e0a6f1-1d12-4f4c-b4d8-4223008772e2-ovsdbserver-sb\") pod \"dnsmasq-dns-5889ff575f-b8fpg\" (UID: \"a8e0a6f1-1d12-4f4c-b4d8-4223008772e2\") " pod="openstack/dnsmasq-dns-5889ff575f-b8fpg"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.716192 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40cac51c-95d7-4eb4-8445-f83c35202ed4-combined-ca-bundle\") pod \"keystone-bootstrap-4pnnk\" (UID: \"40cac51c-95d7-4eb4-8445-f83c35202ed4\") " pod="openstack/keystone-bootstrap-4pnnk"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.794750 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-694bf999df-qrh2d"]
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.801888 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-694bf999df-qrh2d"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.808933 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.808955 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.809090 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.817329 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-5rxlt"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.818954 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8985b7e0-0907-4cf8-ba9d-f75c5ff668da-logs\") pod \"horizon-694bf999df-qrh2d\" (UID: \"8985b7e0-0907-4cf8-ba9d-f75c5ff668da\") " pod="openstack/horizon-694bf999df-qrh2d"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.819013 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a8e0a6f1-1d12-4f4c-b4d8-4223008772e2-ovsdbserver-nb\") pod \"dnsmasq-dns-5889ff575f-b8fpg\" (UID: \"a8e0a6f1-1d12-4f4c-b4d8-4223008772e2\") " pod="openstack/dnsmasq-dns-5889ff575f-b8fpg"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.819057 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a8e0a6f1-1d12-4f4c-b4d8-4223008772e2-ovsdbserver-sb\") pod \"dnsmasq-dns-5889ff575f-b8fpg\" (UID: \"a8e0a6f1-1d12-4f4c-b4d8-4223008772e2\") " pod="openstack/dnsmasq-dns-5889ff575f-b8fpg"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.819087 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40cac51c-95d7-4eb4-8445-f83c35202ed4-combined-ca-bundle\") pod \"keystone-bootstrap-4pnnk\" (UID: \"40cac51c-95d7-4eb4-8445-f83c35202ed4\") " pod="openstack/keystone-bootstrap-4pnnk"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.819131 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40cac51c-95d7-4eb4-8445-f83c35202ed4-config-data\") pod \"keystone-bootstrap-4pnnk\" (UID: \"40cac51c-95d7-4eb4-8445-f83c35202ed4\") " pod="openstack/keystone-bootstrap-4pnnk"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.819159 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8985b7e0-0907-4cf8-ba9d-f75c5ff668da-config-data\") pod \"horizon-694bf999df-qrh2d\" (UID: \"8985b7e0-0907-4cf8-ba9d-f75c5ff668da\") " pod="openstack/horizon-694bf999df-qrh2d"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.819226 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a8e0a6f1-1d12-4f4c-b4d8-4223008772e2-dns-swift-storage-0\") pod \"dnsmasq-dns-5889ff575f-b8fpg\" (UID: \"a8e0a6f1-1d12-4f4c-b4d8-4223008772e2\") " pod="openstack/dnsmasq-dns-5889ff575f-b8fpg"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.819278 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8985b7e0-0907-4cf8-ba9d-f75c5ff668da-scripts\") pod \"horizon-694bf999df-qrh2d\" (UID: \"8985b7e0-0907-4cf8-ba9d-f75c5ff668da\") " pod="openstack/horizon-694bf999df-qrh2d"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.819306 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/40cac51c-95d7-4eb4-8445-f83c35202ed4-fernet-keys\") pod \"keystone-bootstrap-4pnnk\" (UID: \"40cac51c-95d7-4eb4-8445-f83c35202ed4\") " pod="openstack/keystone-bootstrap-4pnnk"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.819328 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a8e0a6f1-1d12-4f4c-b4d8-4223008772e2-dns-svc\") pod \"dnsmasq-dns-5889ff575f-b8fpg\" (UID: \"a8e0a6f1-1d12-4f4c-b4d8-4223008772e2\") " pod="openstack/dnsmasq-dns-5889ff575f-b8fpg"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.819384 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8e0a6f1-1d12-4f4c-b4d8-4223008772e2-config\") pod \"dnsmasq-dns-5889ff575f-b8fpg\" (UID: \"a8e0a6f1-1d12-4f4c-b4d8-4223008772e2\") " pod="openstack/dnsmasq-dns-5889ff575f-b8fpg"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.819411 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8985b7e0-0907-4cf8-ba9d-f75c5ff668da-horizon-secret-key\") pod \"horizon-694bf999df-qrh2d\" (UID: \"8985b7e0-0907-4cf8-ba9d-f75c5ff668da\") " pod="openstack/horizon-694bf999df-qrh2d"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.819469 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjc9q\" (UniqueName: \"kubernetes.io/projected/40cac51c-95d7-4eb4-8445-f83c35202ed4-kube-api-access-tjc9q\") pod \"keystone-bootstrap-4pnnk\" (UID: \"40cac51c-95d7-4eb4-8445-f83c35202ed4\") " pod="openstack/keystone-bootstrap-4pnnk"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.819492 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhq6q\" (UniqueName: \"kubernetes.io/projected/8985b7e0-0907-4cf8-ba9d-f75c5ff668da-kube-api-access-hhq6q\") pod \"horizon-694bf999df-qrh2d\" (UID: \"8985b7e0-0907-4cf8-ba9d-f75c5ff668da\") " pod="openstack/horizon-694bf999df-qrh2d"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.819514 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40cac51c-95d7-4eb4-8445-f83c35202ed4-scripts\") pod \"keystone-bootstrap-4pnnk\" (UID: \"40cac51c-95d7-4eb4-8445-f83c35202ed4\") " pod="openstack/keystone-bootstrap-4pnnk"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.819552 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/40cac51c-95d7-4eb4-8445-f83c35202ed4-credential-keys\") pod \"keystone-bootstrap-4pnnk\" (UID: \"40cac51c-95d7-4eb4-8445-f83c35202ed4\") " pod="openstack/keystone-bootstrap-4pnnk"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.819576 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktsxd\" (UniqueName: \"kubernetes.io/projected/a8e0a6f1-1d12-4f4c-b4d8-4223008772e2-kube-api-access-ktsxd\") pod \"dnsmasq-dns-5889ff575f-b8fpg\" (UID: \"a8e0a6f1-1d12-4f4c-b4d8-4223008772e2\") " pod="openstack/dnsmasq-dns-5889ff575f-b8fpg"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.820183 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a8e0a6f1-1d12-4f4c-b4d8-4223008772e2-ovsdbserver-sb\") pod \"dnsmasq-dns-5889ff575f-b8fpg\" (UID: \"a8e0a6f1-1d12-4f4c-b4d8-4223008772e2\") " pod="openstack/dnsmasq-dns-5889ff575f-b8fpg"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.820886 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a8e0a6f1-1d12-4f4c-b4d8-4223008772e2-ovsdbserver-nb\") pod \"dnsmasq-dns-5889ff575f-b8fpg\" (UID: \"a8e0a6f1-1d12-4f4c-b4d8-4223008772e2\") " pod="openstack/dnsmasq-dns-5889ff575f-b8fpg"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.821447 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a8e0a6f1-1d12-4f4c-b4d8-4223008772e2-dns-svc\") pod \"dnsmasq-dns-5889ff575f-b8fpg\" (UID: \"a8e0a6f1-1d12-4f4c-b4d8-4223008772e2\") " pod="openstack/dnsmasq-dns-5889ff575f-b8fpg"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.832237 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a8e0a6f1-1d12-4f4c-b4d8-4223008772e2-dns-swift-storage-0\") pod \"dnsmasq-dns-5889ff575f-b8fpg\" (UID: \"a8e0a6f1-1d12-4f4c-b4d8-4223008772e2\") " pod="openstack/dnsmasq-dns-5889ff575f-b8fpg"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.832818 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8e0a6f1-1d12-4f4c-b4d8-4223008772e2-config\") pod \"dnsmasq-dns-5889ff575f-b8fpg\" (UID: \"a8e0a6f1-1d12-4f4c-b4d8-4223008772e2\") " pod="openstack/dnsmasq-dns-5889ff575f-b8fpg"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.839543 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-694bf999df-qrh2d"]
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.844257 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40cac51c-95d7-4eb4-8445-f83c35202ed4-config-data\") pod \"keystone-bootstrap-4pnnk\" (UID: \"40cac51c-95d7-4eb4-8445-f83c35202ed4\") " pod="openstack/keystone-bootstrap-4pnnk"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.849263 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40cac51c-95d7-4eb4-8445-f83c35202ed4-combined-ca-bundle\") pod \"keystone-bootstrap-4pnnk\" (UID: \"40cac51c-95d7-4eb4-8445-f83c35202ed4\") " pod="openstack/keystone-bootstrap-4pnnk"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.849735 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/40cac51c-95d7-4eb4-8445-f83c35202ed4-fernet-keys\") pod \"keystone-bootstrap-4pnnk\" (UID: \"40cac51c-95d7-4eb4-8445-f83c35202ed4\") " pod="openstack/keystone-bootstrap-4pnnk"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.850238 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40cac51c-95d7-4eb4-8445-f83c35202ed4-scripts\") pod \"keystone-bootstrap-4pnnk\" (UID: \"40cac51c-95d7-4eb4-8445-f83c35202ed4\") " pod="openstack/keystone-bootstrap-4pnnk"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.857860 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktsxd\" (UniqueName: \"kubernetes.io/projected/a8e0a6f1-1d12-4f4c-b4d8-4223008772e2-kube-api-access-ktsxd\") pod \"dnsmasq-dns-5889ff575f-b8fpg\" (UID: \"a8e0a6f1-1d12-4f4c-b4d8-4223008772e2\") " pod="openstack/dnsmasq-dns-5889ff575f-b8fpg"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.858759 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/40cac51c-95d7-4eb4-8445-f83c35202ed4-credential-keys\") pod \"keystone-bootstrap-4pnnk\" (UID: \"40cac51c-95d7-4eb4-8445-f83c35202ed4\") " pod="openstack/keystone-bootstrap-4pnnk"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.870790 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjc9q\" (UniqueName: \"kubernetes.io/projected/40cac51c-95d7-4eb4-8445-f83c35202ed4-kube-api-access-tjc9q\") pod \"keystone-bootstrap-4pnnk\" (UID: \"40cac51c-95d7-4eb4-8445-f83c35202ed4\") " pod="openstack/keystone-bootstrap-4pnnk"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.935008 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8985b7e0-0907-4cf8-ba9d-f75c5ff668da-config-data\") pod \"horizon-694bf999df-qrh2d\" (UID: \"8985b7e0-0907-4cf8-ba9d-f75c5ff668da\") " pod="openstack/horizon-694bf999df-qrh2d"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.935083 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8985b7e0-0907-4cf8-ba9d-f75c5ff668da-scripts\") pod \"horizon-694bf999df-qrh2d\" (UID: \"8985b7e0-0907-4cf8-ba9d-f75c5ff668da\") " pod="openstack/horizon-694bf999df-qrh2d"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.935146 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8985b7e0-0907-4cf8-ba9d-f75c5ff668da-horizon-secret-key\") pod \"horizon-694bf999df-qrh2d\" (UID: \"8985b7e0-0907-4cf8-ba9d-f75c5ff668da\") " pod="openstack/horizon-694bf999df-qrh2d"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.935181 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhq6q\" (UniqueName: \"kubernetes.io/projected/8985b7e0-0907-4cf8-ba9d-f75c5ff668da-kube-api-access-hhq6q\") pod \"horizon-694bf999df-qrh2d\" (UID: \"8985b7e0-0907-4cf8-ba9d-f75c5ff668da\") " pod="openstack/horizon-694bf999df-qrh2d"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.935228 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8985b7e0-0907-4cf8-ba9d-f75c5ff668da-logs\") pod \"horizon-694bf999df-qrh2d\" (UID: \"8985b7e0-0907-4cf8-ba9d-f75c5ff668da\") " pod="openstack/horizon-694bf999df-qrh2d"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.936056 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8985b7e0-0907-4cf8-ba9d-f75c5ff668da-logs\") pod \"horizon-694bf999df-qrh2d\" (UID: \"8985b7e0-0907-4cf8-ba9d-f75c5ff668da\") " pod="openstack/horizon-694bf999df-qrh2d"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.937118 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8985b7e0-0907-4cf8-ba9d-f75c5ff668da-config-data\") pod \"horizon-694bf999df-qrh2d\" (UID: \"8985b7e0-0907-4cf8-ba9d-f75c5ff668da\") " pod="openstack/horizon-694bf999df-qrh2d"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.945602 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-4pnnk"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.946284 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5889ff575f-b8fpg"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.950937 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8985b7e0-0907-4cf8-ba9d-f75c5ff668da-horizon-secret-key\") pod \"horizon-694bf999df-qrh2d\" (UID: \"8985b7e0-0907-4cf8-ba9d-f75c5ff668da\") " pod="openstack/horizon-694bf999df-qrh2d"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.953096 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8985b7e0-0907-4cf8-ba9d-f75c5ff668da-scripts\") pod \"horizon-694bf999df-qrh2d\" (UID: \"8985b7e0-0907-4cf8-ba9d-f75c5ff668da\") " pod="openstack/horizon-694bf999df-qrh2d"
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.968554 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.971449 4786 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.979037 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 09 09:05:33 crc kubenswrapper[4786]: I1209 09:05:33.980056 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.010547 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhq6q\" (UniqueName: \"kubernetes.io/projected/8985b7e0-0907-4cf8-ba9d-f75c5ff668da-kube-api-access-hhq6q\") pod \"horizon-694bf999df-qrh2d\" (UID: \"8985b7e0-0907-4cf8-ba9d-f75c5ff668da\") " pod="openstack/horizon-694bf999df-qrh2d" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.042943 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.080168 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-f4489b979-86h4q"] Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.082308 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-f4489b979-86h4q" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.148972 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f61efc97-8443-4381-85d3-7b6dd9e5b132-config-data\") pod \"ceilometer-0\" (UID: \"f61efc97-8443-4381-85d3-7b6dd9e5b132\") " pod="openstack/ceilometer-0" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.149083 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f61efc97-8443-4381-85d3-7b6dd9e5b132-log-httpd\") pod \"ceilometer-0\" (UID: \"f61efc97-8443-4381-85d3-7b6dd9e5b132\") " pod="openstack/ceilometer-0" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.149114 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f61efc97-8443-4381-85d3-7b6dd9e5b132-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f61efc97-8443-4381-85d3-7b6dd9e5b132\") " pod="openstack/ceilometer-0" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.149172 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f61efc97-8443-4381-85d3-7b6dd9e5b132-scripts\") pod \"ceilometer-0\" (UID: \"f61efc97-8443-4381-85d3-7b6dd9e5b132\") " pod="openstack/ceilometer-0" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.149204 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f61efc97-8443-4381-85d3-7b6dd9e5b132-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f61efc97-8443-4381-85d3-7b6dd9e5b132\") " pod="openstack/ceilometer-0" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.149239 4786 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f61efc97-8443-4381-85d3-7b6dd9e5b132-run-httpd\") pod \"ceilometer-0\" (UID: \"f61efc97-8443-4381-85d3-7b6dd9e5b132\") " pod="openstack/ceilometer-0" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.149283 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlbfj\" (UniqueName: \"kubernetes.io/projected/f61efc97-8443-4381-85d3-7b6dd9e5b132-kube-api-access-mlbfj\") pod \"ceilometer-0\" (UID: \"f61efc97-8443-4381-85d3-7b6dd9e5b132\") " pod="openstack/ceilometer-0" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.159182 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-694bf999df-qrh2d" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.168818 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-f4489b979-86h4q"] Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.225479 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-w9zct"] Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.235314 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-w9zct" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.240200 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.240253 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-z9zbd" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.240406 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.246798 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5889ff575f-b8fpg"] Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.250839 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/94440ba5-1d12-4f3c-882c-2a648526482b-scripts\") pod \"horizon-f4489b979-86h4q\" (UID: \"94440ba5-1d12-4f3c-882c-2a648526482b\") " pod="openstack/horizon-f4489b979-86h4q" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.250902 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/94440ba5-1d12-4f3c-882c-2a648526482b-config-data\") pod \"horizon-f4489b979-86h4q\" (UID: \"94440ba5-1d12-4f3c-882c-2a648526482b\") " pod="openstack/horizon-f4489b979-86h4q" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.250936 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lhd2\" (UniqueName: \"kubernetes.io/projected/94440ba5-1d12-4f3c-882c-2a648526482b-kube-api-access-7lhd2\") pod \"horizon-f4489b979-86h4q\" (UID: \"94440ba5-1d12-4f3c-882c-2a648526482b\") " pod="openstack/horizon-f4489b979-86h4q" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.250971 4786 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f61efc97-8443-4381-85d3-7b6dd9e5b132-config-data\") pod \"ceilometer-0\" (UID: \"f61efc97-8443-4381-85d3-7b6dd9e5b132\") " pod="openstack/ceilometer-0" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.251174 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f61efc97-8443-4381-85d3-7b6dd9e5b132-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f61efc97-8443-4381-85d3-7b6dd9e5b132\") " pod="openstack/ceilometer-0" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.251247 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94440ba5-1d12-4f3c-882c-2a648526482b-logs\") pod \"horizon-f4489b979-86h4q\" (UID: \"94440ba5-1d12-4f3c-882c-2a648526482b\") " pod="openstack/horizon-f4489b979-86h4q" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.251299 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f61efc97-8443-4381-85d3-7b6dd9e5b132-log-httpd\") pod \"ceilometer-0\" (UID: \"f61efc97-8443-4381-85d3-7b6dd9e5b132\") " pod="openstack/ceilometer-0" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.251407 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/94440ba5-1d12-4f3c-882c-2a648526482b-horizon-secret-key\") pod \"horizon-f4489b979-86h4q\" (UID: \"94440ba5-1d12-4f3c-882c-2a648526482b\") " pod="openstack/horizon-f4489b979-86h4q" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.251583 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/f61efc97-8443-4381-85d3-7b6dd9e5b132-scripts\") pod \"ceilometer-0\" (UID: \"f61efc97-8443-4381-85d3-7b6dd9e5b132\") " pod="openstack/ceilometer-0" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.251636 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f61efc97-8443-4381-85d3-7b6dd9e5b132-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f61efc97-8443-4381-85d3-7b6dd9e5b132\") " pod="openstack/ceilometer-0" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.251699 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f61efc97-8443-4381-85d3-7b6dd9e5b132-run-httpd\") pod \"ceilometer-0\" (UID: \"f61efc97-8443-4381-85d3-7b6dd9e5b132\") " pod="openstack/ceilometer-0" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.251761 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlbfj\" (UniqueName: \"kubernetes.io/projected/f61efc97-8443-4381-85d3-7b6dd9e5b132-kube-api-access-mlbfj\") pod \"ceilometer-0\" (UID: \"f61efc97-8443-4381-85d3-7b6dd9e5b132\") " pod="openstack/ceilometer-0" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.252142 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f61efc97-8443-4381-85d3-7b6dd9e5b132-log-httpd\") pod \"ceilometer-0\" (UID: \"f61efc97-8443-4381-85d3-7b6dd9e5b132\") " pod="openstack/ceilometer-0" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.252916 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f61efc97-8443-4381-85d3-7b6dd9e5b132-run-httpd\") pod \"ceilometer-0\" (UID: \"f61efc97-8443-4381-85d3-7b6dd9e5b132\") " pod="openstack/ceilometer-0" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.262559 4786 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-w9zct"] Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.262973 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f61efc97-8443-4381-85d3-7b6dd9e5b132-scripts\") pod \"ceilometer-0\" (UID: \"f61efc97-8443-4381-85d3-7b6dd9e5b132\") " pod="openstack/ceilometer-0" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.263142 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f61efc97-8443-4381-85d3-7b6dd9e5b132-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f61efc97-8443-4381-85d3-7b6dd9e5b132\") " pod="openstack/ceilometer-0" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.263867 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f61efc97-8443-4381-85d3-7b6dd9e5b132-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f61efc97-8443-4381-85d3-7b6dd9e5b132\") " pod="openstack/ceilometer-0" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.266522 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f61efc97-8443-4381-85d3-7b6dd9e5b132-config-data\") pod \"ceilometer-0\" (UID: \"f61efc97-8443-4381-85d3-7b6dd9e5b132\") " pod="openstack/ceilometer-0" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.294193 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlbfj\" (UniqueName: \"kubernetes.io/projected/f61efc97-8443-4381-85d3-7b6dd9e5b132-kube-api-access-mlbfj\") pod \"ceilometer-0\" (UID: \"f61efc97-8443-4381-85d3-7b6dd9e5b132\") " pod="openstack/ceilometer-0" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.312528 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5db5d4bd9f-bjbhg"] Dec 09 
09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.317813 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5db5d4bd9f-bjbhg" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.329962 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5db5d4bd9f-bjbhg"] Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.343193 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.378125 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss7ch\" (UniqueName: \"kubernetes.io/projected/39bcb4fa-81ad-4ec5-8168-efcb8da5ab61-kube-api-access-ss7ch\") pod \"placement-db-sync-w9zct\" (UID: \"39bcb4fa-81ad-4ec5-8168-efcb8da5ab61\") " pod="openstack/placement-db-sync-w9zct" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.378225 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94440ba5-1d12-4f3c-882c-2a648526482b-logs\") pod \"horizon-f4489b979-86h4q\" (UID: \"94440ba5-1d12-4f3c-882c-2a648526482b\") " pod="openstack/horizon-f4489b979-86h4q" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.378344 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/94440ba5-1d12-4f3c-882c-2a648526482b-horizon-secret-key\") pod \"horizon-f4489b979-86h4q\" (UID: \"94440ba5-1d12-4f3c-882c-2a648526482b\") " pod="openstack/horizon-f4489b979-86h4q" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.378465 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39bcb4fa-81ad-4ec5-8168-efcb8da5ab61-logs\") pod \"placement-db-sync-w9zct\" (UID: \"39bcb4fa-81ad-4ec5-8168-efcb8da5ab61\") 
" pod="openstack/placement-db-sync-w9zct" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.378877 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39bcb4fa-81ad-4ec5-8168-efcb8da5ab61-scripts\") pod \"placement-db-sync-w9zct\" (UID: \"39bcb4fa-81ad-4ec5-8168-efcb8da5ab61\") " pod="openstack/placement-db-sync-w9zct" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.378971 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39bcb4fa-81ad-4ec5-8168-efcb8da5ab61-config-data\") pod \"placement-db-sync-w9zct\" (UID: \"39bcb4fa-81ad-4ec5-8168-efcb8da5ab61\") " pod="openstack/placement-db-sync-w9zct" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.379064 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39bcb4fa-81ad-4ec5-8168-efcb8da5ab61-combined-ca-bundle\") pod \"placement-db-sync-w9zct\" (UID: \"39bcb4fa-81ad-4ec5-8168-efcb8da5ab61\") " pod="openstack/placement-db-sync-w9zct" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.379132 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/94440ba5-1d12-4f3c-882c-2a648526482b-scripts\") pod \"horizon-f4489b979-86h4q\" (UID: \"94440ba5-1d12-4f3c-882c-2a648526482b\") " pod="openstack/horizon-f4489b979-86h4q" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.379225 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/94440ba5-1d12-4f3c-882c-2a648526482b-config-data\") pod \"horizon-f4489b979-86h4q\" (UID: \"94440ba5-1d12-4f3c-882c-2a648526482b\") " pod="openstack/horizon-f4489b979-86h4q" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 
09:05:34.379303 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lhd2\" (UniqueName: \"kubernetes.io/projected/94440ba5-1d12-4f3c-882c-2a648526482b-kube-api-access-7lhd2\") pod \"horizon-f4489b979-86h4q\" (UID: \"94440ba5-1d12-4f3c-882c-2a648526482b\") " pod="openstack/horizon-f4489b979-86h4q" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.384467 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/94440ba5-1d12-4f3c-882c-2a648526482b-scripts\") pod \"horizon-f4489b979-86h4q\" (UID: \"94440ba5-1d12-4f3c-882c-2a648526482b\") " pod="openstack/horizon-f4489b979-86h4q" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.402763 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94440ba5-1d12-4f3c-882c-2a648526482b-logs\") pod \"horizon-f4489b979-86h4q\" (UID: \"94440ba5-1d12-4f3c-882c-2a648526482b\") " pod="openstack/horizon-f4489b979-86h4q" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.403686 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/94440ba5-1d12-4f3c-882c-2a648526482b-config-data\") pod \"horizon-f4489b979-86h4q\" (UID: \"94440ba5-1d12-4f3c-882c-2a648526482b\") " pod="openstack/horizon-f4489b979-86h4q" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.407939 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/94440ba5-1d12-4f3c-882c-2a648526482b-horizon-secret-key\") pod \"horizon-f4489b979-86h4q\" (UID: \"94440ba5-1d12-4f3c-882c-2a648526482b\") " pod="openstack/horizon-f4489b979-86h4q" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.446912 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lhd2\" (UniqueName: 
\"kubernetes.io/projected/94440ba5-1d12-4f3c-882c-2a648526482b-kube-api-access-7lhd2\") pod \"horizon-f4489b979-86h4q\" (UID: \"94440ba5-1d12-4f3c-882c-2a648526482b\") " pod="openstack/horizon-f4489b979-86h4q" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.484828 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a3dff40-beee-4963-881e-647d277fcd7d-config\") pod \"dnsmasq-dns-5db5d4bd9f-bjbhg\" (UID: \"7a3dff40-beee-4963-881e-647d277fcd7d\") " pod="openstack/dnsmasq-dns-5db5d4bd9f-bjbhg" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.484915 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7a3dff40-beee-4963-881e-647d277fcd7d-dns-swift-storage-0\") pod \"dnsmasq-dns-5db5d4bd9f-bjbhg\" (UID: \"7a3dff40-beee-4963-881e-647d277fcd7d\") " pod="openstack/dnsmasq-dns-5db5d4bd9f-bjbhg" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.484941 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c57gz\" (UniqueName: \"kubernetes.io/projected/7a3dff40-beee-4963-881e-647d277fcd7d-kube-api-access-c57gz\") pod \"dnsmasq-dns-5db5d4bd9f-bjbhg\" (UID: \"7a3dff40-beee-4963-881e-647d277fcd7d\") " pod="openstack/dnsmasq-dns-5db5d4bd9f-bjbhg" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.484971 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39bcb4fa-81ad-4ec5-8168-efcb8da5ab61-scripts\") pod \"placement-db-sync-w9zct\" (UID: \"39bcb4fa-81ad-4ec5-8168-efcb8da5ab61\") " pod="openstack/placement-db-sync-w9zct" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.484988 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/7a3dff40-beee-4963-881e-647d277fcd7d-dns-svc\") pod \"dnsmasq-dns-5db5d4bd9f-bjbhg\" (UID: \"7a3dff40-beee-4963-881e-647d277fcd7d\") " pod="openstack/dnsmasq-dns-5db5d4bd9f-bjbhg" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.485011 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39bcb4fa-81ad-4ec5-8168-efcb8da5ab61-config-data\") pod \"placement-db-sync-w9zct\" (UID: \"39bcb4fa-81ad-4ec5-8168-efcb8da5ab61\") " pod="openstack/placement-db-sync-w9zct" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.485029 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39bcb4fa-81ad-4ec5-8168-efcb8da5ab61-combined-ca-bundle\") pod \"placement-db-sync-w9zct\" (UID: \"39bcb4fa-81ad-4ec5-8168-efcb8da5ab61\") " pod="openstack/placement-db-sync-w9zct" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.485063 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7a3dff40-beee-4963-881e-647d277fcd7d-ovsdbserver-nb\") pod \"dnsmasq-dns-5db5d4bd9f-bjbhg\" (UID: \"7a3dff40-beee-4963-881e-647d277fcd7d\") " pod="openstack/dnsmasq-dns-5db5d4bd9f-bjbhg" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.485113 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7a3dff40-beee-4963-881e-647d277fcd7d-ovsdbserver-sb\") pod \"dnsmasq-dns-5db5d4bd9f-bjbhg\" (UID: \"7a3dff40-beee-4963-881e-647d277fcd7d\") " pod="openstack/dnsmasq-dns-5db5d4bd9f-bjbhg" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.485135 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ss7ch\" (UniqueName: 
\"kubernetes.io/projected/39bcb4fa-81ad-4ec5-8168-efcb8da5ab61-kube-api-access-ss7ch\") pod \"placement-db-sync-w9zct\" (UID: \"39bcb4fa-81ad-4ec5-8168-efcb8da5ab61\") " pod="openstack/placement-db-sync-w9zct" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.485180 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39bcb4fa-81ad-4ec5-8168-efcb8da5ab61-logs\") pod \"placement-db-sync-w9zct\" (UID: \"39bcb4fa-81ad-4ec5-8168-efcb8da5ab61\") " pod="openstack/placement-db-sync-w9zct" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.485760 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39bcb4fa-81ad-4ec5-8168-efcb8da5ab61-logs\") pod \"placement-db-sync-w9zct\" (UID: \"39bcb4fa-81ad-4ec5-8168-efcb8da5ab61\") " pod="openstack/placement-db-sync-w9zct" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.495537 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39bcb4fa-81ad-4ec5-8168-efcb8da5ab61-scripts\") pod \"placement-db-sync-w9zct\" (UID: \"39bcb4fa-81ad-4ec5-8168-efcb8da5ab61\") " pod="openstack/placement-db-sync-w9zct" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.496733 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39bcb4fa-81ad-4ec5-8168-efcb8da5ab61-combined-ca-bundle\") pod \"placement-db-sync-w9zct\" (UID: \"39bcb4fa-81ad-4ec5-8168-efcb8da5ab61\") " pod="openstack/placement-db-sync-w9zct" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.502396 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39bcb4fa-81ad-4ec5-8168-efcb8da5ab61-config-data\") pod \"placement-db-sync-w9zct\" (UID: \"39bcb4fa-81ad-4ec5-8168-efcb8da5ab61\") " 
pod="openstack/placement-db-sync-w9zct" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.508533 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ss7ch\" (UniqueName: \"kubernetes.io/projected/39bcb4fa-81ad-4ec5-8168-efcb8da5ab61-kube-api-access-ss7ch\") pod \"placement-db-sync-w9zct\" (UID: \"39bcb4fa-81ad-4ec5-8168-efcb8da5ab61\") " pod="openstack/placement-db-sync-w9zct" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.519749 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f4489b979-86h4q" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.586709 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a3dff40-beee-4963-881e-647d277fcd7d-config\") pod \"dnsmasq-dns-5db5d4bd9f-bjbhg\" (UID: \"7a3dff40-beee-4963-881e-647d277fcd7d\") " pod="openstack/dnsmasq-dns-5db5d4bd9f-bjbhg" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.586800 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7a3dff40-beee-4963-881e-647d277fcd7d-dns-swift-storage-0\") pod \"dnsmasq-dns-5db5d4bd9f-bjbhg\" (UID: \"7a3dff40-beee-4963-881e-647d277fcd7d\") " pod="openstack/dnsmasq-dns-5db5d4bd9f-bjbhg" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.586830 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c57gz\" (UniqueName: \"kubernetes.io/projected/7a3dff40-beee-4963-881e-647d277fcd7d-kube-api-access-c57gz\") pod \"dnsmasq-dns-5db5d4bd9f-bjbhg\" (UID: \"7a3dff40-beee-4963-881e-647d277fcd7d\") " pod="openstack/dnsmasq-dns-5db5d4bd9f-bjbhg" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.586870 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/7a3dff40-beee-4963-881e-647d277fcd7d-dns-svc\") pod \"dnsmasq-dns-5db5d4bd9f-bjbhg\" (UID: \"7a3dff40-beee-4963-881e-647d277fcd7d\") " pod="openstack/dnsmasq-dns-5db5d4bd9f-bjbhg" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.586926 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7a3dff40-beee-4963-881e-647d277fcd7d-ovsdbserver-nb\") pod \"dnsmasq-dns-5db5d4bd9f-bjbhg\" (UID: \"7a3dff40-beee-4963-881e-647d277fcd7d\") " pod="openstack/dnsmasq-dns-5db5d4bd9f-bjbhg" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.587370 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7a3dff40-beee-4963-881e-647d277fcd7d-ovsdbserver-sb\") pod \"dnsmasq-dns-5db5d4bd9f-bjbhg\" (UID: \"7a3dff40-beee-4963-881e-647d277fcd7d\") " pod="openstack/dnsmasq-dns-5db5d4bd9f-bjbhg" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.589468 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7a3dff40-beee-4963-881e-647d277fcd7d-dns-svc\") pod \"dnsmasq-dns-5db5d4bd9f-bjbhg\" (UID: \"7a3dff40-beee-4963-881e-647d277fcd7d\") " pod="openstack/dnsmasq-dns-5db5d4bd9f-bjbhg" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.589790 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7a3dff40-beee-4963-881e-647d277fcd7d-ovsdbserver-nb\") pod \"dnsmasq-dns-5db5d4bd9f-bjbhg\" (UID: \"7a3dff40-beee-4963-881e-647d277fcd7d\") " pod="openstack/dnsmasq-dns-5db5d4bd9f-bjbhg" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.590015 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7a3dff40-beee-4963-881e-647d277fcd7d-dns-swift-storage-0\") pod 
\"dnsmasq-dns-5db5d4bd9f-bjbhg\" (UID: \"7a3dff40-beee-4963-881e-647d277fcd7d\") " pod="openstack/dnsmasq-dns-5db5d4bd9f-bjbhg" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.590217 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a3dff40-beee-4963-881e-647d277fcd7d-config\") pod \"dnsmasq-dns-5db5d4bd9f-bjbhg\" (UID: \"7a3dff40-beee-4963-881e-647d277fcd7d\") " pod="openstack/dnsmasq-dns-5db5d4bd9f-bjbhg" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.590695 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7a3dff40-beee-4963-881e-647d277fcd7d-ovsdbserver-sb\") pod \"dnsmasq-dns-5db5d4bd9f-bjbhg\" (UID: \"7a3dff40-beee-4963-881e-647d277fcd7d\") " pod="openstack/dnsmasq-dns-5db5d4bd9f-bjbhg" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.612339 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c57gz\" (UniqueName: \"kubernetes.io/projected/7a3dff40-beee-4963-881e-647d277fcd7d-kube-api-access-c57gz\") pod \"dnsmasq-dns-5db5d4bd9f-bjbhg\" (UID: \"7a3dff40-beee-4963-881e-647d277fcd7d\") " pod="openstack/dnsmasq-dns-5db5d4bd9f-bjbhg" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.782330 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b672-account-create-lpjbb" event={"ID":"40435517-5196-4dbf-bb22-534c3fcb374f","Type":"ContainerStarted","Data":"641bcab2263e2e18181f26b2f7370a0c53ff5f0cfc9aedb042eec7b63d9ba76b"} Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.798057 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-n2tp7" event={"ID":"67234e4f-66db-4e88-9da8-18841f39d886","Type":"ContainerStarted","Data":"2b7cd80ea41defb641d14baac19fb4e26904cbb3618d1e8606e8f6210b4843c2"} Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.800935 4786 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/placement-db-sync-w9zct" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.817578 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-7011-account-create-8bwd5" podStartSLOduration=7.817527601 podStartE2EDuration="7.817527601s" podCreationTimestamp="2025-12-09 09:05:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:05:34.784224618 +0000 UTC m=+1300.667845864" watchObservedRunningTime="2025-12-09 09:05:34.817527601 +0000 UTC m=+1300.701148827" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.824077 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5db5d4bd9f-bjbhg" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.847538 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5889ff575f-b8fpg"] Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.847979 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-b672-account-create-lpjbb" podStartSLOduration=7.847950909 podStartE2EDuration="7.847950909s" podCreationTimestamp="2025-12-09 09:05:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:05:34.81792191 +0000 UTC m=+1300.701543156" watchObservedRunningTime="2025-12-09 09:05:34.847950909 +0000 UTC m=+1300.731572135" Dec 09 09:05:34 crc kubenswrapper[4786]: W1209 09:05:34.852945 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8e0a6f1_1d12_4f4c_b4d8_4223008772e2.slice/crio-e63fdcf23ec431666e1ed58ee4cc41c27be7fe3efe6f68e5ad6e699c8115788d WatchSource:0}: Error finding container e63fdcf23ec431666e1ed58ee4cc41c27be7fe3efe6f68e5ad6e699c8115788d: Status 404 
returned error can't find the container with id e63fdcf23ec431666e1ed58ee4cc41c27be7fe3efe6f68e5ad6e699c8115788d Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.890462 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-f3f8-account-create-9pmhn" podStartSLOduration=5.890441087 podStartE2EDuration="5.890441087s" podCreationTimestamp="2025-12-09 09:05:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:05:34.837514907 +0000 UTC m=+1300.721136133" watchObservedRunningTime="2025-12-09 09:05:34.890441087 +0000 UTC m=+1300.774062313" Dec 09 09:05:34 crc kubenswrapper[4786]: I1209 09:05:34.894478 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-sync-n2tp7" podStartSLOduration=3.711673777 podStartE2EDuration="15.894466626s" podCreationTimestamp="2025-12-09 09:05:19 +0000 UTC" firstStartedPulling="2025-12-09 09:05:20.650387695 +0000 UTC m=+1286.534008921" lastFinishedPulling="2025-12-09 09:05:32.833180534 +0000 UTC m=+1298.716801770" observedRunningTime="2025-12-09 09:05:34.867381062 +0000 UTC m=+1300.751002288" watchObservedRunningTime="2025-12-09 09:05:34.894466626 +0000 UTC m=+1300.778087842" Dec 09 09:05:35 crc kubenswrapper[4786]: I1209 09:05:35.008136 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-694bf999df-qrh2d"] Dec 09 09:05:35 crc kubenswrapper[4786]: I1209 09:05:35.126775 4786 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 09:05:35 crc kubenswrapper[4786]: I1209 09:05:35.261508 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-4pnnk"] Dec 09 09:05:35 crc kubenswrapper[4786]: I1209 09:05:35.283077 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-f4489b979-86h4q"] Dec 09 09:05:35 crc kubenswrapper[4786]: I1209 
09:05:35.308659 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 09:05:35 crc kubenswrapper[4786]: I1209 09:05:35.793805 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-w9zct"] Dec 09 09:05:35 crc kubenswrapper[4786]: I1209 09:05:35.819006 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5db5d4bd9f-bjbhg"] Dec 09 09:05:35 crc kubenswrapper[4786]: I1209 09:05:35.868534 4786 generic.go:334] "Generic (PLEG): container finished" podID="9b61e55b-a701-42f7-b2c4-de979b971c9b" containerID="518343404b352c42c8f6c73f07f9cda18ab3af20c1b383b8d0163c5c31c4ae76" exitCode=0 Dec 09 09:05:35 crc kubenswrapper[4786]: I1209 09:05:35.870001 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f3f8-account-create-9pmhn" event={"ID":"9b61e55b-a701-42f7-b2c4-de979b971c9b","Type":"ContainerDied","Data":"518343404b352c42c8f6c73f07f9cda18ab3af20c1b383b8d0163c5c31c4ae76"} Dec 09 09:05:35 crc kubenswrapper[4786]: I1209 09:05:35.887347 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5889ff575f-b8fpg" event={"ID":"a8e0a6f1-1d12-4f4c-b4d8-4223008772e2","Type":"ContainerStarted","Data":"e63fdcf23ec431666e1ed58ee4cc41c27be7fe3efe6f68e5ad6e699c8115788d"} Dec 09 09:05:35 crc kubenswrapper[4786]: I1209 09:05:35.887704 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5889ff575f-b8fpg" podUID="a8e0a6f1-1d12-4f4c-b4d8-4223008772e2" containerName="init" containerID="cri-o://b3c1db61d6ec3731d26b6ab113ba88602e6c305904cd61ccd6e173e4a08fc639" gracePeriod=10 Dec 09 09:05:35 crc kubenswrapper[4786]: I1209 09:05:35.915068 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-694bf999df-qrh2d" event={"ID":"8985b7e0-0907-4cf8-ba9d-f75c5ff668da","Type":"ContainerStarted","Data":"42e4f330e24a09c339b62a9dba5b0d4ee6bea7ef044103adaea047d9b855ca65"} Dec 09 09:05:35 crc 
kubenswrapper[4786]: I1209 09:05:35.919566 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4pnnk" event={"ID":"40cac51c-95d7-4eb4-8445-f83c35202ed4","Type":"ContainerStarted","Data":"42a849523250efd9cef1714b6b62acb6347d2759c576d6eec22d33dfbea0d0f5"} Dec 09 09:05:35 crc kubenswrapper[4786]: I1209 09:05:35.932683 4786 generic.go:334] "Generic (PLEG): container finished" podID="9e6e672a-2488-41e8-8236-982d18b86886" containerID="822247343e7df46c3f3dff718c838cb0c3fc2f3a020c137eab61f6b66d087e63" exitCode=0 Dec 09 09:05:35 crc kubenswrapper[4786]: I1209 09:05:35.932876 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7011-account-create-8bwd5" event={"ID":"9e6e672a-2488-41e8-8236-982d18b86886","Type":"ContainerDied","Data":"822247343e7df46c3f3dff718c838cb0c3fc2f3a020c137eab61f6b66d087e63"} Dec 09 09:05:35 crc kubenswrapper[4786]: I1209 09:05:35.948462 4786 generic.go:334] "Generic (PLEG): container finished" podID="40435517-5196-4dbf-bb22-534c3fcb374f" containerID="641bcab2263e2e18181f26b2f7370a0c53ff5f0cfc9aedb042eec7b63d9ba76b" exitCode=0 Dec 09 09:05:36 crc kubenswrapper[4786]: I1209 09:05:36.000545 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b672-account-create-lpjbb" event={"ID":"40435517-5196-4dbf-bb22-534c3fcb374f","Type":"ContainerDied","Data":"641bcab2263e2e18181f26b2f7370a0c53ff5f0cfc9aedb042eec7b63d9ba76b"} Dec 09 09:05:36 crc kubenswrapper[4786]: I1209 09:05:36.000665 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f4489b979-86h4q" event={"ID":"94440ba5-1d12-4f3c-882c-2a648526482b","Type":"ContainerStarted","Data":"caec82397aa4663ca4990b3d340401d56bb902387122efb6d5c800acec74c017"} Dec 09 09:05:36 crc kubenswrapper[4786]: I1209 09:05:36.000683 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5db5d4bd9f-bjbhg" 
event={"ID":"7a3dff40-beee-4963-881e-647d277fcd7d","Type":"ContainerStarted","Data":"84fdab36e00dcdeada6688183983a930ea4077c71fd0e2e040db60225ededc99"} Dec 09 09:05:36 crc kubenswrapper[4786]: I1209 09:05:36.000706 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f61efc97-8443-4381-85d3-7b6dd9e5b132","Type":"ContainerStarted","Data":"4aed4afefa764993ea70db03b2c400a45b1bda82bc035278f60d7644e864d334"} Dec 09 09:05:36 crc kubenswrapper[4786]: I1209 09:05:36.032700 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-4pnnk" podStartSLOduration=3.032669652 podStartE2EDuration="3.032669652s" podCreationTimestamp="2025-12-09 09:05:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:05:35.947106943 +0000 UTC m=+1301.830728169" watchObservedRunningTime="2025-12-09 09:05:36.032669652 +0000 UTC m=+1301.916290878" Dec 09 09:05:36 crc kubenswrapper[4786]: I1209 09:05:36.401608 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-694bf999df-qrh2d"] Dec 09 09:05:36 crc kubenswrapper[4786]: I1209 09:05:36.416223 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5cc88d847f-95rbt"] Dec 09 09:05:36 crc kubenswrapper[4786]: I1209 09:05:36.420464 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5cc88d847f-95rbt" Dec 09 09:05:36 crc kubenswrapper[4786]: I1209 09:05:36.450257 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5cc88d847f-95rbt"] Dec 09 09:05:36 crc kubenswrapper[4786]: I1209 09:05:36.469717 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/20f84ee9-c044-45eb-830c-94578b1af666-config-data\") pod \"horizon-5cc88d847f-95rbt\" (UID: \"20f84ee9-c044-45eb-830c-94578b1af666\") " pod="openstack/horizon-5cc88d847f-95rbt" Dec 09 09:05:36 crc kubenswrapper[4786]: I1209 09:05:36.469899 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20f84ee9-c044-45eb-830c-94578b1af666-logs\") pod \"horizon-5cc88d847f-95rbt\" (UID: \"20f84ee9-c044-45eb-830c-94578b1af666\") " pod="openstack/horizon-5cc88d847f-95rbt" Dec 09 09:05:36 crc kubenswrapper[4786]: I1209 09:05:36.470032 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/20f84ee9-c044-45eb-830c-94578b1af666-horizon-secret-key\") pod \"horizon-5cc88d847f-95rbt\" (UID: \"20f84ee9-c044-45eb-830c-94578b1af666\") " pod="openstack/horizon-5cc88d847f-95rbt" Dec 09 09:05:36 crc kubenswrapper[4786]: I1209 09:05:36.470066 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9l8h\" (UniqueName: \"kubernetes.io/projected/20f84ee9-c044-45eb-830c-94578b1af666-kube-api-access-j9l8h\") pod \"horizon-5cc88d847f-95rbt\" (UID: \"20f84ee9-c044-45eb-830c-94578b1af666\") " pod="openstack/horizon-5cc88d847f-95rbt" Dec 09 09:05:36 crc kubenswrapper[4786]: I1209 09:05:36.470195 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/20f84ee9-c044-45eb-830c-94578b1af666-scripts\") pod \"horizon-5cc88d847f-95rbt\" (UID: \"20f84ee9-c044-45eb-830c-94578b1af666\") " pod="openstack/horizon-5cc88d847f-95rbt" Dec 09 09:05:36 crc kubenswrapper[4786]: I1209 09:05:36.571727 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/20f84ee9-c044-45eb-830c-94578b1af666-config-data\") pod \"horizon-5cc88d847f-95rbt\" (UID: \"20f84ee9-c044-45eb-830c-94578b1af666\") " pod="openstack/horizon-5cc88d847f-95rbt" Dec 09 09:05:36 crc kubenswrapper[4786]: I1209 09:05:36.571792 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20f84ee9-c044-45eb-830c-94578b1af666-logs\") pod \"horizon-5cc88d847f-95rbt\" (UID: \"20f84ee9-c044-45eb-830c-94578b1af666\") " pod="openstack/horizon-5cc88d847f-95rbt" Dec 09 09:05:36 crc kubenswrapper[4786]: I1209 09:05:36.571838 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/20f84ee9-c044-45eb-830c-94578b1af666-horizon-secret-key\") pod \"horizon-5cc88d847f-95rbt\" (UID: \"20f84ee9-c044-45eb-830c-94578b1af666\") " pod="openstack/horizon-5cc88d847f-95rbt" Dec 09 09:05:36 crc kubenswrapper[4786]: I1209 09:05:36.571858 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9l8h\" (UniqueName: \"kubernetes.io/projected/20f84ee9-c044-45eb-830c-94578b1af666-kube-api-access-j9l8h\") pod \"horizon-5cc88d847f-95rbt\" (UID: \"20f84ee9-c044-45eb-830c-94578b1af666\") " pod="openstack/horizon-5cc88d847f-95rbt" Dec 09 09:05:36 crc kubenswrapper[4786]: I1209 09:05:36.571914 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/20f84ee9-c044-45eb-830c-94578b1af666-scripts\") pod 
\"horizon-5cc88d847f-95rbt\" (UID: \"20f84ee9-c044-45eb-830c-94578b1af666\") " pod="openstack/horizon-5cc88d847f-95rbt" Dec 09 09:05:36 crc kubenswrapper[4786]: I1209 09:05:36.572205 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20f84ee9-c044-45eb-830c-94578b1af666-logs\") pod \"horizon-5cc88d847f-95rbt\" (UID: \"20f84ee9-c044-45eb-830c-94578b1af666\") " pod="openstack/horizon-5cc88d847f-95rbt" Dec 09 09:05:36 crc kubenswrapper[4786]: I1209 09:05:36.572565 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/20f84ee9-c044-45eb-830c-94578b1af666-scripts\") pod \"horizon-5cc88d847f-95rbt\" (UID: \"20f84ee9-c044-45eb-830c-94578b1af666\") " pod="openstack/horizon-5cc88d847f-95rbt" Dec 09 09:05:36 crc kubenswrapper[4786]: I1209 09:05:36.595221 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/20f84ee9-c044-45eb-830c-94578b1af666-horizon-secret-key\") pod \"horizon-5cc88d847f-95rbt\" (UID: \"20f84ee9-c044-45eb-830c-94578b1af666\") " pod="openstack/horizon-5cc88d847f-95rbt" Dec 09 09:05:36 crc kubenswrapper[4786]: I1209 09:05:36.595957 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/20f84ee9-c044-45eb-830c-94578b1af666-config-data\") pod \"horizon-5cc88d847f-95rbt\" (UID: \"20f84ee9-c044-45eb-830c-94578b1af666\") " pod="openstack/horizon-5cc88d847f-95rbt" Dec 09 09:05:36 crc kubenswrapper[4786]: I1209 09:05:36.708345 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 09:05:36 crc kubenswrapper[4786]: I1209 09:05:36.769710 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9l8h\" (UniqueName: \"kubernetes.io/projected/20f84ee9-c044-45eb-830c-94578b1af666-kube-api-access-j9l8h\") pod 
\"horizon-5cc88d847f-95rbt\" (UID: \"20f84ee9-c044-45eb-830c-94578b1af666\") " pod="openstack/horizon-5cc88d847f-95rbt" Dec 09 09:05:36 crc kubenswrapper[4786]: I1209 09:05:36.803632 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5cc88d847f-95rbt" Dec 09 09:05:36 crc kubenswrapper[4786]: I1209 09:05:36.945067 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5889ff575f-b8fpg" Dec 09 09:05:37 crc kubenswrapper[4786]: I1209 09:05:37.021162 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a8e0a6f1-1d12-4f4c-b4d8-4223008772e2-ovsdbserver-nb\") pod \"a8e0a6f1-1d12-4f4c-b4d8-4223008772e2\" (UID: \"a8e0a6f1-1d12-4f4c-b4d8-4223008772e2\") " Dec 09 09:05:37 crc kubenswrapper[4786]: I1209 09:05:37.021224 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktsxd\" (UniqueName: \"kubernetes.io/projected/a8e0a6f1-1d12-4f4c-b4d8-4223008772e2-kube-api-access-ktsxd\") pod \"a8e0a6f1-1d12-4f4c-b4d8-4223008772e2\" (UID: \"a8e0a6f1-1d12-4f4c-b4d8-4223008772e2\") " Dec 09 09:05:37 crc kubenswrapper[4786]: I1209 09:05:37.021291 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a8e0a6f1-1d12-4f4c-b4d8-4223008772e2-dns-swift-storage-0\") pod \"a8e0a6f1-1d12-4f4c-b4d8-4223008772e2\" (UID: \"a8e0a6f1-1d12-4f4c-b4d8-4223008772e2\") " Dec 09 09:05:37 crc kubenswrapper[4786]: I1209 09:05:37.021365 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a8e0a6f1-1d12-4f4c-b4d8-4223008772e2-ovsdbserver-sb\") pod \"a8e0a6f1-1d12-4f4c-b4d8-4223008772e2\" (UID: \"a8e0a6f1-1d12-4f4c-b4d8-4223008772e2\") " Dec 09 09:05:37 crc kubenswrapper[4786]: I1209 09:05:37.021447 4786 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8e0a6f1-1d12-4f4c-b4d8-4223008772e2-config\") pod \"a8e0a6f1-1d12-4f4c-b4d8-4223008772e2\" (UID: \"a8e0a6f1-1d12-4f4c-b4d8-4223008772e2\") " Dec 09 09:05:37 crc kubenswrapper[4786]: I1209 09:05:37.021589 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a8e0a6f1-1d12-4f4c-b4d8-4223008772e2-dns-svc\") pod \"a8e0a6f1-1d12-4f4c-b4d8-4223008772e2\" (UID: \"a8e0a6f1-1d12-4f4c-b4d8-4223008772e2\") " Dec 09 09:05:37 crc kubenswrapper[4786]: I1209 09:05:37.047949 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4pnnk" event={"ID":"40cac51c-95d7-4eb4-8445-f83c35202ed4","Type":"ContainerStarted","Data":"00d414414fa82fdc80f7f1deb244a6a0c2ec6c083db69e22bceffe4abaf9dab2"} Dec 09 09:05:37 crc kubenswrapper[4786]: I1209 09:05:37.086754 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8e0a6f1-1d12-4f4c-b4d8-4223008772e2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a8e0a6f1-1d12-4f4c-b4d8-4223008772e2" (UID: "a8e0a6f1-1d12-4f4c-b4d8-4223008772e2"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:05:37 crc kubenswrapper[4786]: I1209 09:05:37.088009 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-w9zct" event={"ID":"39bcb4fa-81ad-4ec5-8168-efcb8da5ab61","Type":"ContainerStarted","Data":"4764cd5f08dffb192bdb35ea256c3203fd2e65c78ac7ac0b896d5dda640ee0bb"} Dec 09 09:05:37 crc kubenswrapper[4786]: I1209 09:05:37.112665 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8e0a6f1-1d12-4f4c-b4d8-4223008772e2-kube-api-access-ktsxd" (OuterVolumeSpecName: "kube-api-access-ktsxd") pod "a8e0a6f1-1d12-4f4c-b4d8-4223008772e2" (UID: "a8e0a6f1-1d12-4f4c-b4d8-4223008772e2"). InnerVolumeSpecName "kube-api-access-ktsxd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:05:37 crc kubenswrapper[4786]: I1209 09:05:37.118221 4786 generic.go:334] "Generic (PLEG): container finished" podID="7a3dff40-beee-4963-881e-647d277fcd7d" containerID="74b34b47961f75c9ec53cd229e881c0c45f9755755232504c7033bdafeb11f3e" exitCode=0 Dec 09 09:05:37 crc kubenswrapper[4786]: I1209 09:05:37.118230 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8e0a6f1-1d12-4f4c-b4d8-4223008772e2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a8e0a6f1-1d12-4f4c-b4d8-4223008772e2" (UID: "a8e0a6f1-1d12-4f4c-b4d8-4223008772e2"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:05:37 crc kubenswrapper[4786]: I1209 09:05:37.119220 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5db5d4bd9f-bjbhg" event={"ID":"7a3dff40-beee-4963-881e-647d277fcd7d","Type":"ContainerDied","Data":"74b34b47961f75c9ec53cd229e881c0c45f9755755232504c7033bdafeb11f3e"} Dec 09 09:05:37 crc kubenswrapper[4786]: I1209 09:05:37.123872 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8e0a6f1-1d12-4f4c-b4d8-4223008772e2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a8e0a6f1-1d12-4f4c-b4d8-4223008772e2" (UID: "a8e0a6f1-1d12-4f4c-b4d8-4223008772e2"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:05:37 crc kubenswrapper[4786]: I1209 09:05:37.126611 4786 generic.go:334] "Generic (PLEG): container finished" podID="a8e0a6f1-1d12-4f4c-b4d8-4223008772e2" containerID="b3c1db61d6ec3731d26b6ab113ba88602e6c305904cd61ccd6e173e4a08fc639" exitCode=0 Dec 09 09:05:37 crc kubenswrapper[4786]: I1209 09:05:37.126979 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5889ff575f-b8fpg" Dec 09 09:05:37 crc kubenswrapper[4786]: I1209 09:05:37.127240 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5889ff575f-b8fpg" event={"ID":"a8e0a6f1-1d12-4f4c-b4d8-4223008772e2","Type":"ContainerDied","Data":"b3c1db61d6ec3731d26b6ab113ba88602e6c305904cd61ccd6e173e4a08fc639"} Dec 09 09:05:37 crc kubenswrapper[4786]: I1209 09:05:37.127284 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5889ff575f-b8fpg" event={"ID":"a8e0a6f1-1d12-4f4c-b4d8-4223008772e2","Type":"ContainerDied","Data":"e63fdcf23ec431666e1ed58ee4cc41c27be7fe3efe6f68e5ad6e699c8115788d"} Dec 09 09:05:37 crc kubenswrapper[4786]: I1209 09:05:37.127308 4786 scope.go:117] "RemoveContainer" containerID="b3c1db61d6ec3731d26b6ab113ba88602e6c305904cd61ccd6e173e4a08fc639" Dec 09 09:05:37 crc kubenswrapper[4786]: I1209 09:05:37.127695 4786 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a8e0a6f1-1d12-4f4c-b4d8-4223008772e2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 09 09:05:37 crc kubenswrapper[4786]: I1209 09:05:37.128232 4786 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a8e0a6f1-1d12-4f4c-b4d8-4223008772e2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 09:05:37 crc kubenswrapper[4786]: I1209 09:05:37.128257 4786 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a8e0a6f1-1d12-4f4c-b4d8-4223008772e2-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 09:05:37 crc kubenswrapper[4786]: I1209 09:05:37.128276 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktsxd\" (UniqueName: \"kubernetes.io/projected/a8e0a6f1-1d12-4f4c-b4d8-4223008772e2-kube-api-access-ktsxd\") on node \"crc\" DevicePath \"\"" Dec 09 09:05:37 crc kubenswrapper[4786]: 
I1209 09:05:37.128342 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8e0a6f1-1d12-4f4c-b4d8-4223008772e2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a8e0a6f1-1d12-4f4c-b4d8-4223008772e2" (UID: "a8e0a6f1-1d12-4f4c-b4d8-4223008772e2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:05:37 crc kubenswrapper[4786]: I1209 09:05:37.151934 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8e0a6f1-1d12-4f4c-b4d8-4223008772e2-config" (OuterVolumeSpecName: "config") pod "a8e0a6f1-1d12-4f4c-b4d8-4223008772e2" (UID: "a8e0a6f1-1d12-4f4c-b4d8-4223008772e2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:05:37 crc kubenswrapper[4786]: I1209 09:05:37.231223 4786 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a8e0a6f1-1d12-4f4c-b4d8-4223008772e2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 09:05:37 crc kubenswrapper[4786]: I1209 09:05:37.231275 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8e0a6f1-1d12-4f4c-b4d8-4223008772e2-config\") on node \"crc\" DevicePath \"\"" Dec 09 09:05:37 crc kubenswrapper[4786]: E1209 09:05:37.369974 4786 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8e0a6f1_1d12_4f4c_b4d8_4223008772e2.slice\": RecentStats: unable to find data in memory cache]" Dec 09 09:05:37 crc kubenswrapper[4786]: I1209 09:05:37.395908 4786 scope.go:117] "RemoveContainer" containerID="b3c1db61d6ec3731d26b6ab113ba88602e6c305904cd61ccd6e173e4a08fc639" Dec 09 09:05:37 crc kubenswrapper[4786]: E1209 09:05:37.404488 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: 
code = NotFound desc = could not find container \"b3c1db61d6ec3731d26b6ab113ba88602e6c305904cd61ccd6e173e4a08fc639\": container with ID starting with b3c1db61d6ec3731d26b6ab113ba88602e6c305904cd61ccd6e173e4a08fc639 not found: ID does not exist" containerID="b3c1db61d6ec3731d26b6ab113ba88602e6c305904cd61ccd6e173e4a08fc639" Dec 09 09:05:37 crc kubenswrapper[4786]: I1209 09:05:37.404567 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3c1db61d6ec3731d26b6ab113ba88602e6c305904cd61ccd6e173e4a08fc639"} err="failed to get container status \"b3c1db61d6ec3731d26b6ab113ba88602e6c305904cd61ccd6e173e4a08fc639\": rpc error: code = NotFound desc = could not find container \"b3c1db61d6ec3731d26b6ab113ba88602e6c305904cd61ccd6e173e4a08fc639\": container with ID starting with b3c1db61d6ec3731d26b6ab113ba88602e6c305904cd61ccd6e173e4a08fc639 not found: ID does not exist" Dec 09 09:05:37 crc kubenswrapper[4786]: I1209 09:05:37.585581 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5889ff575f-b8fpg"] Dec 09 09:05:37 crc kubenswrapper[4786]: I1209 09:05:37.598333 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5889ff575f-b8fpg"] Dec 09 09:05:37 crc kubenswrapper[4786]: I1209 09:05:37.983945 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b672-account-create-lpjbb" Dec 09 09:05:38 crc kubenswrapper[4786]: I1209 09:05:38.078241 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5cc88d847f-95rbt"] Dec 09 09:05:38 crc kubenswrapper[4786]: I1209 09:05:38.091840 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-7011-account-create-8bwd5" Dec 09 09:05:38 crc kubenswrapper[4786]: I1209 09:05:38.094159 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rd6zq\" (UniqueName: \"kubernetes.io/projected/40435517-5196-4dbf-bb22-534c3fcb374f-kube-api-access-rd6zq\") pod \"40435517-5196-4dbf-bb22-534c3fcb374f\" (UID: \"40435517-5196-4dbf-bb22-534c3fcb374f\") " Dec 09 09:05:38 crc kubenswrapper[4786]: I1209 09:05:38.094363 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kgc2\" (UniqueName: \"kubernetes.io/projected/9e6e672a-2488-41e8-8236-982d18b86886-kube-api-access-8kgc2\") pod \"9e6e672a-2488-41e8-8236-982d18b86886\" (UID: \"9e6e672a-2488-41e8-8236-982d18b86886\") " Dec 09 09:05:38 crc kubenswrapper[4786]: I1209 09:05:38.118743 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40435517-5196-4dbf-bb22-534c3fcb374f-kube-api-access-rd6zq" (OuterVolumeSpecName: "kube-api-access-rd6zq") pod "40435517-5196-4dbf-bb22-534c3fcb374f" (UID: "40435517-5196-4dbf-bb22-534c3fcb374f"). InnerVolumeSpecName "kube-api-access-rd6zq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:05:38 crc kubenswrapper[4786]: I1209 09:05:38.120872 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e6e672a-2488-41e8-8236-982d18b86886-kube-api-access-8kgc2" (OuterVolumeSpecName: "kube-api-access-8kgc2") pod "9e6e672a-2488-41e8-8236-982d18b86886" (UID: "9e6e672a-2488-41e8-8236-982d18b86886"). InnerVolumeSpecName "kube-api-access-8kgc2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:05:38 crc kubenswrapper[4786]: W1209 09:05:38.123684 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod20f84ee9_c044_45eb_830c_94578b1af666.slice/crio-900875be69fec6140abe3a8a49f292996ec10c1fa8ab1bb1446798094aa34368 WatchSource:0}: Error finding container 900875be69fec6140abe3a8a49f292996ec10c1fa8ab1bb1446798094aa34368: Status 404 returned error can't find the container with id 900875be69fec6140abe3a8a49f292996ec10c1fa8ab1bb1446798094aa34368 Dec 09 09:05:38 crc kubenswrapper[4786]: I1209 09:05:38.151091 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-f3f8-account-create-9pmhn" Dec 09 09:05:38 crc kubenswrapper[4786]: I1209 09:05:38.156112 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-f3f8-account-create-9pmhn" Dec 09 09:05:38 crc kubenswrapper[4786]: I1209 09:05:38.156136 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-f3f8-account-create-9pmhn" event={"ID":"9b61e55b-a701-42f7-b2c4-de979b971c9b","Type":"ContainerDied","Data":"6d2a65c148505e3e51e18a857e02351c606e25b51565ba45aa558b485da36b56"} Dec 09 09:05:38 crc kubenswrapper[4786]: I1209 09:05:38.156504 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d2a65c148505e3e51e18a857e02351c606e25b51565ba45aa558b485da36b56" Dec 09 09:05:38 crc kubenswrapper[4786]: I1209 09:05:38.160747 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7011-account-create-8bwd5" event={"ID":"9e6e672a-2488-41e8-8236-982d18b86886","Type":"ContainerDied","Data":"171bde70333f72250cb71028d7a1983563046537158831fe2fe2f34045961802"} Dec 09 09:05:38 crc kubenswrapper[4786]: I1209 09:05:38.160802 4786 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="171bde70333f72250cb71028d7a1983563046537158831fe2fe2f34045961802" Dec 09 09:05:38 crc kubenswrapper[4786]: I1209 09:05:38.160833 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-7011-account-create-8bwd5" Dec 09 09:05:38 crc kubenswrapper[4786]: I1209 09:05:38.174481 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-b672-account-create-lpjbb" event={"ID":"40435517-5196-4dbf-bb22-534c3fcb374f","Type":"ContainerDied","Data":"c32f2f1bf24dfa399eea274501493aa5cffc8ffae16361f8fcdd6b421f59dae0"} Dec 09 09:05:38 crc kubenswrapper[4786]: I1209 09:05:38.174994 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c32f2f1bf24dfa399eea274501493aa5cffc8ffae16361f8fcdd6b421f59dae0" Dec 09 09:05:38 crc kubenswrapper[4786]: I1209 09:05:38.175085 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-b672-account-create-lpjbb" Dec 09 09:05:38 crc kubenswrapper[4786]: I1209 09:05:38.198403 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rd6zq\" (UniqueName: \"kubernetes.io/projected/40435517-5196-4dbf-bb22-534c3fcb374f-kube-api-access-rd6zq\") on node \"crc\" DevicePath \"\"" Dec 09 09:05:38 crc kubenswrapper[4786]: I1209 09:05:38.198475 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kgc2\" (UniqueName: \"kubernetes.io/projected/9e6e672a-2488-41e8-8236-982d18b86886-kube-api-access-8kgc2\") on node \"crc\" DevicePath \"\"" Dec 09 09:05:38 crc kubenswrapper[4786]: I1209 09:05:38.203898 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5cc88d847f-95rbt" event={"ID":"20f84ee9-c044-45eb-830c-94578b1af666","Type":"ContainerStarted","Data":"900875be69fec6140abe3a8a49f292996ec10c1fa8ab1bb1446798094aa34368"} Dec 09 09:05:38 crc kubenswrapper[4786]: I1209 09:05:38.208384 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5db5d4bd9f-bjbhg" event={"ID":"7a3dff40-beee-4963-881e-647d277fcd7d","Type":"ContainerStarted","Data":"f1cc373c6c1d34668a86285c4d2ae905bc8c340ba40b4b71ec2c37d51916242d"} Dec 09 09:05:38 crc kubenswrapper[4786]: I1209 09:05:38.208689 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5db5d4bd9f-bjbhg" Dec 09 09:05:38 crc kubenswrapper[4786]: I1209 09:05:38.245514 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5db5d4bd9f-bjbhg" podStartSLOduration=4.245459484 podStartE2EDuration="4.245459484s" podCreationTimestamp="2025-12-09 09:05:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:05:38.227211337 +0000 UTC m=+1304.110832583" watchObservedRunningTime="2025-12-09 09:05:38.245459484 +0000 UTC m=+1304.129080720" Dec 09 09:05:38 crc kubenswrapper[4786]: I1209 09:05:38.299269 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsz46\" (UniqueName: \"kubernetes.io/projected/9b61e55b-a701-42f7-b2c4-de979b971c9b-kube-api-access-hsz46\") pod \"9b61e55b-a701-42f7-b2c4-de979b971c9b\" (UID: \"9b61e55b-a701-42f7-b2c4-de979b971c9b\") " Dec 09 09:05:38 crc kubenswrapper[4786]: I1209 09:05:38.342227 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b61e55b-a701-42f7-b2c4-de979b971c9b-kube-api-access-hsz46" (OuterVolumeSpecName: "kube-api-access-hsz46") pod "9b61e55b-a701-42f7-b2c4-de979b971c9b" (UID: "9b61e55b-a701-42f7-b2c4-de979b971c9b"). InnerVolumeSpecName "kube-api-access-hsz46". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:05:38 crc kubenswrapper[4786]: I1209 09:05:38.404305 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsz46\" (UniqueName: \"kubernetes.io/projected/9b61e55b-a701-42f7-b2c4-de979b971c9b-kube-api-access-hsz46\") on node \"crc\" DevicePath \"\"" Dec 09 09:05:39 crc kubenswrapper[4786]: I1209 09:05:39.445693 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8e0a6f1-1d12-4f4c-b4d8-4223008772e2" path="/var/lib/kubelet/pods/a8e0a6f1-1d12-4f4c-b4d8-4223008772e2/volumes" Dec 09 09:05:40 crc kubenswrapper[4786]: I1209 09:05:40.056266 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-e5b7-account-create-4mwmr"] Dec 09 09:05:40 crc kubenswrapper[4786]: E1209 09:05:40.056916 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40435517-5196-4dbf-bb22-534c3fcb374f" containerName="mariadb-account-create" Dec 09 09:05:40 crc kubenswrapper[4786]: I1209 09:05:40.056938 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="40435517-5196-4dbf-bb22-534c3fcb374f" containerName="mariadb-account-create" Dec 09 09:05:40 crc kubenswrapper[4786]: E1209 09:05:40.056962 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e6e672a-2488-41e8-8236-982d18b86886" containerName="mariadb-account-create" Dec 09 09:05:40 crc kubenswrapper[4786]: I1209 09:05:40.056969 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e6e672a-2488-41e8-8236-982d18b86886" containerName="mariadb-account-create" Dec 09 09:05:40 crc kubenswrapper[4786]: E1209 09:05:40.056999 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8e0a6f1-1d12-4f4c-b4d8-4223008772e2" containerName="init" Dec 09 09:05:40 crc kubenswrapper[4786]: I1209 09:05:40.057007 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8e0a6f1-1d12-4f4c-b4d8-4223008772e2" containerName="init" Dec 09 09:05:40 crc kubenswrapper[4786]: E1209 
09:05:40.057021 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b61e55b-a701-42f7-b2c4-de979b971c9b" containerName="mariadb-account-create" Dec 09 09:05:40 crc kubenswrapper[4786]: I1209 09:05:40.057028 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b61e55b-a701-42f7-b2c4-de979b971c9b" containerName="mariadb-account-create" Dec 09 09:05:40 crc kubenswrapper[4786]: I1209 09:05:40.057236 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e6e672a-2488-41e8-8236-982d18b86886" containerName="mariadb-account-create" Dec 09 09:05:40 crc kubenswrapper[4786]: I1209 09:05:40.057250 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="40435517-5196-4dbf-bb22-534c3fcb374f" containerName="mariadb-account-create" Dec 09 09:05:40 crc kubenswrapper[4786]: I1209 09:05:40.057261 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8e0a6f1-1d12-4f4c-b4d8-4223008772e2" containerName="init" Dec 09 09:05:40 crc kubenswrapper[4786]: I1209 09:05:40.057274 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b61e55b-a701-42f7-b2c4-de979b971c9b" containerName="mariadb-account-create" Dec 09 09:05:40 crc kubenswrapper[4786]: I1209 09:05:40.058198 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-e5b7-account-create-4mwmr" Dec 09 09:05:40 crc kubenswrapper[4786]: I1209 09:05:40.068238 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-6m994"] Dec 09 09:05:40 crc kubenswrapper[4786]: I1209 09:05:40.070073 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-6m994" Dec 09 09:05:40 crc kubenswrapper[4786]: I1209 09:05:40.070888 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Dec 09 09:05:40 crc kubenswrapper[4786]: I1209 09:05:40.075687 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-h4v94" Dec 09 09:05:40 crc kubenswrapper[4786]: I1209 09:05:40.075687 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Dec 09 09:05:40 crc kubenswrapper[4786]: I1209 09:05:40.077926 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-e5b7-account-create-4mwmr"] Dec 09 09:05:40 crc kubenswrapper[4786]: I1209 09:05:40.114503 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-6m994"] Dec 09 09:05:40 crc kubenswrapper[4786]: I1209 09:05:40.130270 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c-combined-ca-bundle\") pod \"glance-db-sync-6m994\" (UID: \"4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c\") " pod="openstack/glance-db-sync-6m994" Dec 09 09:05:40 crc kubenswrapper[4786]: I1209 09:05:40.130413 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c-config-data\") pod \"glance-db-sync-6m994\" (UID: \"4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c\") " pod="openstack/glance-db-sync-6m994" Dec 09 09:05:40 crc kubenswrapper[4786]: I1209 09:05:40.130478 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c-db-sync-config-data\") pod \"glance-db-sync-6m994\" (UID: 
\"4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c\") " pod="openstack/glance-db-sync-6m994" Dec 09 09:05:40 crc kubenswrapper[4786]: I1209 09:05:40.130545 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z8dz\" (UniqueName: \"kubernetes.io/projected/e040fc5c-c169-4e21-9158-fee40fcb1f6e-kube-api-access-2z8dz\") pod \"neutron-e5b7-account-create-4mwmr\" (UID: \"e040fc5c-c169-4e21-9158-fee40fcb1f6e\") " pod="openstack/neutron-e5b7-account-create-4mwmr" Dec 09 09:05:40 crc kubenswrapper[4786]: I1209 09:05:40.130614 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrsfh\" (UniqueName: \"kubernetes.io/projected/4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c-kube-api-access-nrsfh\") pod \"glance-db-sync-6m994\" (UID: \"4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c\") " pod="openstack/glance-db-sync-6m994" Dec 09 09:05:40 crc kubenswrapper[4786]: I1209 09:05:40.233543 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c-config-data\") pod \"glance-db-sync-6m994\" (UID: \"4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c\") " pod="openstack/glance-db-sync-6m994" Dec 09 09:05:40 crc kubenswrapper[4786]: I1209 09:05:40.233634 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c-db-sync-config-data\") pod \"glance-db-sync-6m994\" (UID: \"4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c\") " pod="openstack/glance-db-sync-6m994" Dec 09 09:05:40 crc kubenswrapper[4786]: I1209 09:05:40.233694 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2z8dz\" (UniqueName: \"kubernetes.io/projected/e040fc5c-c169-4e21-9158-fee40fcb1f6e-kube-api-access-2z8dz\") pod \"neutron-e5b7-account-create-4mwmr\" (UID: 
\"e040fc5c-c169-4e21-9158-fee40fcb1f6e\") " pod="openstack/neutron-e5b7-account-create-4mwmr" Dec 09 09:05:40 crc kubenswrapper[4786]: I1209 09:05:40.233755 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrsfh\" (UniqueName: \"kubernetes.io/projected/4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c-kube-api-access-nrsfh\") pod \"glance-db-sync-6m994\" (UID: \"4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c\") " pod="openstack/glance-db-sync-6m994" Dec 09 09:05:40 crc kubenswrapper[4786]: I1209 09:05:40.233793 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c-combined-ca-bundle\") pod \"glance-db-sync-6m994\" (UID: \"4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c\") " pod="openstack/glance-db-sync-6m994" Dec 09 09:05:40 crc kubenswrapper[4786]: I1209 09:05:40.241825 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c-db-sync-config-data\") pod \"glance-db-sync-6m994\" (UID: \"4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c\") " pod="openstack/glance-db-sync-6m994" Dec 09 09:05:40 crc kubenswrapper[4786]: I1209 09:05:40.243850 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c-config-data\") pod \"glance-db-sync-6m994\" (UID: \"4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c\") " pod="openstack/glance-db-sync-6m994" Dec 09 09:05:40 crc kubenswrapper[4786]: I1209 09:05:40.254487 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c-combined-ca-bundle\") pod \"glance-db-sync-6m994\" (UID: \"4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c\") " pod="openstack/glance-db-sync-6m994" Dec 09 09:05:40 crc kubenswrapper[4786]: 
I1209 09:05:40.262845 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrsfh\" (UniqueName: \"kubernetes.io/projected/4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c-kube-api-access-nrsfh\") pod \"glance-db-sync-6m994\" (UID: \"4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c\") " pod="openstack/glance-db-sync-6m994" Dec 09 09:05:40 crc kubenswrapper[4786]: I1209 09:05:40.264138 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2z8dz\" (UniqueName: \"kubernetes.io/projected/e040fc5c-c169-4e21-9158-fee40fcb1f6e-kube-api-access-2z8dz\") pod \"neutron-e5b7-account-create-4mwmr\" (UID: \"e040fc5c-c169-4e21-9158-fee40fcb1f6e\") " pod="openstack/neutron-e5b7-account-create-4mwmr" Dec 09 09:05:40 crc kubenswrapper[4786]: I1209 09:05:40.396141 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-e5b7-account-create-4mwmr" Dec 09 09:05:40 crc kubenswrapper[4786]: I1209 09:05:40.409902 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-6m994" Dec 09 09:05:41 crc kubenswrapper[4786]: I1209 09:05:41.474209 4786 generic.go:334] "Generic (PLEG): container finished" podID="67234e4f-66db-4e88-9da8-18841f39d886" containerID="2b7cd80ea41defb641d14baac19fb4e26904cbb3618d1e8606e8f6210b4843c2" exitCode=0 Dec 09 09:05:41 crc kubenswrapper[4786]: I1209 09:05:41.474273 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-n2tp7" event={"ID":"67234e4f-66db-4e88-9da8-18841f39d886","Type":"ContainerDied","Data":"2b7cd80ea41defb641d14baac19fb4e26904cbb3618d1e8606e8f6210b4843c2"} Dec 09 09:05:42 crc kubenswrapper[4786]: I1209 09:05:42.564123 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-f4489b979-86h4q"] Dec 09 09:05:42 crc kubenswrapper[4786]: I1209 09:05:42.599022 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7bb95c86c4-js54w"] Dec 09 09:05:42 crc kubenswrapper[4786]: I1209 09:05:42.600612 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7bb95c86c4-js54w" Dec 09 09:05:42 crc kubenswrapper[4786]: I1209 09:05:42.604536 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Dec 09 09:05:42 crc kubenswrapper[4786]: I1209 09:05:42.622135 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7bb95c86c4-js54w"] Dec 09 09:05:42 crc kubenswrapper[4786]: I1209 09:05:42.874796 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/330dece7-bbfd-4d11-a979-b001581e8efe-horizon-tls-certs\") pod \"horizon-7bb95c86c4-js54w\" (UID: \"330dece7-bbfd-4d11-a979-b001581e8efe\") " pod="openstack/horizon-7bb95c86c4-js54w" Dec 09 09:05:42 crc kubenswrapper[4786]: I1209 09:05:42.874930 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/330dece7-bbfd-4d11-a979-b001581e8efe-horizon-secret-key\") pod \"horizon-7bb95c86c4-js54w\" (UID: \"330dece7-bbfd-4d11-a979-b001581e8efe\") " pod="openstack/horizon-7bb95c86c4-js54w" Dec 09 09:05:42 crc kubenswrapper[4786]: I1209 09:05:42.874972 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/330dece7-bbfd-4d11-a979-b001581e8efe-scripts\") pod \"horizon-7bb95c86c4-js54w\" (UID: \"330dece7-bbfd-4d11-a979-b001581e8efe\") " pod="openstack/horizon-7bb95c86c4-js54w" Dec 09 09:05:42 crc kubenswrapper[4786]: I1209 09:05:42.875041 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/330dece7-bbfd-4d11-a979-b001581e8efe-combined-ca-bundle\") pod \"horizon-7bb95c86c4-js54w\" (UID: \"330dece7-bbfd-4d11-a979-b001581e8efe\") " pod="openstack/horizon-7bb95c86c4-js54w" Dec 09 
09:05:42 crc kubenswrapper[4786]: I1209 09:05:42.875097 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/330dece7-bbfd-4d11-a979-b001581e8efe-logs\") pod \"horizon-7bb95c86c4-js54w\" (UID: \"330dece7-bbfd-4d11-a979-b001581e8efe\") " pod="openstack/horizon-7bb95c86c4-js54w" Dec 09 09:05:42 crc kubenswrapper[4786]: I1209 09:05:42.875139 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr4pj\" (UniqueName: \"kubernetes.io/projected/330dece7-bbfd-4d11-a979-b001581e8efe-kube-api-access-cr4pj\") pod \"horizon-7bb95c86c4-js54w\" (UID: \"330dece7-bbfd-4d11-a979-b001581e8efe\") " pod="openstack/horizon-7bb95c86c4-js54w" Dec 09 09:05:42 crc kubenswrapper[4786]: I1209 09:05:42.875167 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/330dece7-bbfd-4d11-a979-b001581e8efe-config-data\") pod \"horizon-7bb95c86c4-js54w\" (UID: \"330dece7-bbfd-4d11-a979-b001581e8efe\") " pod="openstack/horizon-7bb95c86c4-js54w" Dec 09 09:05:42 crc kubenswrapper[4786]: I1209 09:05:42.887916 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5cc88d847f-95rbt"] Dec 09 09:05:42 crc kubenswrapper[4786]: I1209 09:05:42.980677 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/330dece7-bbfd-4d11-a979-b001581e8efe-config-data\") pod \"horizon-7bb95c86c4-js54w\" (UID: \"330dece7-bbfd-4d11-a979-b001581e8efe\") " pod="openstack/horizon-7bb95c86c4-js54w" Dec 09 09:05:42 crc kubenswrapper[4786]: I1209 09:05:42.980793 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/330dece7-bbfd-4d11-a979-b001581e8efe-horizon-tls-certs\") pod 
\"horizon-7bb95c86c4-js54w\" (UID: \"330dece7-bbfd-4d11-a979-b001581e8efe\") " pod="openstack/horizon-7bb95c86c4-js54w" Dec 09 09:05:42 crc kubenswrapper[4786]: I1209 09:05:42.980845 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/330dece7-bbfd-4d11-a979-b001581e8efe-horizon-secret-key\") pod \"horizon-7bb95c86c4-js54w\" (UID: \"330dece7-bbfd-4d11-a979-b001581e8efe\") " pod="openstack/horizon-7bb95c86c4-js54w" Dec 09 09:05:42 crc kubenswrapper[4786]: I1209 09:05:42.980878 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/330dece7-bbfd-4d11-a979-b001581e8efe-scripts\") pod \"horizon-7bb95c86c4-js54w\" (UID: \"330dece7-bbfd-4d11-a979-b001581e8efe\") " pod="openstack/horizon-7bb95c86c4-js54w" Dec 09 09:05:42 crc kubenswrapper[4786]: I1209 09:05:42.980942 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/330dece7-bbfd-4d11-a979-b001581e8efe-combined-ca-bundle\") pod \"horizon-7bb95c86c4-js54w\" (UID: \"330dece7-bbfd-4d11-a979-b001581e8efe\") " pod="openstack/horizon-7bb95c86c4-js54w" Dec 09 09:05:42 crc kubenswrapper[4786]: I1209 09:05:42.980990 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/330dece7-bbfd-4d11-a979-b001581e8efe-logs\") pod \"horizon-7bb95c86c4-js54w\" (UID: \"330dece7-bbfd-4d11-a979-b001581e8efe\") " pod="openstack/horizon-7bb95c86c4-js54w" Dec 09 09:05:42 crc kubenswrapper[4786]: I1209 09:05:42.981026 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cr4pj\" (UniqueName: \"kubernetes.io/projected/330dece7-bbfd-4d11-a979-b001581e8efe-kube-api-access-cr4pj\") pod \"horizon-7bb95c86c4-js54w\" (UID: \"330dece7-bbfd-4d11-a979-b001581e8efe\") " 
pod="openstack/horizon-7bb95c86c4-js54w" Dec 09 09:05:42 crc kubenswrapper[4786]: I1209 09:05:42.984076 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/330dece7-bbfd-4d11-a979-b001581e8efe-config-data\") pod \"horizon-7bb95c86c4-js54w\" (UID: \"330dece7-bbfd-4d11-a979-b001581e8efe\") " pod="openstack/horizon-7bb95c86c4-js54w" Dec 09 09:05:42 crc kubenswrapper[4786]: I1209 09:05:42.986254 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/330dece7-bbfd-4d11-a979-b001581e8efe-scripts\") pod \"horizon-7bb95c86c4-js54w\" (UID: \"330dece7-bbfd-4d11-a979-b001581e8efe\") " pod="openstack/horizon-7bb95c86c4-js54w" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.000734 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/330dece7-bbfd-4d11-a979-b001581e8efe-horizon-secret-key\") pod \"horizon-7bb95c86c4-js54w\" (UID: \"330dece7-bbfd-4d11-a979-b001581e8efe\") " pod="openstack/horizon-7bb95c86c4-js54w" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.001636 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/330dece7-bbfd-4d11-a979-b001581e8efe-logs\") pod \"horizon-7bb95c86c4-js54w\" (UID: \"330dece7-bbfd-4d11-a979-b001581e8efe\") " pod="openstack/horizon-7bb95c86c4-js54w" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.006862 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/330dece7-bbfd-4d11-a979-b001581e8efe-horizon-tls-certs\") pod \"horizon-7bb95c86c4-js54w\" (UID: \"330dece7-bbfd-4d11-a979-b001581e8efe\") " pod="openstack/horizon-7bb95c86c4-js54w" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.013504 4786 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/horizon-655866bfb6-4l6wv"] Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.015371 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-655866bfb6-4l6wv" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.028205 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/330dece7-bbfd-4d11-a979-b001581e8efe-combined-ca-bundle\") pod \"horizon-7bb95c86c4-js54w\" (UID: \"330dece7-bbfd-4d11-a979-b001581e8efe\") " pod="openstack/horizon-7bb95c86c4-js54w" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.039231 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cr4pj\" (UniqueName: \"kubernetes.io/projected/330dece7-bbfd-4d11-a979-b001581e8efe-kube-api-access-cr4pj\") pod \"horizon-7bb95c86c4-js54w\" (UID: \"330dece7-bbfd-4d11-a979-b001581e8efe\") " pod="openstack/horizon-7bb95c86c4-js54w" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.054473 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-655866bfb6-4l6wv"] Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.220193 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52a5cc59-1e73-4e04-ba05-80f0c364b351-combined-ca-bundle\") pod \"horizon-655866bfb6-4l6wv\" (UID: \"52a5cc59-1e73-4e04-ba05-80f0c364b351\") " pod="openstack/horizon-655866bfb6-4l6wv" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.220989 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/52a5cc59-1e73-4e04-ba05-80f0c364b351-horizon-secret-key\") pod \"horizon-655866bfb6-4l6wv\" (UID: \"52a5cc59-1e73-4e04-ba05-80f0c364b351\") " pod="openstack/horizon-655866bfb6-4l6wv" Dec 09 09:05:43 crc kubenswrapper[4786]: 
I1209 09:05:43.221138 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j55kz\" (UniqueName: \"kubernetes.io/projected/52a5cc59-1e73-4e04-ba05-80f0c364b351-kube-api-access-j55kz\") pod \"horizon-655866bfb6-4l6wv\" (UID: \"52a5cc59-1e73-4e04-ba05-80f0c364b351\") " pod="openstack/horizon-655866bfb6-4l6wv" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.243710 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/52a5cc59-1e73-4e04-ba05-80f0c364b351-config-data\") pod \"horizon-655866bfb6-4l6wv\" (UID: \"52a5cc59-1e73-4e04-ba05-80f0c364b351\") " pod="openstack/horizon-655866bfb6-4l6wv" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.243982 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52a5cc59-1e73-4e04-ba05-80f0c364b351-scripts\") pod \"horizon-655866bfb6-4l6wv\" (UID: \"52a5cc59-1e73-4e04-ba05-80f0c364b351\") " pod="openstack/horizon-655866bfb6-4l6wv" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.244019 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/52a5cc59-1e73-4e04-ba05-80f0c364b351-horizon-tls-certs\") pod \"horizon-655866bfb6-4l6wv\" (UID: \"52a5cc59-1e73-4e04-ba05-80f0c364b351\") " pod="openstack/horizon-655866bfb6-4l6wv" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.244248 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52a5cc59-1e73-4e04-ba05-80f0c364b351-logs\") pod \"horizon-655866bfb6-4l6wv\" (UID: \"52a5cc59-1e73-4e04-ba05-80f0c364b351\") " pod="openstack/horizon-655866bfb6-4l6wv" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.238743 
4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7bb95c86c4-js54w" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.352372 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52a5cc59-1e73-4e04-ba05-80f0c364b351-combined-ca-bundle\") pod \"horizon-655866bfb6-4l6wv\" (UID: \"52a5cc59-1e73-4e04-ba05-80f0c364b351\") " pod="openstack/horizon-655866bfb6-4l6wv" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.352543 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/52a5cc59-1e73-4e04-ba05-80f0c364b351-horizon-secret-key\") pod \"horizon-655866bfb6-4l6wv\" (UID: \"52a5cc59-1e73-4e04-ba05-80f0c364b351\") " pod="openstack/horizon-655866bfb6-4l6wv" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.352665 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j55kz\" (UniqueName: \"kubernetes.io/projected/52a5cc59-1e73-4e04-ba05-80f0c364b351-kube-api-access-j55kz\") pod \"horizon-655866bfb6-4l6wv\" (UID: \"52a5cc59-1e73-4e04-ba05-80f0c364b351\") " pod="openstack/horizon-655866bfb6-4l6wv" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.352722 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/52a5cc59-1e73-4e04-ba05-80f0c364b351-config-data\") pod \"horizon-655866bfb6-4l6wv\" (UID: \"52a5cc59-1e73-4e04-ba05-80f0c364b351\") " pod="openstack/horizon-655866bfb6-4l6wv" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.352834 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52a5cc59-1e73-4e04-ba05-80f0c364b351-scripts\") pod \"horizon-655866bfb6-4l6wv\" (UID: \"52a5cc59-1e73-4e04-ba05-80f0c364b351\") " 
pod="openstack/horizon-655866bfb6-4l6wv" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.352866 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/52a5cc59-1e73-4e04-ba05-80f0c364b351-horizon-tls-certs\") pod \"horizon-655866bfb6-4l6wv\" (UID: \"52a5cc59-1e73-4e04-ba05-80f0c364b351\") " pod="openstack/horizon-655866bfb6-4l6wv" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.353016 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52a5cc59-1e73-4e04-ba05-80f0c364b351-logs\") pod \"horizon-655866bfb6-4l6wv\" (UID: \"52a5cc59-1e73-4e04-ba05-80f0c364b351\") " pod="openstack/horizon-655866bfb6-4l6wv" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.390949 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/52a5cc59-1e73-4e04-ba05-80f0c364b351-logs\") pod \"horizon-655866bfb6-4l6wv\" (UID: \"52a5cc59-1e73-4e04-ba05-80f0c364b351\") " pod="openstack/horizon-655866bfb6-4l6wv" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.392363 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/52a5cc59-1e73-4e04-ba05-80f0c364b351-config-data\") pod \"horizon-655866bfb6-4l6wv\" (UID: \"52a5cc59-1e73-4e04-ba05-80f0c364b351\") " pod="openstack/horizon-655866bfb6-4l6wv" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.392931 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/52a5cc59-1e73-4e04-ba05-80f0c364b351-scripts\") pod \"horizon-655866bfb6-4l6wv\" (UID: \"52a5cc59-1e73-4e04-ba05-80f0c364b351\") " pod="openstack/horizon-655866bfb6-4l6wv" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.421833 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/52a5cc59-1e73-4e04-ba05-80f0c364b351-combined-ca-bundle\") pod \"horizon-655866bfb6-4l6wv\" (UID: \"52a5cc59-1e73-4e04-ba05-80f0c364b351\") " pod="openstack/horizon-655866bfb6-4l6wv" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.422928 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/52a5cc59-1e73-4e04-ba05-80f0c364b351-horizon-secret-key\") pod \"horizon-655866bfb6-4l6wv\" (UID: \"52a5cc59-1e73-4e04-ba05-80f0c364b351\") " pod="openstack/horizon-655866bfb6-4l6wv" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.442865 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j55kz\" (UniqueName: \"kubernetes.io/projected/52a5cc59-1e73-4e04-ba05-80f0c364b351-kube-api-access-j55kz\") pod \"horizon-655866bfb6-4l6wv\" (UID: \"52a5cc59-1e73-4e04-ba05-80f0c364b351\") " pod="openstack/horizon-655866bfb6-4l6wv" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.519906 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-9ddvk"] Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.520582 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/52a5cc59-1e73-4e04-ba05-80f0c364b351-horizon-tls-certs\") pod \"horizon-655866bfb6-4l6wv\" (UID: \"52a5cc59-1e73-4e04-ba05-80f0c364b351\") " pod="openstack/horizon-655866bfb6-4l6wv" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.522540 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-655866bfb6-4l6wv" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.530560 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-9ddvk"] Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.530817 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-9ddvk" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.536483 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-jsl5j"] Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.539133 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-jsl5j" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.540027 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.540272 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.542127 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-hjqlg" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.545359 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-jsl5j"] Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.545738 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.556351 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-mrw22" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.627955 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ccsg\" (UniqueName: \"kubernetes.io/projected/e3272921-6cce-4156-bed5-758d1a8a38f5-kube-api-access-9ccsg\") pod \"barbican-db-sync-jsl5j\" (UID: \"e3272921-6cce-4156-bed5-758d1a8a38f5\") " pod="openstack/barbican-db-sync-jsl5j" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.628048 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/8c28b549-bfff-47f7-b262-c3203bd88cb1-etc-machine-id\") pod \"cinder-db-sync-9ddvk\" (UID: \"8c28b549-bfff-47f7-b262-c3203bd88cb1\") " pod="openstack/cinder-db-sync-9ddvk" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.628139 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8c28b549-bfff-47f7-b262-c3203bd88cb1-db-sync-config-data\") pod \"cinder-db-sync-9ddvk\" (UID: \"8c28b549-bfff-47f7-b262-c3203bd88cb1\") " pod="openstack/cinder-db-sync-9ddvk" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.628159 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz2nh\" (UniqueName: \"kubernetes.io/projected/8c28b549-bfff-47f7-b262-c3203bd88cb1-kube-api-access-kz2nh\") pod \"cinder-db-sync-9ddvk\" (UID: \"8c28b549-bfff-47f7-b262-c3203bd88cb1\") " pod="openstack/cinder-db-sync-9ddvk" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.628186 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c28b549-bfff-47f7-b262-c3203bd88cb1-config-data\") pod \"cinder-db-sync-9ddvk\" (UID: \"8c28b549-bfff-47f7-b262-c3203bd88cb1\") " pod="openstack/cinder-db-sync-9ddvk" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.628257 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e3272921-6cce-4156-bed5-758d1a8a38f5-db-sync-config-data\") pod \"barbican-db-sync-jsl5j\" (UID: \"e3272921-6cce-4156-bed5-758d1a8a38f5\") " pod="openstack/barbican-db-sync-jsl5j" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.628279 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e3272921-6cce-4156-bed5-758d1a8a38f5-combined-ca-bundle\") pod \"barbican-db-sync-jsl5j\" (UID: \"e3272921-6cce-4156-bed5-758d1a8a38f5\") " pod="openstack/barbican-db-sync-jsl5j" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.628301 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c28b549-bfff-47f7-b262-c3203bd88cb1-scripts\") pod \"cinder-db-sync-9ddvk\" (UID: \"8c28b549-bfff-47f7-b262-c3203bd88cb1\") " pod="openstack/cinder-db-sync-9ddvk" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.628332 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c28b549-bfff-47f7-b262-c3203bd88cb1-combined-ca-bundle\") pod \"cinder-db-sync-9ddvk\" (UID: \"8c28b549-bfff-47f7-b262-c3203bd88cb1\") " pod="openstack/cinder-db-sync-9ddvk" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.733276 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c28b549-bfff-47f7-b262-c3203bd88cb1-combined-ca-bundle\") pod \"cinder-db-sync-9ddvk\" (UID: \"8c28b549-bfff-47f7-b262-c3203bd88cb1\") " pod="openstack/cinder-db-sync-9ddvk" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.733564 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ccsg\" (UniqueName: \"kubernetes.io/projected/e3272921-6cce-4156-bed5-758d1a8a38f5-kube-api-access-9ccsg\") pod \"barbican-db-sync-jsl5j\" (UID: \"e3272921-6cce-4156-bed5-758d1a8a38f5\") " pod="openstack/barbican-db-sync-jsl5j" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.733658 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8c28b549-bfff-47f7-b262-c3203bd88cb1-etc-machine-id\") 
pod \"cinder-db-sync-9ddvk\" (UID: \"8c28b549-bfff-47f7-b262-c3203bd88cb1\") " pod="openstack/cinder-db-sync-9ddvk" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.733778 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8c28b549-bfff-47f7-b262-c3203bd88cb1-db-sync-config-data\") pod \"cinder-db-sync-9ddvk\" (UID: \"8c28b549-bfff-47f7-b262-c3203bd88cb1\") " pod="openstack/cinder-db-sync-9ddvk" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.733869 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz2nh\" (UniqueName: \"kubernetes.io/projected/8c28b549-bfff-47f7-b262-c3203bd88cb1-kube-api-access-kz2nh\") pod \"cinder-db-sync-9ddvk\" (UID: \"8c28b549-bfff-47f7-b262-c3203bd88cb1\") " pod="openstack/cinder-db-sync-9ddvk" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.733974 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c28b549-bfff-47f7-b262-c3203bd88cb1-config-data\") pod \"cinder-db-sync-9ddvk\" (UID: \"8c28b549-bfff-47f7-b262-c3203bd88cb1\") " pod="openstack/cinder-db-sync-9ddvk" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.734196 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e3272921-6cce-4156-bed5-758d1a8a38f5-db-sync-config-data\") pod \"barbican-db-sync-jsl5j\" (UID: \"e3272921-6cce-4156-bed5-758d1a8a38f5\") " pod="openstack/barbican-db-sync-jsl5j" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.734272 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3272921-6cce-4156-bed5-758d1a8a38f5-combined-ca-bundle\") pod \"barbican-db-sync-jsl5j\" (UID: \"e3272921-6cce-4156-bed5-758d1a8a38f5\") " 
pod="openstack/barbican-db-sync-jsl5j" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.734343 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c28b549-bfff-47f7-b262-c3203bd88cb1-scripts\") pod \"cinder-db-sync-9ddvk\" (UID: \"8c28b549-bfff-47f7-b262-c3203bd88cb1\") " pod="openstack/cinder-db-sync-9ddvk" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.739589 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8c28b549-bfff-47f7-b262-c3203bd88cb1-etc-machine-id\") pod \"cinder-db-sync-9ddvk\" (UID: \"8c28b549-bfff-47f7-b262-c3203bd88cb1\") " pod="openstack/cinder-db-sync-9ddvk" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.740167 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8c28b549-bfff-47f7-b262-c3203bd88cb1-db-sync-config-data\") pod \"cinder-db-sync-9ddvk\" (UID: \"8c28b549-bfff-47f7-b262-c3203bd88cb1\") " pod="openstack/cinder-db-sync-9ddvk" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.742593 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c28b549-bfff-47f7-b262-c3203bd88cb1-combined-ca-bundle\") pod \"cinder-db-sync-9ddvk\" (UID: \"8c28b549-bfff-47f7-b262-c3203bd88cb1\") " pod="openstack/cinder-db-sync-9ddvk" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.745946 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c28b549-bfff-47f7-b262-c3203bd88cb1-scripts\") pod \"cinder-db-sync-9ddvk\" (UID: \"8c28b549-bfff-47f7-b262-c3203bd88cb1\") " pod="openstack/cinder-db-sync-9ddvk" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.747647 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/8c28b549-bfff-47f7-b262-c3203bd88cb1-config-data\") pod \"cinder-db-sync-9ddvk\" (UID: \"8c28b549-bfff-47f7-b262-c3203bd88cb1\") " pod="openstack/cinder-db-sync-9ddvk" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.748649 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e3272921-6cce-4156-bed5-758d1a8a38f5-db-sync-config-data\") pod \"barbican-db-sync-jsl5j\" (UID: \"e3272921-6cce-4156-bed5-758d1a8a38f5\") " pod="openstack/barbican-db-sync-jsl5j" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.752613 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3272921-6cce-4156-bed5-758d1a8a38f5-combined-ca-bundle\") pod \"barbican-db-sync-jsl5j\" (UID: \"e3272921-6cce-4156-bed5-758d1a8a38f5\") " pod="openstack/barbican-db-sync-jsl5j" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.757522 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ccsg\" (UniqueName: \"kubernetes.io/projected/e3272921-6cce-4156-bed5-758d1a8a38f5-kube-api-access-9ccsg\") pod \"barbican-db-sync-jsl5j\" (UID: \"e3272921-6cce-4156-bed5-758d1a8a38f5\") " pod="openstack/barbican-db-sync-jsl5j" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.766096 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz2nh\" (UniqueName: \"kubernetes.io/projected/8c28b549-bfff-47f7-b262-c3203bd88cb1-kube-api-access-kz2nh\") pod \"cinder-db-sync-9ddvk\" (UID: \"8c28b549-bfff-47f7-b262-c3203bd88cb1\") " pod="openstack/cinder-db-sync-9ddvk" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.902977 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-9ddvk" Dec 09 09:05:43 crc kubenswrapper[4786]: I1209 09:05:43.910524 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-jsl5j" Dec 09 09:05:44 crc kubenswrapper[4786]: I1209 09:05:44.826655 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5db5d4bd9f-bjbhg" Dec 09 09:05:44 crc kubenswrapper[4786]: I1209 09:05:44.889056 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bd8458cdf-vnbxl"] Dec 09 09:05:44 crc kubenswrapper[4786]: I1209 09:05:44.889482 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bd8458cdf-vnbxl" podUID="9dbbbcd2-908b-424f-b7e0-36ae5db9f92c" containerName="dnsmasq-dns" containerID="cri-o://4d8633ed1b943d2e8543a48ffd3f0783b1cd8ad529d1cd89afb200d979505eb7" gracePeriod=10 Dec 09 09:05:45 crc kubenswrapper[4786]: I1209 09:05:45.588046 4786 generic.go:334] "Generic (PLEG): container finished" podID="9dbbbcd2-908b-424f-b7e0-36ae5db9f92c" containerID="4d8633ed1b943d2e8543a48ffd3f0783b1cd8ad529d1cd89afb200d979505eb7" exitCode=0 Dec 09 09:05:45 crc kubenswrapper[4786]: I1209 09:05:45.588158 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bd8458cdf-vnbxl" event={"ID":"9dbbbcd2-908b-424f-b7e0-36ae5db9f92c","Type":"ContainerDied","Data":"4d8633ed1b943d2e8543a48ffd3f0783b1cd8ad529d1cd89afb200d979505eb7"} Dec 09 09:05:47 crc kubenswrapper[4786]: I1209 09:05:47.609384 4786 generic.go:334] "Generic (PLEG): container finished" podID="40cac51c-95d7-4eb4-8445-f83c35202ed4" containerID="00d414414fa82fdc80f7f1deb244a6a0c2ec6c083db69e22bceffe4abaf9dab2" exitCode=0 Dec 09 09:05:47 crc kubenswrapper[4786]: I1209 09:05:47.609824 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4pnnk" 
event={"ID":"40cac51c-95d7-4eb4-8445-f83c35202ed4","Type":"ContainerDied","Data":"00d414414fa82fdc80f7f1deb244a6a0c2ec6c083db69e22bceffe4abaf9dab2"} Dec 09 09:05:48 crc kubenswrapper[4786]: I1209 09:05:48.854755 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5bd8458cdf-vnbxl" podUID="9dbbbcd2-908b-424f-b7e0-36ae5db9f92c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.134:5353: connect: connection refused" Dec 09 09:05:53 crc kubenswrapper[4786]: E1209 09:05:53.697583 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.200:5001/podified-master-centos10/openstack-placement-api:watcher_latest" Dec 09 09:05:53 crc kubenswrapper[4786]: E1209 09:05:53.698543 4786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.200:5001/podified-master-centos10/openstack-placement-api:watcher_latest" Dec 09 09:05:53 crc kubenswrapper[4786]: E1209 09:05:53.698848 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:38.102.83.200:5001/podified-master-centos10/openstack-placement-api:watcher_latest,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ss7ch,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
placement-db-sync-w9zct_openstack(39bcb4fa-81ad-4ec5-8168-efcb8da5ab61): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 09:05:53 crc kubenswrapper[4786]: E1209 09:05:53.700153 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-w9zct" podUID="39bcb4fa-81ad-4ec5-8168-efcb8da5ab61" Dec 09 09:05:53 crc kubenswrapper[4786]: E1209 09:05:53.751975 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.200:5001/podified-master-centos10/openstack-horizon:watcher_latest" Dec 09 09:05:53 crc kubenswrapper[4786]: E1209 09:05:53.752074 4786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.200:5001/podified-master-centos10/openstack-horizon:watcher_latest" Dec 09 09:05:53 crc kubenswrapper[4786]: E1209 09:05:53.752316 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.102.83.200:5001/podified-master-centos10/openstack-horizon:watcher_latest,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n85h557h597h599hdch559hch579h558hb7h8bhdfhdfh684h658h64bh5bbhddh5dh5f6h9bh689h569h86h577h575hb8h678h7h87h59bh65fq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j9l8h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-5cc88d847f-95rbt_openstack(20f84ee9-c044-45eb-830c-94578b1af666): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 09:05:53 crc kubenswrapper[4786]: E1209 
09:05:53.772630 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.200:5001/podified-master-centos10/openstack-horizon:watcher_latest" Dec 09 09:05:53 crc kubenswrapper[4786]: E1209 09:05:53.772702 4786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.200:5001/podified-master-centos10/openstack-horizon:watcher_latest" Dec 09 09:05:53 crc kubenswrapper[4786]: E1209 09:05:53.772929 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:38.102.83.200:5001/podified-master-centos10/openstack-horizon:watcher_latest,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n657h64bhcfh5ffh58h54bhc9hb6h68dhf8h5d5h5d8h684h5bh58bhdchcch5bbh58dhd6h5fch588h65ch8h556hd7h5d7h5bbh648hc5h4hdcq,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hhq6q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,Imag
ePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-694bf999df-qrh2d_openstack(8985b7e0-0907-4cf8-ba9d-f75c5ff668da): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 09:05:53 crc kubenswrapper[4786]: I1209 09:05:53.841873 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-n2tp7" Dec 09 09:05:53 crc kubenswrapper[4786]: I1209 09:05:53.851066 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-4pnnk" Dec 09 09:05:53 crc kubenswrapper[4786]: I1209 09:05:53.889567 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40cac51c-95d7-4eb4-8445-f83c35202ed4-scripts\") pod \"40cac51c-95d7-4eb4-8445-f83c35202ed4\" (UID: \"40cac51c-95d7-4eb4-8445-f83c35202ed4\") " Dec 09 09:05:53 crc kubenswrapper[4786]: I1209 09:05:53.889633 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67234e4f-66db-4e88-9da8-18841f39d886-config-data\") pod \"67234e4f-66db-4e88-9da8-18841f39d886\" (UID: \"67234e4f-66db-4e88-9da8-18841f39d886\") " Dec 09 09:05:53 crc kubenswrapper[4786]: I1209 09:05:53.889702 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40cac51c-95d7-4eb4-8445-f83c35202ed4-combined-ca-bundle\") pod \"40cac51c-95d7-4eb4-8445-f83c35202ed4\" (UID: \"40cac51c-95d7-4eb4-8445-f83c35202ed4\") " Dec 09 09:05:53 crc kubenswrapper[4786]: I1209 09:05:53.889726 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67234e4f-66db-4e88-9da8-18841f39d886-combined-ca-bundle\") pod \"67234e4f-66db-4e88-9da8-18841f39d886\" (UID: \"67234e4f-66db-4e88-9da8-18841f39d886\") " Dec 09 09:05:53 crc kubenswrapper[4786]: I1209 09:05:53.889757 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5fb5\" (UniqueName: \"kubernetes.io/projected/67234e4f-66db-4e88-9da8-18841f39d886-kube-api-access-d5fb5\") pod \"67234e4f-66db-4e88-9da8-18841f39d886\" (UID: \"67234e4f-66db-4e88-9da8-18841f39d886\") " Dec 09 09:05:53 crc kubenswrapper[4786]: I1209 09:05:53.889819 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" 
(UniqueName: \"kubernetes.io/secret/40cac51c-95d7-4eb4-8445-f83c35202ed4-fernet-keys\") pod \"40cac51c-95d7-4eb4-8445-f83c35202ed4\" (UID: \"40cac51c-95d7-4eb4-8445-f83c35202ed4\") " Dec 09 09:05:53 crc kubenswrapper[4786]: I1209 09:05:53.889876 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjc9q\" (UniqueName: \"kubernetes.io/projected/40cac51c-95d7-4eb4-8445-f83c35202ed4-kube-api-access-tjc9q\") pod \"40cac51c-95d7-4eb4-8445-f83c35202ed4\" (UID: \"40cac51c-95d7-4eb4-8445-f83c35202ed4\") " Dec 09 09:05:53 crc kubenswrapper[4786]: I1209 09:05:53.889899 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/40cac51c-95d7-4eb4-8445-f83c35202ed4-credential-keys\") pod \"40cac51c-95d7-4eb4-8445-f83c35202ed4\" (UID: \"40cac51c-95d7-4eb4-8445-f83c35202ed4\") " Dec 09 09:05:53 crc kubenswrapper[4786]: I1209 09:05:53.889967 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/67234e4f-66db-4e88-9da8-18841f39d886-db-sync-config-data\") pod \"67234e4f-66db-4e88-9da8-18841f39d886\" (UID: \"67234e4f-66db-4e88-9da8-18841f39d886\") " Dec 09 09:05:53 crc kubenswrapper[4786]: I1209 09:05:53.890030 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40cac51c-95d7-4eb4-8445-f83c35202ed4-config-data\") pod \"40cac51c-95d7-4eb4-8445-f83c35202ed4\" (UID: \"40cac51c-95d7-4eb4-8445-f83c35202ed4\") " Dec 09 09:05:53 crc kubenswrapper[4786]: I1209 09:05:53.897904 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40cac51c-95d7-4eb4-8445-f83c35202ed4-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "40cac51c-95d7-4eb4-8445-f83c35202ed4" (UID: "40cac51c-95d7-4eb4-8445-f83c35202ed4"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:05:53 crc kubenswrapper[4786]: I1209 09:05:53.899642 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40cac51c-95d7-4eb4-8445-f83c35202ed4-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "40cac51c-95d7-4eb4-8445-f83c35202ed4" (UID: "40cac51c-95d7-4eb4-8445-f83c35202ed4"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:05:53 crc kubenswrapper[4786]: I1209 09:05:53.909799 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40cac51c-95d7-4eb4-8445-f83c35202ed4-scripts" (OuterVolumeSpecName: "scripts") pod "40cac51c-95d7-4eb4-8445-f83c35202ed4" (UID: "40cac51c-95d7-4eb4-8445-f83c35202ed4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:05:53 crc kubenswrapper[4786]: I1209 09:05:53.909859 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40cac51c-95d7-4eb4-8445-f83c35202ed4-kube-api-access-tjc9q" (OuterVolumeSpecName: "kube-api-access-tjc9q") pod "40cac51c-95d7-4eb4-8445-f83c35202ed4" (UID: "40cac51c-95d7-4eb4-8445-f83c35202ed4"). InnerVolumeSpecName "kube-api-access-tjc9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:05:53 crc kubenswrapper[4786]: I1209 09:05:53.909960 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67234e4f-66db-4e88-9da8-18841f39d886-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "67234e4f-66db-4e88-9da8-18841f39d886" (UID: "67234e4f-66db-4e88-9da8-18841f39d886"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:05:53 crc kubenswrapper[4786]: I1209 09:05:53.930649 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67234e4f-66db-4e88-9da8-18841f39d886-kube-api-access-d5fb5" (OuterVolumeSpecName: "kube-api-access-d5fb5") pod "67234e4f-66db-4e88-9da8-18841f39d886" (UID: "67234e4f-66db-4e88-9da8-18841f39d886"). InnerVolumeSpecName "kube-api-access-d5fb5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:05:53 crc kubenswrapper[4786]: I1209 09:05:53.995096 4786 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40cac51c-95d7-4eb4-8445-f83c35202ed4-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 09:05:53 crc kubenswrapper[4786]: I1209 09:05:53.995716 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5fb5\" (UniqueName: \"kubernetes.io/projected/67234e4f-66db-4e88-9da8-18841f39d886-kube-api-access-d5fb5\") on node \"crc\" DevicePath \"\"" Dec 09 09:05:53 crc kubenswrapper[4786]: I1209 09:05:53.995781 4786 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/40cac51c-95d7-4eb4-8445-f83c35202ed4-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 09 09:05:53 crc kubenswrapper[4786]: I1209 09:05:53.995797 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjc9q\" (UniqueName: \"kubernetes.io/projected/40cac51c-95d7-4eb4-8445-f83c35202ed4-kube-api-access-tjc9q\") on node \"crc\" DevicePath \"\"" Dec 09 09:05:53 crc kubenswrapper[4786]: I1209 09:05:53.995813 4786 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/40cac51c-95d7-4eb4-8445-f83c35202ed4-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 09 09:05:53 crc kubenswrapper[4786]: I1209 09:05:53.995846 4786 reconciler_common.go:293] "Volume detached for volume 
\"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/67234e4f-66db-4e88-9da8-18841f39d886-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 09:05:54 crc kubenswrapper[4786]: I1209 09:05:54.016635 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40cac51c-95d7-4eb4-8445-f83c35202ed4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "40cac51c-95d7-4eb4-8445-f83c35202ed4" (UID: "40cac51c-95d7-4eb4-8445-f83c35202ed4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:05:54 crc kubenswrapper[4786]: I1209 09:05:54.056771 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67234e4f-66db-4e88-9da8-18841f39d886-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "67234e4f-66db-4e88-9da8-18841f39d886" (UID: "67234e4f-66db-4e88-9da8-18841f39d886"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:05:54 crc kubenswrapper[4786]: I1209 09:05:54.063591 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67234e4f-66db-4e88-9da8-18841f39d886-config-data" (OuterVolumeSpecName: "config-data") pod "67234e4f-66db-4e88-9da8-18841f39d886" (UID: "67234e4f-66db-4e88-9da8-18841f39d886"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:05:54 crc kubenswrapper[4786]: I1209 09:05:54.073187 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40cac51c-95d7-4eb4-8445-f83c35202ed4-config-data" (OuterVolumeSpecName: "config-data") pod "40cac51c-95d7-4eb4-8445-f83c35202ed4" (UID: "40cac51c-95d7-4eb4-8445-f83c35202ed4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:05:54 crc kubenswrapper[4786]: I1209 09:05:54.107229 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67234e4f-66db-4e88-9da8-18841f39d886-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 09:05:54 crc kubenswrapper[4786]: I1209 09:05:54.107265 4786 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/40cac51c-95d7-4eb4-8445-f83c35202ed4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 09:05:54 crc kubenswrapper[4786]: I1209 09:05:54.107279 4786 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67234e4f-66db-4e88-9da8-18841f39d886-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 09:05:54 crc kubenswrapper[4786]: I1209 09:05:54.107289 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40cac51c-95d7-4eb4-8445-f83c35202ed4-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 09:05:54 crc kubenswrapper[4786]: I1209 09:05:54.246038 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bd8458cdf-vnbxl" Dec 09 09:05:54 crc kubenswrapper[4786]: I1209 09:05:54.313262 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwq25\" (UniqueName: \"kubernetes.io/projected/9dbbbcd2-908b-424f-b7e0-36ae5db9f92c-kube-api-access-jwq25\") pod \"9dbbbcd2-908b-424f-b7e0-36ae5db9f92c\" (UID: \"9dbbbcd2-908b-424f-b7e0-36ae5db9f92c\") " Dec 09 09:05:54 crc kubenswrapper[4786]: I1209 09:05:54.313453 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9dbbbcd2-908b-424f-b7e0-36ae5db9f92c-dns-swift-storage-0\") pod \"9dbbbcd2-908b-424f-b7e0-36ae5db9f92c\" (UID: \"9dbbbcd2-908b-424f-b7e0-36ae5db9f92c\") " Dec 09 09:05:54 crc kubenswrapper[4786]: I1209 09:05:54.313532 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9dbbbcd2-908b-424f-b7e0-36ae5db9f92c-ovsdbserver-sb\") pod \"9dbbbcd2-908b-424f-b7e0-36ae5db9f92c\" (UID: \"9dbbbcd2-908b-424f-b7e0-36ae5db9f92c\") " Dec 09 09:05:54 crc kubenswrapper[4786]: I1209 09:05:54.313644 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9dbbbcd2-908b-424f-b7e0-36ae5db9f92c-ovsdbserver-nb\") pod \"9dbbbcd2-908b-424f-b7e0-36ae5db9f92c\" (UID: \"9dbbbcd2-908b-424f-b7e0-36ae5db9f92c\") " Dec 09 09:05:54 crc kubenswrapper[4786]: I1209 09:05:54.313672 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dbbbcd2-908b-424f-b7e0-36ae5db9f92c-config\") pod \"9dbbbcd2-908b-424f-b7e0-36ae5db9f92c\" (UID: \"9dbbbcd2-908b-424f-b7e0-36ae5db9f92c\") " Dec 09 09:05:54 crc kubenswrapper[4786]: I1209 09:05:54.313719 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9dbbbcd2-908b-424f-b7e0-36ae5db9f92c-dns-svc\") pod \"9dbbbcd2-908b-424f-b7e0-36ae5db9f92c\" (UID: \"9dbbbcd2-908b-424f-b7e0-36ae5db9f92c\") " Dec 09 09:05:54 crc kubenswrapper[4786]: I1209 09:05:54.337958 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dbbbcd2-908b-424f-b7e0-36ae5db9f92c-kube-api-access-jwq25" (OuterVolumeSpecName: "kube-api-access-jwq25") pod "9dbbbcd2-908b-424f-b7e0-36ae5db9f92c" (UID: "9dbbbcd2-908b-424f-b7e0-36ae5db9f92c"). InnerVolumeSpecName "kube-api-access-jwq25". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:05:54 crc kubenswrapper[4786]: E1209 09:05:54.404368 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/horizon-5cc88d847f-95rbt" podUID="20f84ee9-c044-45eb-830c-94578b1af666" Dec 09 09:05:54 crc kubenswrapper[4786]: I1209 09:05:54.428635 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwq25\" (UniqueName: \"kubernetes.io/projected/9dbbbcd2-908b-424f-b7e0-36ae5db9f92c-kube-api-access-jwq25\") on node \"crc\" DevicePath \"\"" Dec 09 09:05:54 crc kubenswrapper[4786]: I1209 09:05:54.492716 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9dbbbcd2-908b-424f-b7e0-36ae5db9f92c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9dbbbcd2-908b-424f-b7e0-36ae5db9f92c" (UID: "9dbbbcd2-908b-424f-b7e0-36ae5db9f92c"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:05:54 crc kubenswrapper[4786]: I1209 09:05:54.495856 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9dbbbcd2-908b-424f-b7e0-36ae5db9f92c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9dbbbcd2-908b-424f-b7e0-36ae5db9f92c" (UID: "9dbbbcd2-908b-424f-b7e0-36ae5db9f92c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:05:54 crc kubenswrapper[4786]: E1209 09:05:54.507748 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/horizon-694bf999df-qrh2d" podUID="8985b7e0-0907-4cf8-ba9d-f75c5ff668da" Dec 09 09:05:54 crc kubenswrapper[4786]: I1209 09:05:54.520404 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9dbbbcd2-908b-424f-b7e0-36ae5db9f92c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9dbbbcd2-908b-424f-b7e0-36ae5db9f92c" (UID: "9dbbbcd2-908b-424f-b7e0-36ae5db9f92c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:05:54 crc kubenswrapper[4786]: I1209 09:05:54.529615 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9dbbbcd2-908b-424f-b7e0-36ae5db9f92c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9dbbbcd2-908b-424f-b7e0-36ae5db9f92c" (UID: "9dbbbcd2-908b-424f-b7e0-36ae5db9f92c"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:05:54 crc kubenswrapper[4786]: I1209 09:05:54.532186 4786 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9dbbbcd2-908b-424f-b7e0-36ae5db9f92c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 09:05:54 crc kubenswrapper[4786]: I1209 09:05:54.532211 4786 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9dbbbcd2-908b-424f-b7e0-36ae5db9f92c-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 09:05:54 crc kubenswrapper[4786]: I1209 09:05:54.532225 4786 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9dbbbcd2-908b-424f-b7e0-36ae5db9f92c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 09 09:05:54 crc kubenswrapper[4786]: I1209 09:05:54.532237 4786 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9dbbbcd2-908b-424f-b7e0-36ae5db9f92c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 09:05:54 crc kubenswrapper[4786]: I1209 09:05:54.535294 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9dbbbcd2-908b-424f-b7e0-36ae5db9f92c-config" (OuterVolumeSpecName: "config") pod "9dbbbcd2-908b-424f-b7e0-36ae5db9f92c" (UID: "9dbbbcd2-908b-424f-b7e0-36ae5db9f92c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:05:54 crc kubenswrapper[4786]: I1209 09:05:54.609162 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-655866bfb6-4l6wv"] Dec 09 09:05:54 crc kubenswrapper[4786]: I1209 09:05:54.637351 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9dbbbcd2-908b-424f-b7e0-36ae5db9f92c-config\") on node \"crc\" DevicePath \"\"" Dec 09 09:05:54 crc kubenswrapper[4786]: I1209 09:05:54.651905 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7bb95c86c4-js54w"] Dec 09 09:05:54 crc kubenswrapper[4786]: I1209 09:05:54.668148 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-e5b7-account-create-4mwmr"] Dec 09 09:05:54 crc kubenswrapper[4786]: I1209 09:05:54.681542 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-9ddvk"] Dec 09 09:05:54 crc kubenswrapper[4786]: W1209 09:05:54.685808 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode040fc5c_c169_4e21_9158_fee40fcb1f6e.slice/crio-316514d6250e6bc6db8b4fc80903dfa3c4bfdfadbd9b9dfa932ea43c7d0ec906 WatchSource:0}: Error finding container 316514d6250e6bc6db8b4fc80903dfa3c4bfdfadbd9b9dfa932ea43c7d0ec906: Status 404 returned error can't find the container with id 316514d6250e6bc6db8b4fc80903dfa3c4bfdfadbd9b9dfa932ea43c7d0ec906 Dec 09 09:05:54 crc kubenswrapper[4786]: I1209 09:05:54.688705 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-n2tp7" event={"ID":"67234e4f-66db-4e88-9da8-18841f39d886","Type":"ContainerDied","Data":"f4f3a988e9917233202bdb44ff56ca874300c28b52bcb1c9c79cb208a4bc827c"} Dec 09 09:05:54 crc kubenswrapper[4786]: I1209 09:05:54.688769 4786 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="f4f3a988e9917233202bdb44ff56ca874300c28b52bcb1c9c79cb208a4bc827c" Dec 09 09:05:54 crc kubenswrapper[4786]: I1209 09:05:54.688848 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-n2tp7" Dec 09 09:05:54 crc kubenswrapper[4786]: W1209 09:05:54.689055 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8c28b549_bfff_47f7_b262_c3203bd88cb1.slice/crio-791444c398922b7702158ca0b7a6260a2420c6d680e1e82328689a9c295117f0 WatchSource:0}: Error finding container 791444c398922b7702158ca0b7a6260a2420c6d680e1e82328689a9c295117f0: Status 404 returned error can't find the container with id 791444c398922b7702158ca0b7a6260a2420c6d680e1e82328689a9c295117f0 Dec 09 09:05:54 crc kubenswrapper[4786]: I1209 09:05:54.697685 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bd8458cdf-vnbxl" event={"ID":"9dbbbcd2-908b-424f-b7e0-36ae5db9f92c","Type":"ContainerDied","Data":"6cc33d4b2c934fa8e6d1f4833c7aad4dabb4699b8bbaa0e063b4d619c515c61e"} Dec 09 09:05:54 crc kubenswrapper[4786]: I1209 09:05:54.697740 4786 scope.go:117] "RemoveContainer" containerID="4d8633ed1b943d2e8543a48ffd3f0783b1cd8ad529d1cd89afb200d979505eb7" Dec 09 09:05:54 crc kubenswrapper[4786]: I1209 09:05:54.697871 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bd8458cdf-vnbxl" Dec 09 09:05:54 crc kubenswrapper[4786]: I1209 09:05:54.708879 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-694bf999df-qrh2d" event={"ID":"8985b7e0-0907-4cf8-ba9d-f75c5ff668da","Type":"ContainerStarted","Data":"207f8077a13e9cc69a9aad5e73a895b3969ee5b203739d5e5dcb4e73f95b9625"} Dec 09 09:05:54 crc kubenswrapper[4786]: I1209 09:05:54.708877 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-694bf999df-qrh2d" podUID="8985b7e0-0907-4cf8-ba9d-f75c5ff668da" containerName="horizon" containerID="cri-o://207f8077a13e9cc69a9aad5e73a895b3969ee5b203739d5e5dcb4e73f95b9625" gracePeriod=30 Dec 09 09:05:54 crc kubenswrapper[4786]: I1209 09:05:54.713333 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f61efc97-8443-4381-85d3-7b6dd9e5b132","Type":"ContainerStarted","Data":"f505c423f86bd2d7755c8086eddc837313dd1a9d693a10fd132f29ab969f967a"} Dec 09 09:05:54 crc kubenswrapper[4786]: I1209 09:05:54.716703 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-655866bfb6-4l6wv" event={"ID":"52a5cc59-1e73-4e04-ba05-80f0c364b351","Type":"ContainerStarted","Data":"c246871e1061c18accde5193eea248c696fc27f4786c0fef41fd8a8be20f42f7"} Dec 09 09:05:54 crc kubenswrapper[4786]: I1209 09:05:54.724014 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-4pnnk" event={"ID":"40cac51c-95d7-4eb4-8445-f83c35202ed4","Type":"ContainerDied","Data":"42a849523250efd9cef1714b6b62acb6347d2759c576d6eec22d33dfbea0d0f5"} Dec 09 09:05:54 crc kubenswrapper[4786]: I1209 09:05:54.724103 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42a849523250efd9cef1714b6b62acb6347d2759c576d6eec22d33dfbea0d0f5" Dec 09 09:05:54 crc kubenswrapper[4786]: I1209 09:05:54.724039 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-4pnnk" Dec 09 09:05:54 crc kubenswrapper[4786]: I1209 09:05:54.727250 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bb95c86c4-js54w" event={"ID":"330dece7-bbfd-4d11-a979-b001581e8efe","Type":"ContainerStarted","Data":"1fda65527e147d5938f0e57493df717449fa4e5dae7ff55560b08c74c9673eb0"} Dec 09 09:05:54 crc kubenswrapper[4786]: I1209 09:05:54.747175 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f4489b979-86h4q" event={"ID":"94440ba5-1d12-4f3c-882c-2a648526482b","Type":"ContainerStarted","Data":"b8fda5fc34e5673d58791476d1af07bc51566cb08eca80d891a56fe03f6a1a12"} Dec 09 09:05:54 crc kubenswrapper[4786]: I1209 09:05:54.759996 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5cc88d847f-95rbt" event={"ID":"20f84ee9-c044-45eb-830c-94578b1af666","Type":"ContainerStarted","Data":"fa1833d56f68d97e014e0bd1ce0cc31a5b9659b9ff5fe2c8e803fb54db3dd24f"} Dec 09 09:05:54 crc kubenswrapper[4786]: I1209 09:05:54.760111 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-5cc88d847f-95rbt" podUID="20f84ee9-c044-45eb-830c-94578b1af666" containerName="horizon" containerID="cri-o://fa1833d56f68d97e014e0bd1ce0cc31a5b9659b9ff5fe2c8e803fb54db3dd24f" gracePeriod=30 Dec 09 09:05:54 crc kubenswrapper[4786]: E1209 09:05:54.772342 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.200:5001/podified-master-centos10/openstack-placement-api:watcher_latest\\\"\"" pod="openstack/placement-db-sync-w9zct" podUID="39bcb4fa-81ad-4ec5-8168-efcb8da5ab61" Dec 09 09:05:54 crc kubenswrapper[4786]: I1209 09:05:54.808331 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-jsl5j"] Dec 09 09:05:54 crc kubenswrapper[4786]: W1209 09:05:54.844101 4786 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3272921_6cce_4156_bed5_758d1a8a38f5.slice/crio-a75a5e17733ca428624c5c6a954de48b07fc89f65ea8efb818f4071188456b99 WatchSource:0}: Error finding container a75a5e17733ca428624c5c6a954de48b07fc89f65ea8efb818f4071188456b99: Status 404 returned error can't find the container with id a75a5e17733ca428624c5c6a954de48b07fc89f65ea8efb818f4071188456b99 Dec 09 09:05:54 crc kubenswrapper[4786]: I1209 09:05:54.927312 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-6m994"] Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.047338 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-4pnnk"] Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.050400 4786 scope.go:117] "RemoveContainer" containerID="e9d6d67964600b9ba3d57f4da1a778e11c32ed5aaff0351ef0cd44fb664649ad" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.056406 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-4pnnk"] Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.183828 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bd8458cdf-vnbxl"] Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.278735 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40cac51c-95d7-4eb4-8445-f83c35202ed4" path="/var/lib/kubelet/pods/40cac51c-95d7-4eb4-8445-f83c35202ed4/volumes" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.285876 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bd8458cdf-vnbxl"] Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.310341 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-frjzv"] Dec 09 09:05:55 crc kubenswrapper[4786]: E1209 09:05:55.311265 4786 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="67234e4f-66db-4e88-9da8-18841f39d886" containerName="watcher-db-sync" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.311312 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="67234e4f-66db-4e88-9da8-18841f39d886" containerName="watcher-db-sync" Dec 09 09:05:55 crc kubenswrapper[4786]: E1209 09:05:55.311339 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40cac51c-95d7-4eb4-8445-f83c35202ed4" containerName="keystone-bootstrap" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.311348 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="40cac51c-95d7-4eb4-8445-f83c35202ed4" containerName="keystone-bootstrap" Dec 09 09:05:55 crc kubenswrapper[4786]: E1209 09:05:55.311361 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dbbbcd2-908b-424f-b7e0-36ae5db9f92c" containerName="dnsmasq-dns" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.311371 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dbbbcd2-908b-424f-b7e0-36ae5db9f92c" containerName="dnsmasq-dns" Dec 09 09:05:55 crc kubenswrapper[4786]: E1209 09:05:55.311393 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dbbbcd2-908b-424f-b7e0-36ae5db9f92c" containerName="init" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.311401 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dbbbcd2-908b-424f-b7e0-36ae5db9f92c" containerName="init" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.311872 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dbbbcd2-908b-424f-b7e0-36ae5db9f92c" containerName="dnsmasq-dns" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.311905 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="67234e4f-66db-4e88-9da8-18841f39d886" containerName="watcher-db-sync" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.311921 4786 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="40cac51c-95d7-4eb4-8445-f83c35202ed4" containerName="keystone-bootstrap" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.313037 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-frjzv" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.317278 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.317734 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.318211 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.318580 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zrhs7" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.350177 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-frjzv"] Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.389586 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtglm\" (UniqueName: \"kubernetes.io/projected/36cfdfac-acea-4bea-95f3-7221ffc3d94b-kube-api-access-dtglm\") pod \"keystone-bootstrap-frjzv\" (UID: \"36cfdfac-acea-4bea-95f3-7221ffc3d94b\") " pod="openstack/keystone-bootstrap-frjzv" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.389658 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36cfdfac-acea-4bea-95f3-7221ffc3d94b-config-data\") pod \"keystone-bootstrap-frjzv\" (UID: \"36cfdfac-acea-4bea-95f3-7221ffc3d94b\") " pod="openstack/keystone-bootstrap-frjzv" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.389700 4786 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/36cfdfac-acea-4bea-95f3-7221ffc3d94b-credential-keys\") pod \"keystone-bootstrap-frjzv\" (UID: \"36cfdfac-acea-4bea-95f3-7221ffc3d94b\") " pod="openstack/keystone-bootstrap-frjzv" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.389827 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36cfdfac-acea-4bea-95f3-7221ffc3d94b-combined-ca-bundle\") pod \"keystone-bootstrap-frjzv\" (UID: \"36cfdfac-acea-4bea-95f3-7221ffc3d94b\") " pod="openstack/keystone-bootstrap-frjzv" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.389989 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36cfdfac-acea-4bea-95f3-7221ffc3d94b-scripts\") pod \"keystone-bootstrap-frjzv\" (UID: \"36cfdfac-acea-4bea-95f3-7221ffc3d94b\") " pod="openstack/keystone-bootstrap-frjzv" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.390029 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/36cfdfac-acea-4bea-95f3-7221ffc3d94b-fernet-keys\") pod \"keystone-bootstrap-frjzv\" (UID: \"36cfdfac-acea-4bea-95f3-7221ffc3d94b\") " pod="openstack/keystone-bootstrap-frjzv" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.493361 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtglm\" (UniqueName: \"kubernetes.io/projected/36cfdfac-acea-4bea-95f3-7221ffc3d94b-kube-api-access-dtglm\") pod \"keystone-bootstrap-frjzv\" (UID: \"36cfdfac-acea-4bea-95f3-7221ffc3d94b\") " pod="openstack/keystone-bootstrap-frjzv" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.493447 4786 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36cfdfac-acea-4bea-95f3-7221ffc3d94b-config-data\") pod \"keystone-bootstrap-frjzv\" (UID: \"36cfdfac-acea-4bea-95f3-7221ffc3d94b\") " pod="openstack/keystone-bootstrap-frjzv" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.493481 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/36cfdfac-acea-4bea-95f3-7221ffc3d94b-credential-keys\") pod \"keystone-bootstrap-frjzv\" (UID: \"36cfdfac-acea-4bea-95f3-7221ffc3d94b\") " pod="openstack/keystone-bootstrap-frjzv" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.493509 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36cfdfac-acea-4bea-95f3-7221ffc3d94b-combined-ca-bundle\") pod \"keystone-bootstrap-frjzv\" (UID: \"36cfdfac-acea-4bea-95f3-7221ffc3d94b\") " pod="openstack/keystone-bootstrap-frjzv" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.493640 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36cfdfac-acea-4bea-95f3-7221ffc3d94b-scripts\") pod \"keystone-bootstrap-frjzv\" (UID: \"36cfdfac-acea-4bea-95f3-7221ffc3d94b\") " pod="openstack/keystone-bootstrap-frjzv" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.493689 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/36cfdfac-acea-4bea-95f3-7221ffc3d94b-fernet-keys\") pod \"keystone-bootstrap-frjzv\" (UID: \"36cfdfac-acea-4bea-95f3-7221ffc3d94b\") " pod="openstack/keystone-bootstrap-frjzv" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.520728 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.548765 4786 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.550026 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.562348 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-w4qtx" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.562636 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.606000 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7aaee81-f3fd-443a-94ef-8b143139138b-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"a7aaee81-f3fd-443a-94ef-8b143139138b\") " pod="openstack/watcher-api-0" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.606417 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/a7aaee81-f3fd-443a-94ef-8b143139138b-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"a7aaee81-f3fd-443a-94ef-8b143139138b\") " pod="openstack/watcher-api-0" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.606577 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qsnkr\" (UniqueName: \"kubernetes.io/projected/a7aaee81-f3fd-443a-94ef-8b143139138b-kube-api-access-qsnkr\") pod \"watcher-api-0\" (UID: \"a7aaee81-f3fd-443a-94ef-8b143139138b\") " pod="openstack/watcher-api-0" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.607421 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a7aaee81-f3fd-443a-94ef-8b143139138b-config-data\") pod \"watcher-api-0\" (UID: \"a7aaee81-f3fd-443a-94ef-8b143139138b\") " pod="openstack/watcher-api-0" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.607710 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7aaee81-f3fd-443a-94ef-8b143139138b-logs\") pod \"watcher-api-0\" (UID: \"a7aaee81-f3fd-443a-94ef-8b143139138b\") " pod="openstack/watcher-api-0" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.658543 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.660162 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.672049 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.674489 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.712958 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7aaee81-f3fd-443a-94ef-8b143139138b-config-data\") pod \"watcher-api-0\" (UID: \"a7aaee81-f3fd-443a-94ef-8b143139138b\") " pod="openstack/watcher-api-0" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.713083 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c233b45-5e1c-4c8c-a3ba-d71a89838114-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"6c233b45-5e1c-4c8c-a3ba-d71a89838114\") " pod="openstack/watcher-applier-0" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.713183 4786 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7aaee81-f3fd-443a-94ef-8b143139138b-logs\") pod \"watcher-api-0\" (UID: \"a7aaee81-f3fd-443a-94ef-8b143139138b\") " pod="openstack/watcher-api-0" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.713287 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c233b45-5e1c-4c8c-a3ba-d71a89838114-logs\") pod \"watcher-applier-0\" (UID: \"6c233b45-5e1c-4c8c-a3ba-d71a89838114\") " pod="openstack/watcher-applier-0" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.713354 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7aaee81-f3fd-443a-94ef-8b143139138b-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"a7aaee81-f3fd-443a-94ef-8b143139138b\") " pod="openstack/watcher-api-0" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.713382 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/a7aaee81-f3fd-443a-94ef-8b143139138b-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"a7aaee81-f3fd-443a-94ef-8b143139138b\") " pod="openstack/watcher-api-0" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.713447 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qsnkr\" (UniqueName: \"kubernetes.io/projected/a7aaee81-f3fd-443a-94ef-8b143139138b-kube-api-access-qsnkr\") pod \"watcher-api-0\" (UID: \"a7aaee81-f3fd-443a-94ef-8b143139138b\") " pod="openstack/watcher-api-0" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.713497 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6c233b45-5e1c-4c8c-a3ba-d71a89838114-config-data\") pod \"watcher-applier-0\" (UID: \"6c233b45-5e1c-4c8c-a3ba-d71a89838114\") " pod="openstack/watcher-applier-0" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.713523 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjmzc\" (UniqueName: \"kubernetes.io/projected/6c233b45-5e1c-4c8c-a3ba-d71a89838114-kube-api-access-gjmzc\") pod \"watcher-applier-0\" (UID: \"6c233b45-5e1c-4c8c-a3ba-d71a89838114\") " pod="openstack/watcher-applier-0" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.717389 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7aaee81-f3fd-443a-94ef-8b143139138b-logs\") pod \"watcher-api-0\" (UID: \"a7aaee81-f3fd-443a-94ef-8b143139138b\") " pod="openstack/watcher-api-0" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.733530 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.736589 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.749193 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.798594 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.811996 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36cfdfac-acea-4bea-95f3-7221ffc3d94b-combined-ca-bundle\") pod \"keystone-bootstrap-frjzv\" (UID: \"36cfdfac-acea-4bea-95f3-7221ffc3d94b\") " pod="openstack/keystone-bootstrap-frjzv" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.812082 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36cfdfac-acea-4bea-95f3-7221ffc3d94b-config-data\") pod \"keystone-bootstrap-frjzv\" (UID: \"36cfdfac-acea-4bea-95f3-7221ffc3d94b\") " pod="openstack/keystone-bootstrap-frjzv" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.812089 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/36cfdfac-acea-4bea-95f3-7221ffc3d94b-credential-keys\") pod \"keystone-bootstrap-frjzv\" (UID: \"36cfdfac-acea-4bea-95f3-7221ffc3d94b\") " pod="openstack/keystone-bootstrap-frjzv" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.812651 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtglm\" (UniqueName: \"kubernetes.io/projected/36cfdfac-acea-4bea-95f3-7221ffc3d94b-kube-api-access-dtglm\") pod \"keystone-bootstrap-frjzv\" (UID: \"36cfdfac-acea-4bea-95f3-7221ffc3d94b\") " pod="openstack/keystone-bootstrap-frjzv" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.813369 4786 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/a7aaee81-f3fd-443a-94ef-8b143139138b-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"a7aaee81-f3fd-443a-94ef-8b143139138b\") " pod="openstack/watcher-api-0" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.814832 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7aaee81-f3fd-443a-94ef-8b143139138b-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"a7aaee81-f3fd-443a-94ef-8b143139138b\") " pod="openstack/watcher-api-0" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.815959 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdrvl\" (UniqueName: \"kubernetes.io/projected/8d930edc-ed97-418e-a47a-60f38b734a50-kube-api-access-pdrvl\") pod \"watcher-decision-engine-0\" (UID: \"8d930edc-ed97-418e-a47a-60f38b734a50\") " pod="openstack/watcher-decision-engine-0" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.816160 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qsnkr\" (UniqueName: \"kubernetes.io/projected/a7aaee81-f3fd-443a-94ef-8b143139138b-kube-api-access-qsnkr\") pod \"watcher-api-0\" (UID: \"a7aaee81-f3fd-443a-94ef-8b143139138b\") " pod="openstack/watcher-api-0" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.816181 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36cfdfac-acea-4bea-95f3-7221ffc3d94b-scripts\") pod \"keystone-bootstrap-frjzv\" (UID: \"36cfdfac-acea-4bea-95f3-7221ffc3d94b\") " pod="openstack/keystone-bootstrap-frjzv" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.816243 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/8d930edc-ed97-418e-a47a-60f38b734a50-logs\") pod \"watcher-decision-engine-0\" (UID: \"8d930edc-ed97-418e-a47a-60f38b734a50\") " pod="openstack/watcher-decision-engine-0" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.816855 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c233b45-5e1c-4c8c-a3ba-d71a89838114-logs\") pod \"watcher-applier-0\" (UID: \"6c233b45-5e1c-4c8c-a3ba-d71a89838114\") " pod="openstack/watcher-applier-0" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.817018 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8d930edc-ed97-418e-a47a-60f38b734a50-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"8d930edc-ed97-418e-a47a-60f38b734a50\") " pod="openstack/watcher-decision-engine-0" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.817125 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c233b45-5e1c-4c8c-a3ba-d71a89838114-config-data\") pod \"watcher-applier-0\" (UID: \"6c233b45-5e1c-4c8c-a3ba-d71a89838114\") " pod="openstack/watcher-applier-0" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.817199 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjmzc\" (UniqueName: \"kubernetes.io/projected/6c233b45-5e1c-4c8c-a3ba-d71a89838114-kube-api-access-gjmzc\") pod \"watcher-applier-0\" (UID: \"6c233b45-5e1c-4c8c-a3ba-d71a89838114\") " pod="openstack/watcher-applier-0" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.817362 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d930edc-ed97-418e-a47a-60f38b734a50-combined-ca-bundle\") pod 
\"watcher-decision-engine-0\" (UID: \"8d930edc-ed97-418e-a47a-60f38b734a50\") " pod="openstack/watcher-decision-engine-0" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.818364 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7aaee81-f3fd-443a-94ef-8b143139138b-config-data\") pod \"watcher-api-0\" (UID: \"a7aaee81-f3fd-443a-94ef-8b143139138b\") " pod="openstack/watcher-api-0" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.818745 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6c233b45-5e1c-4c8c-a3ba-d71a89838114-logs\") pod \"watcher-applier-0\" (UID: \"6c233b45-5e1c-4c8c-a3ba-d71a89838114\") " pod="openstack/watcher-applier-0" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.821921 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c233b45-5e1c-4c8c-a3ba-d71a89838114-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"6c233b45-5e1c-4c8c-a3ba-d71a89838114\") " pod="openstack/watcher-applier-0" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.822064 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d930edc-ed97-418e-a47a-60f38b734a50-config-data\") pod \"watcher-decision-engine-0\" (UID: \"8d930edc-ed97-418e-a47a-60f38b734a50\") " pod="openstack/watcher-decision-engine-0" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.824909 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/36cfdfac-acea-4bea-95f3-7221ffc3d94b-fernet-keys\") pod \"keystone-bootstrap-frjzv\" (UID: \"36cfdfac-acea-4bea-95f3-7221ffc3d94b\") " pod="openstack/keystone-bootstrap-frjzv" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.835876 4786 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c233b45-5e1c-4c8c-a3ba-d71a89838114-config-data\") pod \"watcher-applier-0\" (UID: \"6c233b45-5e1c-4c8c-a3ba-d71a89838114\") " pod="openstack/watcher-applier-0" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.841111 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c233b45-5e1c-4c8c-a3ba-d71a89838114-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"6c233b45-5e1c-4c8c-a3ba-d71a89838114\") " pod="openstack/watcher-applier-0" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.845764 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f4489b979-86h4q" event={"ID":"94440ba5-1d12-4f3c-882c-2a648526482b","Type":"ContainerStarted","Data":"6710f148bcf88b5cd40cc5be4c6982c16d963062b86becf8821d37cc011cfcc0"} Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.846211 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-f4489b979-86h4q" podUID="94440ba5-1d12-4f3c-882c-2a648526482b" containerName="horizon-log" containerID="cri-o://b8fda5fc34e5673d58791476d1af07bc51566cb08eca80d891a56fe03f6a1a12" gracePeriod=30 Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.846848 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-f4489b979-86h4q" podUID="94440ba5-1d12-4f3c-882c-2a648526482b" containerName="horizon" containerID="cri-o://6710f148bcf88b5cd40cc5be4c6982c16d963062b86becf8821d37cc011cfcc0" gracePeriod=30 Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.861001 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6m994" event={"ID":"4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c","Type":"ContainerStarted","Data":"11c44fbbf3908d028a4f5bd709ccf2bea2fd2b4b735d272d4316873dcce49074"} Dec 09 09:05:55 crc 
kubenswrapper[4786]: I1209 09:05:55.873242 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjmzc\" (UniqueName: \"kubernetes.io/projected/6c233b45-5e1c-4c8c-a3ba-d71a89838114-kube-api-access-gjmzc\") pod \"watcher-applier-0\" (UID: \"6c233b45-5e1c-4c8c-a3ba-d71a89838114\") " pod="openstack/watcher-applier-0" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.877840 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-9ddvk" event={"ID":"8c28b549-bfff-47f7-b262-c3203bd88cb1","Type":"ContainerStarted","Data":"791444c398922b7702158ca0b7a6260a2420c6d680e1e82328689a9c295117f0"} Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.893158 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-e5b7-account-create-4mwmr" event={"ID":"e040fc5c-c169-4e21-9158-fee40fcb1f6e","Type":"ContainerStarted","Data":"316514d6250e6bc6db8b4fc80903dfa3c4bfdfadbd9b9dfa932ea43c7d0ec906"} Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.903040 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-f4489b979-86h4q" podStartSLOduration=4.273651489 podStartE2EDuration="22.903009901s" podCreationTimestamp="2025-12-09 09:05:33 +0000 UTC" firstStartedPulling="2025-12-09 09:05:35.273524621 +0000 UTC m=+1301.157145847" lastFinishedPulling="2025-12-09 09:05:53.902883023 +0000 UTC m=+1319.786504259" observedRunningTime="2025-12-09 09:05:55.871173391 +0000 UTC m=+1321.754794627" watchObservedRunningTime="2025-12-09 09:05:55.903009901 +0000 UTC m=+1321.786631147" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.905401 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-jsl5j" event={"ID":"e3272921-6cce-4156-bed5-758d1a8a38f5","Type":"ContainerStarted","Data":"a75a5e17733ca428624c5c6a954de48b07fc89f65ea8efb818f4071188456b99"} Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.910062 4786 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.925084 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d930edc-ed97-418e-a47a-60f38b734a50-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"8d930edc-ed97-418e-a47a-60f38b734a50\") " pod="openstack/watcher-decision-engine-0" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.925186 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d930edc-ed97-418e-a47a-60f38b734a50-config-data\") pod \"watcher-decision-engine-0\" (UID: \"8d930edc-ed97-418e-a47a-60f38b734a50\") " pod="openstack/watcher-decision-engine-0" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.925220 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdrvl\" (UniqueName: \"kubernetes.io/projected/8d930edc-ed97-418e-a47a-60f38b734a50-kube-api-access-pdrvl\") pod \"watcher-decision-engine-0\" (UID: \"8d930edc-ed97-418e-a47a-60f38b734a50\") " pod="openstack/watcher-decision-engine-0" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.925243 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d930edc-ed97-418e-a47a-60f38b734a50-logs\") pod \"watcher-decision-engine-0\" (UID: \"8d930edc-ed97-418e-a47a-60f38b734a50\") " pod="openstack/watcher-decision-engine-0" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.925334 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8d930edc-ed97-418e-a47a-60f38b734a50-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"8d930edc-ed97-418e-a47a-60f38b734a50\") " pod="openstack/watcher-decision-engine-0" 
Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.926650 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d930edc-ed97-418e-a47a-60f38b734a50-logs\") pod \"watcher-decision-engine-0\" (UID: \"8d930edc-ed97-418e-a47a-60f38b734a50\") " pod="openstack/watcher-decision-engine-0" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.933712 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d930edc-ed97-418e-a47a-60f38b734a50-config-data\") pod \"watcher-decision-engine-0\" (UID: \"8d930edc-ed97-418e-a47a-60f38b734a50\") " pod="openstack/watcher-decision-engine-0" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.937002 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8d930edc-ed97-418e-a47a-60f38b734a50-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"8d930edc-ed97-418e-a47a-60f38b734a50\") " pod="openstack/watcher-decision-engine-0" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.938809 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d930edc-ed97-418e-a47a-60f38b734a50-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"8d930edc-ed97-418e-a47a-60f38b734a50\") " pod="openstack/watcher-decision-engine-0" Dec 09 09:05:55 crc kubenswrapper[4786]: I1209 09:05:55.951012 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdrvl\" (UniqueName: \"kubernetes.io/projected/8d930edc-ed97-418e-a47a-60f38b734a50-kube-api-access-pdrvl\") pod \"watcher-decision-engine-0\" (UID: \"8d930edc-ed97-418e-a47a-60f38b734a50\") " pod="openstack/watcher-decision-engine-0" Dec 09 09:05:56 crc kubenswrapper[4786]: I1209 09:05:56.013066 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-frjzv" Dec 09 09:05:56 crc kubenswrapper[4786]: I1209 09:05:56.051241 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Dec 09 09:05:56 crc kubenswrapper[4786]: I1209 09:05:56.153090 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Dec 09 09:05:56 crc kubenswrapper[4786]: I1209 09:05:56.752002 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Dec 09 09:05:56 crc kubenswrapper[4786]: I1209 09:05:56.797336 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-frjzv"] Dec 09 09:05:56 crc kubenswrapper[4786]: I1209 09:05:56.807001 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-5cc88d847f-95rbt" Dec 09 09:05:56 crc kubenswrapper[4786]: I1209 09:05:56.962129 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bb95c86c4-js54w" event={"ID":"330dece7-bbfd-4d11-a979-b001581e8efe","Type":"ContainerStarted","Data":"adb7ce8720775c0f5baa7ed8459b05d15e3f901e64d969d01e4906709eae1198"} Dec 09 09:05:56 crc kubenswrapper[4786]: I1209 09:05:56.962463 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bb95c86c4-js54w" event={"ID":"330dece7-bbfd-4d11-a979-b001581e8efe","Type":"ContainerStarted","Data":"40f339e015d1848c2fc6bd112414ca260b45c03a9f05b2d019c828279a983372"} Dec 09 09:05:56 crc kubenswrapper[4786]: I1209 09:05:56.986216 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Dec 09 09:05:56 crc kubenswrapper[4786]: I1209 09:05:56.991622 4786 generic.go:334] "Generic (PLEG): container finished" podID="e040fc5c-c169-4e21-9158-fee40fcb1f6e" containerID="4c670c06e3d9510ec33202311d950769125eb9b70d3ad267d2e283127de5c33c" exitCode=0 Dec 09 09:05:56 crc kubenswrapper[4786]: I1209 09:05:56.991746 
4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-e5b7-account-create-4mwmr" event={"ID":"e040fc5c-c169-4e21-9158-fee40fcb1f6e","Type":"ContainerDied","Data":"4c670c06e3d9510ec33202311d950769125eb9b70d3ad267d2e283127de5c33c"} Dec 09 09:05:57 crc kubenswrapper[4786]: I1209 09:05:57.013498 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Dec 09 09:05:57 crc kubenswrapper[4786]: I1209 09:05:57.016372 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7bb95c86c4-js54w" podStartSLOduration=15.016355582 podStartE2EDuration="15.016355582s" podCreationTimestamp="2025-12-09 09:05:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:05:56.999810893 +0000 UTC m=+1322.883432139" watchObservedRunningTime="2025-12-09 09:05:57.016355582 +0000 UTC m=+1322.899976808" Dec 09 09:05:57 crc kubenswrapper[4786]: I1209 09:05:57.024385 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-655866bfb6-4l6wv" event={"ID":"52a5cc59-1e73-4e04-ba05-80f0c364b351","Type":"ContainerStarted","Data":"bb582e1e0a4791e77044de6b090fee01117b5f49a2e718231f5c45c78735e9b6"} Dec 09 09:05:57 crc kubenswrapper[4786]: I1209 09:05:57.024523 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-655866bfb6-4l6wv" event={"ID":"52a5cc59-1e73-4e04-ba05-80f0c364b351","Type":"ContainerStarted","Data":"c23cc6efb32cfd0300cb0eea8f6549cf52356f684f86b2fae6f59b3d5ced7fff"} Dec 09 09:05:57 crc kubenswrapper[4786]: I1209 09:05:57.035202 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"a7aaee81-f3fd-443a-94ef-8b143139138b","Type":"ContainerStarted","Data":"e19fdf64960f35475135549575202c1c2d8d0bc966513229046a9e58d62cbb34"} Dec 09 09:05:57 crc kubenswrapper[4786]: I1209 09:05:57.038381 4786 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/keystone-bootstrap-frjzv" event={"ID":"36cfdfac-acea-4bea-95f3-7221ffc3d94b","Type":"ContainerStarted","Data":"4d2f350c8facddb0b5d194d3e150780761bd1fd43b3a01264b14e3e736ee105e"} Dec 09 09:05:57 crc kubenswrapper[4786]: I1209 09:05:57.072884 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-655866bfb6-4l6wv" podStartSLOduration=15.072849243 podStartE2EDuration="15.072849243s" podCreationTimestamp="2025-12-09 09:05:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:05:57.05479258 +0000 UTC m=+1322.938413806" watchObservedRunningTime="2025-12-09 09:05:57.072849243 +0000 UTC m=+1322.956470479" Dec 09 09:05:57 crc kubenswrapper[4786]: I1209 09:05:57.205013 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9dbbbcd2-908b-424f-b7e0-36ae5db9f92c" path="/var/lib/kubelet/pods/9dbbbcd2-908b-424f-b7e0-36ae5db9f92c/volumes" Dec 09 09:05:58 crc kubenswrapper[4786]: I1209 09:05:58.589139 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-e5b7-account-create-4mwmr" Dec 09 09:05:58 crc kubenswrapper[4786]: I1209 09:05:58.752343 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2z8dz\" (UniqueName: \"kubernetes.io/projected/e040fc5c-c169-4e21-9158-fee40fcb1f6e-kube-api-access-2z8dz\") pod \"e040fc5c-c169-4e21-9158-fee40fcb1f6e\" (UID: \"e040fc5c-c169-4e21-9158-fee40fcb1f6e\") " Dec 09 09:05:58 crc kubenswrapper[4786]: I1209 09:05:58.761890 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e040fc5c-c169-4e21-9158-fee40fcb1f6e-kube-api-access-2z8dz" (OuterVolumeSpecName: "kube-api-access-2z8dz") pod "e040fc5c-c169-4e21-9158-fee40fcb1f6e" (UID: "e040fc5c-c169-4e21-9158-fee40fcb1f6e"). 
InnerVolumeSpecName "kube-api-access-2z8dz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:05:58 crc kubenswrapper[4786]: I1209 09:05:58.854329 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2z8dz\" (UniqueName: \"kubernetes.io/projected/e040fc5c-c169-4e21-9158-fee40fcb1f6e-kube-api-access-2z8dz\") on node \"crc\" DevicePath \"\"" Dec 09 09:05:58 crc kubenswrapper[4786]: I1209 09:05:58.855695 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5bd8458cdf-vnbxl" podUID="9dbbbcd2-908b-424f-b7e0-36ae5db9f92c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.134:5353: i/o timeout" Dec 09 09:05:59 crc kubenswrapper[4786]: I1209 09:05:59.083311 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"6c233b45-5e1c-4c8c-a3ba-d71a89838114","Type":"ContainerStarted","Data":"5850825c83cd2300d976d728b7ed66d13144da2f6b4a7d948da99422f6af32fd"} Dec 09 09:05:59 crc kubenswrapper[4786]: I1209 09:05:59.090966 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"a7aaee81-f3fd-443a-94ef-8b143139138b","Type":"ContainerStarted","Data":"e9045f5ac311801162c4a289491344af5209e8edb69b17eb036785f4844ad0df"} Dec 09 09:05:59 crc kubenswrapper[4786]: I1209 09:05:59.099367 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-frjzv" event={"ID":"36cfdfac-acea-4bea-95f3-7221ffc3d94b","Type":"ContainerStarted","Data":"9adc39d9005af4ca652a8f746f3029e57881da15a65377308b58a4f5d65d568c"} Dec 09 09:05:59 crc kubenswrapper[4786]: I1209 09:05:59.111260 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"8d930edc-ed97-418e-a47a-60f38b734a50","Type":"ContainerStarted","Data":"3ee8f6a61ebe0cca99ca095208c9fe87eddcbbb6c4f62bbd6d4df863e6252c86"} Dec 09 09:05:59 crc kubenswrapper[4786]: I1209 09:05:59.120833 4786 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-e5b7-account-create-4mwmr" event={"ID":"e040fc5c-c169-4e21-9158-fee40fcb1f6e","Type":"ContainerDied","Data":"316514d6250e6bc6db8b4fc80903dfa3c4bfdfadbd9b9dfa932ea43c7d0ec906"} Dec 09 09:05:59 crc kubenswrapper[4786]: I1209 09:05:59.120896 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="316514d6250e6bc6db8b4fc80903dfa3c4bfdfadbd9b9dfa932ea43c7d0ec906" Dec 09 09:05:59 crc kubenswrapper[4786]: I1209 09:05:59.120974 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-e5b7-account-create-4mwmr" Dec 09 09:05:59 crc kubenswrapper[4786]: I1209 09:05:59.145857 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-frjzv" podStartSLOduration=4.145838736 podStartE2EDuration="4.145838736s" podCreationTimestamp="2025-12-09 09:05:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:05:59.142567783 +0000 UTC m=+1325.026189009" watchObservedRunningTime="2025-12-09 09:05:59.145838736 +0000 UTC m=+1325.029459962" Dec 09 09:06:00 crc kubenswrapper[4786]: I1209 09:06:00.173902 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f61efc97-8443-4381-85d3-7b6dd9e5b132","Type":"ContainerStarted","Data":"28ffefa0be25980cd96ffd22439421c64f1ce69523a06e10f2a56403350dbbf3"} Dec 09 09:06:00 crc kubenswrapper[4786]: I1209 09:06:00.184027 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"a7aaee81-f3fd-443a-94ef-8b143139138b","Type":"ContainerStarted","Data":"937b9f4d291af6d40a7832aa4dbfc0fd57b1258b504f6ad5aea827fab7ad3b54"} Dec 09 09:06:00 crc kubenswrapper[4786]: I1209 09:06:00.184567 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Dec 09 
09:06:00 crc kubenswrapper[4786]: I1209 09:06:00.223004 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=5.222970559 podStartE2EDuration="5.222970559s" podCreationTimestamp="2025-12-09 09:05:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:06:00.21313654 +0000 UTC m=+1326.096757776" watchObservedRunningTime="2025-12-09 09:06:00.222970559 +0000 UTC m=+1326.106591795" Dec 09 09:06:00 crc kubenswrapper[4786]: I1209 09:06:00.569039 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-hzbbl"] Dec 09 09:06:00 crc kubenswrapper[4786]: E1209 09:06:00.571131 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e040fc5c-c169-4e21-9158-fee40fcb1f6e" containerName="mariadb-account-create" Dec 09 09:06:00 crc kubenswrapper[4786]: I1209 09:06:00.571160 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="e040fc5c-c169-4e21-9158-fee40fcb1f6e" containerName="mariadb-account-create" Dec 09 09:06:00 crc kubenswrapper[4786]: I1209 09:06:00.571466 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="e040fc5c-c169-4e21-9158-fee40fcb1f6e" containerName="mariadb-account-create" Dec 09 09:06:00 crc kubenswrapper[4786]: I1209 09:06:00.572358 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-hzbbl" Dec 09 09:06:00 crc kubenswrapper[4786]: I1209 09:06:00.577100 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 09 09:06:00 crc kubenswrapper[4786]: I1209 09:06:00.577326 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-j25lk" Dec 09 09:06:00 crc kubenswrapper[4786]: I1209 09:06:00.577915 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 09 09:06:00 crc kubenswrapper[4786]: I1209 09:06:00.595292 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-hzbbl"] Dec 09 09:06:00 crc kubenswrapper[4786]: I1209 09:06:00.749649 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fe0b12b-06a5-45ae-8a51-073fb093cd54-combined-ca-bundle\") pod \"neutron-db-sync-hzbbl\" (UID: \"1fe0b12b-06a5-45ae-8a51-073fb093cd54\") " pod="openstack/neutron-db-sync-hzbbl" Dec 09 09:06:00 crc kubenswrapper[4786]: I1209 09:06:00.751163 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph7lk\" (UniqueName: \"kubernetes.io/projected/1fe0b12b-06a5-45ae-8a51-073fb093cd54-kube-api-access-ph7lk\") pod \"neutron-db-sync-hzbbl\" (UID: \"1fe0b12b-06a5-45ae-8a51-073fb093cd54\") " pod="openstack/neutron-db-sync-hzbbl" Dec 09 09:06:00 crc kubenswrapper[4786]: I1209 09:06:00.751532 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1fe0b12b-06a5-45ae-8a51-073fb093cd54-config\") pod \"neutron-db-sync-hzbbl\" (UID: \"1fe0b12b-06a5-45ae-8a51-073fb093cd54\") " pod="openstack/neutron-db-sync-hzbbl" Dec 09 09:06:00 crc kubenswrapper[4786]: I1209 09:06:00.853927 4786 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1fe0b12b-06a5-45ae-8a51-073fb093cd54-config\") pod \"neutron-db-sync-hzbbl\" (UID: \"1fe0b12b-06a5-45ae-8a51-073fb093cd54\") " pod="openstack/neutron-db-sync-hzbbl" Dec 09 09:06:00 crc kubenswrapper[4786]: I1209 09:06:00.854037 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fe0b12b-06a5-45ae-8a51-073fb093cd54-combined-ca-bundle\") pod \"neutron-db-sync-hzbbl\" (UID: \"1fe0b12b-06a5-45ae-8a51-073fb093cd54\") " pod="openstack/neutron-db-sync-hzbbl" Dec 09 09:06:00 crc kubenswrapper[4786]: I1209 09:06:00.854101 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ph7lk\" (UniqueName: \"kubernetes.io/projected/1fe0b12b-06a5-45ae-8a51-073fb093cd54-kube-api-access-ph7lk\") pod \"neutron-db-sync-hzbbl\" (UID: \"1fe0b12b-06a5-45ae-8a51-073fb093cd54\") " pod="openstack/neutron-db-sync-hzbbl" Dec 09 09:06:00 crc kubenswrapper[4786]: I1209 09:06:00.864051 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fe0b12b-06a5-45ae-8a51-073fb093cd54-combined-ca-bundle\") pod \"neutron-db-sync-hzbbl\" (UID: \"1fe0b12b-06a5-45ae-8a51-073fb093cd54\") " pod="openstack/neutron-db-sync-hzbbl" Dec 09 09:06:00 crc kubenswrapper[4786]: I1209 09:06:00.871130 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1fe0b12b-06a5-45ae-8a51-073fb093cd54-config\") pod \"neutron-db-sync-hzbbl\" (UID: \"1fe0b12b-06a5-45ae-8a51-073fb093cd54\") " pod="openstack/neutron-db-sync-hzbbl" Dec 09 09:06:00 crc kubenswrapper[4786]: I1209 09:06:00.879989 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ph7lk\" (UniqueName: 
\"kubernetes.io/projected/1fe0b12b-06a5-45ae-8a51-073fb093cd54-kube-api-access-ph7lk\") pod \"neutron-db-sync-hzbbl\" (UID: \"1fe0b12b-06a5-45ae-8a51-073fb093cd54\") " pod="openstack/neutron-db-sync-hzbbl" Dec 09 09:06:00 crc kubenswrapper[4786]: I1209 09:06:00.912070 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Dec 09 09:06:00 crc kubenswrapper[4786]: I1209 09:06:00.912804 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-hzbbl" Dec 09 09:06:02 crc kubenswrapper[4786]: I1209 09:06:02.211630 4786 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 09 09:06:03 crc kubenswrapper[4786]: I1209 09:06:03.239631 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7bb95c86c4-js54w" Dec 09 09:06:03 crc kubenswrapper[4786]: I1209 09:06:03.239714 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7bb95c86c4-js54w" Dec 09 09:06:03 crc kubenswrapper[4786]: I1209 09:06:03.525019 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-655866bfb6-4l6wv" Dec 09 09:06:03 crc kubenswrapper[4786]: I1209 09:06:03.525483 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-655866bfb6-4l6wv" Dec 09 09:06:04 crc kubenswrapper[4786]: I1209 09:06:04.161556 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-694bf999df-qrh2d" Dec 09 09:06:04 crc kubenswrapper[4786]: I1209 09:06:04.249394 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Dec 09 09:06:04 crc kubenswrapper[4786]: I1209 09:06:04.520394 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-f4489b979-86h4q" Dec 09 09:06:05 crc kubenswrapper[4786]: I1209 09:06:05.911752 4786 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Dec 09 09:06:05 crc kubenswrapper[4786]: I1209 09:06:05.920167 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Dec 09 09:06:06 crc kubenswrapper[4786]: I1209 09:06:06.288202 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Dec 09 09:06:09 crc kubenswrapper[4786]: I1209 09:06:09.823507 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Dec 09 09:06:09 crc kubenswrapper[4786]: I1209 09:06:09.824684 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="a7aaee81-f3fd-443a-94ef-8b143139138b" containerName="watcher-api-log" containerID="cri-o://e9045f5ac311801162c4a289491344af5209e8edb69b17eb036785f4844ad0df" gracePeriod=30 Dec 09 09:06:09 crc kubenswrapper[4786]: I1209 09:06:09.825302 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="a7aaee81-f3fd-443a-94ef-8b143139138b" containerName="watcher-api" containerID="cri-o://937b9f4d291af6d40a7832aa4dbfc0fd57b1258b504f6ad5aea827fab7ad3b54" gracePeriod=30 Dec 09 09:06:10 crc kubenswrapper[4786]: I1209 09:06:10.912517 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="a7aaee81-f3fd-443a-94ef-8b143139138b" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.158:9322/\": dial tcp 10.217.0.158:9322: connect: connection refused" Dec 09 09:06:10 crc kubenswrapper[4786]: I1209 09:06:10.912664 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="a7aaee81-f3fd-443a-94ef-8b143139138b" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.158:9322/\": dial tcp 10.217.0.158:9322: connect: connection refused" Dec 09 09:06:12 crc kubenswrapper[4786]: I1209 
09:06:12.353201 4786 generic.go:334] "Generic (PLEG): container finished" podID="a7aaee81-f3fd-443a-94ef-8b143139138b" containerID="937b9f4d291af6d40a7832aa4dbfc0fd57b1258b504f6ad5aea827fab7ad3b54" exitCode=0 Dec 09 09:06:12 crc kubenswrapper[4786]: I1209 09:06:12.353236 4786 generic.go:334] "Generic (PLEG): container finished" podID="a7aaee81-f3fd-443a-94ef-8b143139138b" containerID="e9045f5ac311801162c4a289491344af5209e8edb69b17eb036785f4844ad0df" exitCode=143 Dec 09 09:06:12 crc kubenswrapper[4786]: I1209 09:06:12.353262 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"a7aaee81-f3fd-443a-94ef-8b143139138b","Type":"ContainerDied","Data":"937b9f4d291af6d40a7832aa4dbfc0fd57b1258b504f6ad5aea827fab7ad3b54"} Dec 09 09:06:12 crc kubenswrapper[4786]: I1209 09:06:12.353310 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"a7aaee81-f3fd-443a-94ef-8b143139138b","Type":"ContainerDied","Data":"e9045f5ac311801162c4a289491344af5209e8edb69b17eb036785f4844ad0df"} Dec 09 09:06:12 crc kubenswrapper[4786]: I1209 09:06:12.355517 4786 generic.go:334] "Generic (PLEG): container finished" podID="36cfdfac-acea-4bea-95f3-7221ffc3d94b" containerID="9adc39d9005af4ca652a8f746f3029e57881da15a65377308b58a4f5d65d568c" exitCode=0 Dec 09 09:06:12 crc kubenswrapper[4786]: I1209 09:06:12.355571 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-frjzv" event={"ID":"36cfdfac-acea-4bea-95f3-7221ffc3d94b","Type":"ContainerDied","Data":"9adc39d9005af4ca652a8f746f3029e57881da15a65377308b58a4f5d65d568c"} Dec 09 09:06:13 crc kubenswrapper[4786]: I1209 09:06:13.246361 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7bb95c86c4-js54w" podUID="330dece7-bbfd-4d11-a979-b001581e8efe" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.153:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.153:8443: connect: 
connection refused" Dec 09 09:06:13 crc kubenswrapper[4786]: E1209 09:06:13.377661 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.200:5001/podified-master-centos10/openstack-barbican-api:watcher_latest" Dec 09 09:06:13 crc kubenswrapper[4786]: E1209 09:06:13.377761 4786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.200:5001/podified-master-centos10/openstack-barbican-api:watcher_latest" Dec 09 09:06:13 crc kubenswrapper[4786]: E1209 09:06:13.377930 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:38.102.83.200:5001/podified-master-centos10/openstack-barbican-api:watcher_latest,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9ccsg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,Re
adOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-jsl5j_openstack(e3272921-6cce-4156-bed5-758d1a8a38f5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 09:06:13 crc kubenswrapper[4786]: E1209 09:06:13.379890 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-jsl5j" podUID="e3272921-6cce-4156-bed5-758d1a8a38f5" Dec 09 09:06:14 crc kubenswrapper[4786]: E1209 09:06:14.184524 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.200:5001/podified-master-centos10/openstack-watcher-applier:watcher_latest" Dec 09 09:06:14 crc kubenswrapper[4786]: E1209 09:06:14.184931 4786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.200:5001/podified-master-centos10/openstack-watcher-applier:watcher_latest" Dec 09 09:06:14 crc kubenswrapper[4786]: E1209 09:06:14.185137 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:watcher-applier,Image:38.102.83.200:5001/podified-master-centos10/openstack-watcher-applier:watcher_latest,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5c4h56ch5ddh678hf9h5fdhf9h686h595hfbh64h547hf9h76hc5hcch57bhdch567h689h67fhdh57dh8hddh588h57fh5bbh5bfh8bh78hd8q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:watcher-applier-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/watcher,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gjmzc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pgrep -r DRST watcher-applier],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pgrep -r DRST 
watcher-applier],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42451,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pgrep -r DRST watcher-applier],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-applier-0_openstack(6c233b45-5e1c-4c8c-a3ba-d71a89838114): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 09:06:14 crc kubenswrapper[4786]: E1209 09:06:14.186497 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/watcher-applier-0" podUID="6c233b45-5e1c-4c8c-a3ba-d71a89838114" Dec 09 09:06:14 crc kubenswrapper[4786]: E1209 09:06:14.382384 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.200:5001/podified-master-centos10/openstack-barbican-api:watcher_latest\\\"\"" pod="openstack/barbican-db-sync-jsl5j" podUID="e3272921-6cce-4156-bed5-758d1a8a38f5" Dec 09 09:06:14 crc 
kubenswrapper[4786]: E1209 09:06:14.382562 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-applier\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.200:5001/podified-master-centos10/openstack-watcher-applier:watcher_latest\\\"\"" pod="openstack/watcher-applier-0" podUID="6c233b45-5e1c-4c8c-a3ba-d71a89838114" Dec 09 09:06:14 crc kubenswrapper[4786]: E1209 09:06:14.729752 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.200:5001/podified-master-centos10/openstack-watcher-decision-engine:watcher_latest" Dec 09 09:06:14 crc kubenswrapper[4786]: E1209 09:06:14.729814 4786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.200:5001/podified-master-centos10/openstack-watcher-decision-engine:watcher_latest" Dec 09 09:06:14 crc kubenswrapper[4786]: E1209 09:06:14.729987 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:watcher-decision-engine,Image:38.102.83.200:5001/podified-master-centos10/openstack-watcher-decision-engine:watcher_latest,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n64h9chc8hcch686h5d5h589hbdh695hf4hfdh8bh5d8h88hf6h545h68dh64ch5bfh548h678h598h5fh696h5b4h687h94h57bh656h99h654h647q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:watcher-decision-engine-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/watcher,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:custom-prometheus-ca,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/prometheus/ca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pdrvl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pgrep -f -r DRST watcher-decision-engine],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pgrep -f -r 
DRST watcher-decision-engine],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42451,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pgrep -f -r DRST watcher-decision-engine],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-decision-engine-0_openstack(8d930edc-ed97-418e-a47a-60f38b734a50): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 09:06:14 crc kubenswrapper[4786]: E1209 09:06:14.731677 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/watcher-decision-engine-0" podUID="8d930edc-ed97-418e-a47a-60f38b734a50" Dec 09 09:06:15 crc kubenswrapper[4786]: E1209 09:06:15.393577 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.200:5001/podified-master-centos10/openstack-watcher-decision-engine:watcher_latest\\\"\"" 
pod="openstack/watcher-decision-engine-0" podUID="8d930edc-ed97-418e-a47a-60f38b734a50" Dec 09 09:06:15 crc kubenswrapper[4786]: I1209 09:06:15.599805 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-655866bfb6-4l6wv" Dec 09 09:06:17 crc kubenswrapper[4786]: I1209 09:06:17.570072 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-655866bfb6-4l6wv" Dec 09 09:06:17 crc kubenswrapper[4786]: I1209 09:06:17.658595 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7bb95c86c4-js54w"] Dec 09 09:06:17 crc kubenswrapper[4786]: I1209 09:06:17.659164 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7bb95c86c4-js54w" podUID="330dece7-bbfd-4d11-a979-b001581e8efe" containerName="horizon-log" containerID="cri-o://40f339e015d1848c2fc6bd112414ca260b45c03a9f05b2d019c828279a983372" gracePeriod=30 Dec 09 09:06:17 crc kubenswrapper[4786]: I1209 09:06:17.659369 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7bb95c86c4-js54w" podUID="330dece7-bbfd-4d11-a979-b001581e8efe" containerName="horizon" containerID="cri-o://adb7ce8720775c0f5baa7ed8459b05d15e3f901e64d969d01e4906709eae1198" gracePeriod=30 Dec 09 09:06:20 crc kubenswrapper[4786]: I1209 09:06:20.912864 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="a7aaee81-f3fd-443a-94ef-8b143139138b" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.158:9322/\": dial tcp 10.217.0.158:9322: i/o timeout (Client.Timeout exceeded while awaiting headers)" Dec 09 09:06:20 crc kubenswrapper[4786]: I1209 09:06:20.912876 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="a7aaee81-f3fd-443a-94ef-8b143139138b" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.158:9322/\": dial tcp 
10.217.0.158:9322: i/o timeout (Client.Timeout exceeded while awaiting headers)" Dec 09 09:06:23 crc kubenswrapper[4786]: I1209 09:06:23.494492 4786 generic.go:334] "Generic (PLEG): container finished" podID="330dece7-bbfd-4d11-a979-b001581e8efe" containerID="adb7ce8720775c0f5baa7ed8459b05d15e3f901e64d969d01e4906709eae1198" exitCode=0 Dec 09 09:06:23 crc kubenswrapper[4786]: I1209 09:06:23.494569 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bb95c86c4-js54w" event={"ID":"330dece7-bbfd-4d11-a979-b001581e8efe","Type":"ContainerDied","Data":"adb7ce8720775c0f5baa7ed8459b05d15e3f901e64d969d01e4906709eae1198"} Dec 09 09:06:24 crc kubenswrapper[4786]: W1209 09:06:24.812740 4786 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36cfdfac_acea_4bea_95f3_7221ffc3d94b.slice/crio-conmon-9adc39d9005af4ca652a8f746f3029e57881da15a65377308b58a4f5d65d568c.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36cfdfac_acea_4bea_95f3_7221ffc3d94b.slice/crio-conmon-9adc39d9005af4ca652a8f746f3029e57881da15a65377308b58a4f5d65d568c.scope: no such file or directory Dec 09 09:06:24 crc kubenswrapper[4786]: W1209 09:06:24.812830 4786 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36cfdfac_acea_4bea_95f3_7221ffc3d94b.slice/crio-9adc39d9005af4ca652a8f746f3029e57881da15a65377308b58a4f5d65d568c.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36cfdfac_acea_4bea_95f3_7221ffc3d94b.slice/crio-9adc39d9005af4ca652a8f746f3029e57881da15a65377308b58a4f5d65d568c.scope: no such file or directory Dec 09 09:06:24 crc kubenswrapper[4786]: W1209 09:06:24.812957 4786 watcher.go:93] Error while processing event 
("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7aaee81_f3fd_443a_94ef_8b143139138b.slice/crio-conmon-937b9f4d291af6d40a7832aa4dbfc0fd57b1258b504f6ad5aea827fab7ad3b54.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7aaee81_f3fd_443a_94ef_8b143139138b.slice/crio-conmon-937b9f4d291af6d40a7832aa4dbfc0fd57b1258b504f6ad5aea827fab7ad3b54.scope: no such file or directory Dec 09 09:06:24 crc kubenswrapper[4786]: W1209 09:06:24.818967 4786 watcher.go:93] Error while processing event ("/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7aaee81_f3fd_443a_94ef_8b143139138b.slice/crio-937b9f4d291af6d40a7832aa4dbfc0fd57b1258b504f6ad5aea827fab7ad3b54.scope": 0x40000100 == IN_CREATE|IN_ISDIR): inotify_add_watch /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7aaee81_f3fd_443a_94ef_8b143139138b.slice/crio-937b9f4d291af6d40a7832aa4dbfc0fd57b1258b504f6ad5aea827fab7ad3b54.scope: no such file or directory Dec 09 09:06:25 crc kubenswrapper[4786]: I1209 09:06:25.519259 4786 generic.go:334] "Generic (PLEG): container finished" podID="20f84ee9-c044-45eb-830c-94578b1af666" containerID="fa1833d56f68d97e014e0bd1ce0cc31a5b9659b9ff5fe2c8e803fb54db3dd24f" exitCode=137 Dec 09 09:06:25 crc kubenswrapper[4786]: I1209 09:06:25.519401 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5cc88d847f-95rbt" event={"ID":"20f84ee9-c044-45eb-830c-94578b1af666","Type":"ContainerDied","Data":"fa1833d56f68d97e014e0bd1ce0cc31a5b9659b9ff5fe2c8e803fb54db3dd24f"} Dec 09 09:06:25 crc kubenswrapper[4786]: I1209 09:06:25.522780 4786 generic.go:334] "Generic (PLEG): container finished" podID="8985b7e0-0907-4cf8-ba9d-f75c5ff668da" containerID="207f8077a13e9cc69a9aad5e73a895b3969ee5b203739d5e5dcb4e73f95b9625" exitCode=137 Dec 09 09:06:25 crc kubenswrapper[4786]: I1209 09:06:25.522850 4786 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/horizon-694bf999df-qrh2d" event={"ID":"8985b7e0-0907-4cf8-ba9d-f75c5ff668da","Type":"ContainerDied","Data":"207f8077a13e9cc69a9aad5e73a895b3969ee5b203739d5e5dcb4e73f95b9625"} Dec 09 09:06:25 crc kubenswrapper[4786]: I1209 09:06:25.914700 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="a7aaee81-f3fd-443a-94ef-8b143139138b" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.158:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 09 09:06:25 crc kubenswrapper[4786]: I1209 09:06:25.914734 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="a7aaee81-f3fd-443a-94ef-8b143139138b" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.158:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 09 09:06:25 crc kubenswrapper[4786]: I1209 09:06:25.915171 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Dec 09 09:06:25 crc kubenswrapper[4786]: I1209 09:06:25.915259 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Dec 09 09:06:26 crc kubenswrapper[4786]: I1209 09:06:26.533600 4786 generic.go:334] "Generic (PLEG): container finished" podID="94440ba5-1d12-4f3c-882c-2a648526482b" containerID="6710f148bcf88b5cd40cc5be4c6982c16d963062b86becf8821d37cc011cfcc0" exitCode=137 Dec 09 09:06:26 crc kubenswrapper[4786]: I1209 09:06:26.533670 4786 generic.go:334] "Generic (PLEG): container finished" podID="94440ba5-1d12-4f3c-882c-2a648526482b" containerID="b8fda5fc34e5673d58791476d1af07bc51566cb08eca80d891a56fe03f6a1a12" exitCode=137 Dec 09 09:06:26 crc kubenswrapper[4786]: I1209 09:06:26.533668 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f4489b979-86h4q" 
event={"ID":"94440ba5-1d12-4f3c-882c-2a648526482b","Type":"ContainerDied","Data":"6710f148bcf88b5cd40cc5be4c6982c16d963062b86becf8821d37cc011cfcc0"} Dec 09 09:06:26 crc kubenswrapper[4786]: I1209 09:06:26.533732 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f4489b979-86h4q" event={"ID":"94440ba5-1d12-4f3c-882c-2a648526482b","Type":"ContainerDied","Data":"b8fda5fc34e5673d58791476d1af07bc51566cb08eca80d891a56fe03f6a1a12"} Dec 09 09:06:27 crc kubenswrapper[4786]: E1209 09:06:27.471292 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.200:5001/podified-master-centos10/openstack-placement-api:watcher_latest" Dec 09 09:06:27 crc kubenswrapper[4786]: E1209 09:06:27.471922 4786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.200:5001/podified-master-centos10/openstack-placement-api:watcher_latest" Dec 09 09:06:27 crc kubenswrapper[4786]: E1209 09:06:27.472130 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:placement-db-sync,Image:38.102.83.200:5001/podified-master-centos10/openstack-placement-api:watcher_latest,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/placement,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:placement-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ss7ch,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42482,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
placement-db-sync-w9zct_openstack(39bcb4fa-81ad-4ec5-8168-efcb8da5ab61): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 09:06:27 crc kubenswrapper[4786]: E1209 09:06:27.473331 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/placement-db-sync-w9zct" podUID="39bcb4fa-81ad-4ec5-8168-efcb8da5ab61" Dec 09 09:06:27 crc kubenswrapper[4786]: I1209 09:06:27.595125 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lkwnh"] Dec 09 09:06:27 crc kubenswrapper[4786]: I1209 09:06:27.602334 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lkwnh" Dec 09 09:06:27 crc kubenswrapper[4786]: I1209 09:06:27.609200 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lkwnh"] Dec 09 09:06:27 crc kubenswrapper[4786]: I1209 09:06:27.773158 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4vhm\" (UniqueName: \"kubernetes.io/projected/656e7256-97a8-4036-b7b5-62c66bf06129-kube-api-access-t4vhm\") pod \"redhat-operators-lkwnh\" (UID: \"656e7256-97a8-4036-b7b5-62c66bf06129\") " pod="openshift-marketplace/redhat-operators-lkwnh" Dec 09 09:06:27 crc kubenswrapper[4786]: I1209 09:06:27.773248 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/656e7256-97a8-4036-b7b5-62c66bf06129-catalog-content\") pod \"redhat-operators-lkwnh\" (UID: \"656e7256-97a8-4036-b7b5-62c66bf06129\") " pod="openshift-marketplace/redhat-operators-lkwnh" Dec 09 09:06:27 crc kubenswrapper[4786]: I1209 09:06:27.773386 4786 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/656e7256-97a8-4036-b7b5-62c66bf06129-utilities\") pod \"redhat-operators-lkwnh\" (UID: \"656e7256-97a8-4036-b7b5-62c66bf06129\") " pod="openshift-marketplace/redhat-operators-lkwnh" Dec 09 09:06:27 crc kubenswrapper[4786]: E1209 09:06:27.828973 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.200:5001/podified-master-centos10/openstack-glance-api:watcher_latest" Dec 09 09:06:27 crc kubenswrapper[4786]: E1209 09:06:27.829059 4786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.200:5001/podified-master-centos10/openstack-glance-api:watcher_latest" Dec 09 09:06:27 crc kubenswrapper[4786]: E1209 09:06:27.829235 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:38.102.83.200:5001/podified-master-centos10/openstack-glance-api:watcher_latest,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nrsfh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-6m994_openstack(4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c): ErrImagePull: rpc error: code = Canceled desc = 
copying config: context canceled" logger="UnhandledError" Dec 09 09:06:27 crc kubenswrapper[4786]: E1209 09:06:27.830484 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-6m994" podUID="4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c" Dec 09 09:06:27 crc kubenswrapper[4786]: I1209 09:06:27.875774 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/656e7256-97a8-4036-b7b5-62c66bf06129-utilities\") pod \"redhat-operators-lkwnh\" (UID: \"656e7256-97a8-4036-b7b5-62c66bf06129\") " pod="openshift-marketplace/redhat-operators-lkwnh" Dec 09 09:06:27 crc kubenswrapper[4786]: I1209 09:06:27.875926 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4vhm\" (UniqueName: \"kubernetes.io/projected/656e7256-97a8-4036-b7b5-62c66bf06129-kube-api-access-t4vhm\") pod \"redhat-operators-lkwnh\" (UID: \"656e7256-97a8-4036-b7b5-62c66bf06129\") " pod="openshift-marketplace/redhat-operators-lkwnh" Dec 09 09:06:27 crc kubenswrapper[4786]: I1209 09:06:27.876350 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/656e7256-97a8-4036-b7b5-62c66bf06129-catalog-content\") pod \"redhat-operators-lkwnh\" (UID: \"656e7256-97a8-4036-b7b5-62c66bf06129\") " pod="openshift-marketplace/redhat-operators-lkwnh" Dec 09 09:06:27 crc kubenswrapper[4786]: I1209 09:06:27.876562 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/656e7256-97a8-4036-b7b5-62c66bf06129-utilities\") pod \"redhat-operators-lkwnh\" (UID: \"656e7256-97a8-4036-b7b5-62c66bf06129\") " pod="openshift-marketplace/redhat-operators-lkwnh" Dec 09 09:06:27 crc kubenswrapper[4786]: I1209 
09:06:27.876831 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/656e7256-97a8-4036-b7b5-62c66bf06129-catalog-content\") pod \"redhat-operators-lkwnh\" (UID: \"656e7256-97a8-4036-b7b5-62c66bf06129\") " pod="openshift-marketplace/redhat-operators-lkwnh" Dec 09 09:06:27 crc kubenswrapper[4786]: I1209 09:06:27.908264 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4vhm\" (UniqueName: \"kubernetes.io/projected/656e7256-97a8-4036-b7b5-62c66bf06129-kube-api-access-t4vhm\") pod \"redhat-operators-lkwnh\" (UID: \"656e7256-97a8-4036-b7b5-62c66bf06129\") " pod="openshift-marketplace/redhat-operators-lkwnh" Dec 09 09:06:27 crc kubenswrapper[4786]: I1209 09:06:27.923262 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lkwnh" Dec 09 09:06:28 crc kubenswrapper[4786]: E1209 09:06:28.553370 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.200:5001/podified-master-centos10/openstack-glance-api:watcher_latest\\\"\"" pod="openstack/glance-db-sync-6m994" podUID="4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c" Dec 09 09:06:30 crc kubenswrapper[4786]: I1209 09:06:30.916054 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="a7aaee81-f3fd-443a-94ef-8b143139138b" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.158:9322/\": dial tcp 10.217.0.158:9322: i/o timeout (Client.Timeout exceeded while awaiting headers)" Dec 09 09:06:30 crc kubenswrapper[4786]: I1209 09:06:30.916162 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="a7aaee81-f3fd-443a-94ef-8b143139138b" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.158:9322/\": context 
deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 09 09:06:35 crc kubenswrapper[4786]: I1209 09:06:35.917144 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="a7aaee81-f3fd-443a-94ef-8b143139138b" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.158:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 09 09:06:35 crc kubenswrapper[4786]: I1209 09:06:35.917197 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="a7aaee81-f3fd-443a-94ef-8b143139138b" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.158:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 09 09:06:37 crc kubenswrapper[4786]: I1209 09:06:37.695104 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-frjzv" Dec 09 09:06:37 crc kubenswrapper[4786]: I1209 09:06:37.708865 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"a7aaee81-f3fd-443a-94ef-8b143139138b","Type":"ContainerDied","Data":"e19fdf64960f35475135549575202c1c2d8d0bc966513229046a9e58d62cbb34"} Dec 09 09:06:37 crc kubenswrapper[4786]: I1209 09:06:37.708938 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e19fdf64960f35475135549575202c1c2d8d0bc966513229046a9e58d62cbb34" Dec 09 09:06:37 crc kubenswrapper[4786]: I1209 09:06:37.714690 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Dec 09 09:06:37 crc kubenswrapper[4786]: I1209 09:06:37.730957 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-frjzv" event={"ID":"36cfdfac-acea-4bea-95f3-7221ffc3d94b","Type":"ContainerDied","Data":"4d2f350c8facddb0b5d194d3e150780761bd1fd43b3a01264b14e3e736ee105e"} Dec 09 09:06:37 crc kubenswrapper[4786]: I1209 09:06:37.731042 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d2f350c8facddb0b5d194d3e150780761bd1fd43b3a01264b14e3e736ee105e" Dec 09 09:06:37 crc kubenswrapper[4786]: I1209 09:06:37.731150 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-frjzv" Dec 09 09:06:37 crc kubenswrapper[4786]: I1209 09:06:37.742768 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36cfdfac-acea-4bea-95f3-7221ffc3d94b-scripts\") pod \"36cfdfac-acea-4bea-95f3-7221ffc3d94b\" (UID: \"36cfdfac-acea-4bea-95f3-7221ffc3d94b\") " Dec 09 09:06:37 crc kubenswrapper[4786]: I1209 09:06:37.742982 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtglm\" (UniqueName: \"kubernetes.io/projected/36cfdfac-acea-4bea-95f3-7221ffc3d94b-kube-api-access-dtglm\") pod \"36cfdfac-acea-4bea-95f3-7221ffc3d94b\" (UID: \"36cfdfac-acea-4bea-95f3-7221ffc3d94b\") " Dec 09 09:06:37 crc kubenswrapper[4786]: I1209 09:06:37.743023 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/36cfdfac-acea-4bea-95f3-7221ffc3d94b-fernet-keys\") pod \"36cfdfac-acea-4bea-95f3-7221ffc3d94b\" (UID: \"36cfdfac-acea-4bea-95f3-7221ffc3d94b\") " Dec 09 09:06:37 crc kubenswrapper[4786]: I1209 09:06:37.743068 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/36cfdfac-acea-4bea-95f3-7221ffc3d94b-credential-keys\") pod \"36cfdfac-acea-4bea-95f3-7221ffc3d94b\" (UID: \"36cfdfac-acea-4bea-95f3-7221ffc3d94b\") " Dec 09 09:06:37 crc kubenswrapper[4786]: I1209 09:06:37.743155 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36cfdfac-acea-4bea-95f3-7221ffc3d94b-config-data\") pod \"36cfdfac-acea-4bea-95f3-7221ffc3d94b\" (UID: \"36cfdfac-acea-4bea-95f3-7221ffc3d94b\") " Dec 09 09:06:37 crc kubenswrapper[4786]: I1209 09:06:37.743250 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36cfdfac-acea-4bea-95f3-7221ffc3d94b-combined-ca-bundle\") pod \"36cfdfac-acea-4bea-95f3-7221ffc3d94b\" (UID: \"36cfdfac-acea-4bea-95f3-7221ffc3d94b\") " Dec 09 09:06:37 crc kubenswrapper[4786]: I1209 09:06:37.806849 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36cfdfac-acea-4bea-95f3-7221ffc3d94b-scripts" (OuterVolumeSpecName: "scripts") pod "36cfdfac-acea-4bea-95f3-7221ffc3d94b" (UID: "36cfdfac-acea-4bea-95f3-7221ffc3d94b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:06:37 crc kubenswrapper[4786]: I1209 09:06:37.808220 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36cfdfac-acea-4bea-95f3-7221ffc3d94b-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "36cfdfac-acea-4bea-95f3-7221ffc3d94b" (UID: "36cfdfac-acea-4bea-95f3-7221ffc3d94b"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:06:37 crc kubenswrapper[4786]: I1209 09:06:37.808613 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36cfdfac-acea-4bea-95f3-7221ffc3d94b-kube-api-access-dtglm" (OuterVolumeSpecName: "kube-api-access-dtglm") pod "36cfdfac-acea-4bea-95f3-7221ffc3d94b" (UID: "36cfdfac-acea-4bea-95f3-7221ffc3d94b"). InnerVolumeSpecName "kube-api-access-dtglm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:06:37 crc kubenswrapper[4786]: I1209 09:06:37.812585 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36cfdfac-acea-4bea-95f3-7221ffc3d94b-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "36cfdfac-acea-4bea-95f3-7221ffc3d94b" (UID: "36cfdfac-acea-4bea-95f3-7221ffc3d94b"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:06:37 crc kubenswrapper[4786]: I1209 09:06:37.819339 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36cfdfac-acea-4bea-95f3-7221ffc3d94b-config-data" (OuterVolumeSpecName: "config-data") pod "36cfdfac-acea-4bea-95f3-7221ffc3d94b" (UID: "36cfdfac-acea-4bea-95f3-7221ffc3d94b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:06:37 crc kubenswrapper[4786]: I1209 09:06:37.831754 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36cfdfac-acea-4bea-95f3-7221ffc3d94b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "36cfdfac-acea-4bea-95f3-7221ffc3d94b" (UID: "36cfdfac-acea-4bea-95f3-7221ffc3d94b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:06:37 crc kubenswrapper[4786]: I1209 09:06:37.845036 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7aaee81-f3fd-443a-94ef-8b143139138b-config-data\") pod \"a7aaee81-f3fd-443a-94ef-8b143139138b\" (UID: \"a7aaee81-f3fd-443a-94ef-8b143139138b\") " Dec 09 09:06:37 crc kubenswrapper[4786]: I1209 09:06:37.845172 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qsnkr\" (UniqueName: \"kubernetes.io/projected/a7aaee81-f3fd-443a-94ef-8b143139138b-kube-api-access-qsnkr\") pod \"a7aaee81-f3fd-443a-94ef-8b143139138b\" (UID: \"a7aaee81-f3fd-443a-94ef-8b143139138b\") " Dec 09 09:06:37 crc kubenswrapper[4786]: I1209 09:06:37.846000 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/a7aaee81-f3fd-443a-94ef-8b143139138b-custom-prometheus-ca\") pod \"a7aaee81-f3fd-443a-94ef-8b143139138b\" (UID: \"a7aaee81-f3fd-443a-94ef-8b143139138b\") " Dec 09 09:06:37 crc kubenswrapper[4786]: I1209 09:06:37.846092 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7aaee81-f3fd-443a-94ef-8b143139138b-combined-ca-bundle\") pod \"a7aaee81-f3fd-443a-94ef-8b143139138b\" (UID: \"a7aaee81-f3fd-443a-94ef-8b143139138b\") " Dec 09 09:06:37 crc kubenswrapper[4786]: I1209 09:06:37.846183 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7aaee81-f3fd-443a-94ef-8b143139138b-logs\") pod \"a7aaee81-f3fd-443a-94ef-8b143139138b\" (UID: \"a7aaee81-f3fd-443a-94ef-8b143139138b\") " Dec 09 09:06:37 crc kubenswrapper[4786]: I1209 09:06:37.846756 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/a7aaee81-f3fd-443a-94ef-8b143139138b-logs" (OuterVolumeSpecName: "logs") pod "a7aaee81-f3fd-443a-94ef-8b143139138b" (UID: "a7aaee81-f3fd-443a-94ef-8b143139138b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:06:37 crc kubenswrapper[4786]: I1209 09:06:37.847233 4786 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/36cfdfac-acea-4bea-95f3-7221ffc3d94b-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 09:06:37 crc kubenswrapper[4786]: I1209 09:06:37.847345 4786 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a7aaee81-f3fd-443a-94ef-8b143139138b-logs\") on node \"crc\" DevicePath \"\"" Dec 09 09:06:37 crc kubenswrapper[4786]: I1209 09:06:37.847440 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtglm\" (UniqueName: \"kubernetes.io/projected/36cfdfac-acea-4bea-95f3-7221ffc3d94b-kube-api-access-dtglm\") on node \"crc\" DevicePath \"\"" Dec 09 09:06:37 crc kubenswrapper[4786]: I1209 09:06:37.847515 4786 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/36cfdfac-acea-4bea-95f3-7221ffc3d94b-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 09 09:06:37 crc kubenswrapper[4786]: I1209 09:06:37.847616 4786 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/36cfdfac-acea-4bea-95f3-7221ffc3d94b-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 09 09:06:37 crc kubenswrapper[4786]: I1209 09:06:37.847675 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36cfdfac-acea-4bea-95f3-7221ffc3d94b-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 09:06:37 crc kubenswrapper[4786]: I1209 09:06:37.847733 4786 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/36cfdfac-acea-4bea-95f3-7221ffc3d94b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 09:06:37 crc kubenswrapper[4786]: I1209 09:06:37.849550 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7aaee81-f3fd-443a-94ef-8b143139138b-kube-api-access-qsnkr" (OuterVolumeSpecName: "kube-api-access-qsnkr") pod "a7aaee81-f3fd-443a-94ef-8b143139138b" (UID: "a7aaee81-f3fd-443a-94ef-8b143139138b"). InnerVolumeSpecName "kube-api-access-qsnkr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:06:37 crc kubenswrapper[4786]: I1209 09:06:37.882750 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7aaee81-f3fd-443a-94ef-8b143139138b-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "a7aaee81-f3fd-443a-94ef-8b143139138b" (UID: "a7aaee81-f3fd-443a-94ef-8b143139138b"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:06:37 crc kubenswrapper[4786]: I1209 09:06:37.886821 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7aaee81-f3fd-443a-94ef-8b143139138b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a7aaee81-f3fd-443a-94ef-8b143139138b" (UID: "a7aaee81-f3fd-443a-94ef-8b143139138b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:06:37 crc kubenswrapper[4786]: I1209 09:06:37.940456 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7aaee81-f3fd-443a-94ef-8b143139138b-config-data" (OuterVolumeSpecName: "config-data") pod "a7aaee81-f3fd-443a-94ef-8b143139138b" (UID: "a7aaee81-f3fd-443a-94ef-8b143139138b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:06:37 crc kubenswrapper[4786]: I1209 09:06:37.953951 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7aaee81-f3fd-443a-94ef-8b143139138b-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 09:06:37 crc kubenswrapper[4786]: I1209 09:06:37.954080 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qsnkr\" (UniqueName: \"kubernetes.io/projected/a7aaee81-f3fd-443a-94ef-8b143139138b-kube-api-access-qsnkr\") on node \"crc\" DevicePath \"\"" Dec 09 09:06:37 crc kubenswrapper[4786]: I1209 09:06:37.954106 4786 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/a7aaee81-f3fd-443a-94ef-8b143139138b-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Dec 09 09:06:37 crc kubenswrapper[4786]: I1209 09:06:37.954119 4786 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7aaee81-f3fd-443a-94ef-8b143139138b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 09:06:38 crc kubenswrapper[4786]: E1209 09:06:38.190753 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"placement-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.200:5001/podified-master-centos10/openstack-placement-api:watcher_latest\\\"\"" pod="openstack/placement-db-sync-w9zct" podUID="39bcb4fa-81ad-4ec5-8168-efcb8da5ab61" Dec 09 09:06:38 crc kubenswrapper[4786]: I1209 09:06:38.749237 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Dec 09 09:06:38 crc kubenswrapper[4786]: I1209 09:06:38.806120 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Dec 09 09:06:38 crc kubenswrapper[4786]: I1209 09:06:38.838537 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Dec 09 09:06:38 crc kubenswrapper[4786]: I1209 09:06:38.873976 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Dec 09 09:06:38 crc kubenswrapper[4786]: E1209 09:06:38.874601 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7aaee81-f3fd-443a-94ef-8b143139138b" containerName="watcher-api" Dec 09 09:06:38 crc kubenswrapper[4786]: I1209 09:06:38.874622 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7aaee81-f3fd-443a-94ef-8b143139138b" containerName="watcher-api" Dec 09 09:06:38 crc kubenswrapper[4786]: E1209 09:06:38.874637 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7aaee81-f3fd-443a-94ef-8b143139138b" containerName="watcher-api-log" Dec 09 09:06:38 crc kubenswrapper[4786]: I1209 09:06:38.874643 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7aaee81-f3fd-443a-94ef-8b143139138b" containerName="watcher-api-log" Dec 09 09:06:38 crc kubenswrapper[4786]: E1209 09:06:38.874672 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36cfdfac-acea-4bea-95f3-7221ffc3d94b" containerName="keystone-bootstrap" Dec 09 09:06:38 crc kubenswrapper[4786]: I1209 09:06:38.874680 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="36cfdfac-acea-4bea-95f3-7221ffc3d94b" containerName="keystone-bootstrap" Dec 09 09:06:38 crc kubenswrapper[4786]: I1209 09:06:38.874873 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="36cfdfac-acea-4bea-95f3-7221ffc3d94b" containerName="keystone-bootstrap" Dec 09 09:06:38 crc kubenswrapper[4786]: I1209 09:06:38.874891 4786 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="a7aaee81-f3fd-443a-94ef-8b143139138b" containerName="watcher-api-log" Dec 09 09:06:38 crc kubenswrapper[4786]: I1209 09:06:38.874902 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7aaee81-f3fd-443a-94ef-8b143139138b" containerName="watcher-api" Dec 09 09:06:38 crc kubenswrapper[4786]: I1209 09:06:38.876171 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Dec 09 09:06:38 crc kubenswrapper[4786]: I1209 09:06:38.880284 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-internal-svc" Dec 09 09:06:38 crc kubenswrapper[4786]: I1209 09:06:38.881160 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Dec 09 09:06:38 crc kubenswrapper[4786]: I1209 09:06:38.881397 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-public-svc" Dec 09 09:06:38 crc kubenswrapper[4786]: I1209 09:06:38.917405 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Dec 09 09:06:38 crc kubenswrapper[4786]: I1209 09:06:38.932771 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bc97f477f-xr7xf"] Dec 09 09:06:38 crc kubenswrapper[4786]: I1209 09:06:38.934527 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bc97f477f-xr7xf" Dec 09 09:06:38 crc kubenswrapper[4786]: I1209 09:06:38.939705 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Dec 09 09:06:38 crc kubenswrapper[4786]: I1209 09:06:38.939971 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 09 09:06:38 crc kubenswrapper[4786]: I1209 09:06:38.940117 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 09 09:06:38 crc kubenswrapper[4786]: I1209 09:06:38.940224 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Dec 09 09:06:38 crc kubenswrapper[4786]: I1209 09:06:38.940337 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 09 09:06:38 crc kubenswrapper[4786]: I1209 09:06:38.944878 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-zrhs7" Dec 09 09:06:38 crc kubenswrapper[4786]: I1209 09:06:38.948722 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bc97f477f-xr7xf"] Dec 09 09:06:38 crc kubenswrapper[4786]: I1209 09:06:38.990184 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/078ae2f6-b658-48a8-b4c2-cff5f3847bd3-config-data\") pod \"keystone-bc97f477f-xr7xf\" (UID: \"078ae2f6-b658-48a8-b4c2-cff5f3847bd3\") " pod="openstack/keystone-bc97f477f-xr7xf" Dec 09 09:06:38 crc kubenswrapper[4786]: I1209 09:06:38.990258 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d0cae5a-5b07-4046-9640-9734ea4e44c4-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"0d0cae5a-5b07-4046-9640-9734ea4e44c4\") " pod="openstack/watcher-api-0" 
Dec 09 09:06:38 crc kubenswrapper[4786]: I1209 09:06:38.990281 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/078ae2f6-b658-48a8-b4c2-cff5f3847bd3-combined-ca-bundle\") pod \"keystone-bc97f477f-xr7xf\" (UID: \"078ae2f6-b658-48a8-b4c2-cff5f3847bd3\") " pod="openstack/keystone-bc97f477f-xr7xf" Dec 09 09:06:38 crc kubenswrapper[4786]: I1209 09:06:38.990300 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d0cae5a-5b07-4046-9640-9734ea4e44c4-public-tls-certs\") pod \"watcher-api-0\" (UID: \"0d0cae5a-5b07-4046-9640-9734ea4e44c4\") " pod="openstack/watcher-api-0" Dec 09 09:06:38 crc kubenswrapper[4786]: I1209 09:06:38.990340 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/078ae2f6-b658-48a8-b4c2-cff5f3847bd3-internal-tls-certs\") pod \"keystone-bc97f477f-xr7xf\" (UID: \"078ae2f6-b658-48a8-b4c2-cff5f3847bd3\") " pod="openstack/keystone-bc97f477f-xr7xf" Dec 09 09:06:38 crc kubenswrapper[4786]: I1209 09:06:38.990357 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/0d0cae5a-5b07-4046-9640-9734ea4e44c4-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"0d0cae5a-5b07-4046-9640-9734ea4e44c4\") " pod="openstack/watcher-api-0" Dec 09 09:06:38 crc kubenswrapper[4786]: I1209 09:06:38.990377 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d0cae5a-5b07-4046-9640-9734ea4e44c4-config-data\") pod \"watcher-api-0\" (UID: \"0d0cae5a-5b07-4046-9640-9734ea4e44c4\") " pod="openstack/watcher-api-0" Dec 09 09:06:38 crc kubenswrapper[4786]: I1209 
09:06:38.990402 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/078ae2f6-b658-48a8-b4c2-cff5f3847bd3-scripts\") pod \"keystone-bc97f477f-xr7xf\" (UID: \"078ae2f6-b658-48a8-b4c2-cff5f3847bd3\") " pod="openstack/keystone-bc97f477f-xr7xf" Dec 09 09:06:38 crc kubenswrapper[4786]: I1209 09:06:38.990438 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/078ae2f6-b658-48a8-b4c2-cff5f3847bd3-fernet-keys\") pod \"keystone-bc97f477f-xr7xf\" (UID: \"078ae2f6-b658-48a8-b4c2-cff5f3847bd3\") " pod="openstack/keystone-bc97f477f-xr7xf" Dec 09 09:06:38 crc kubenswrapper[4786]: I1209 09:06:38.990498 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d0cae5a-5b07-4046-9640-9734ea4e44c4-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"0d0cae5a-5b07-4046-9640-9734ea4e44c4\") " pod="openstack/watcher-api-0" Dec 09 09:06:38 crc kubenswrapper[4786]: I1209 09:06:38.990520 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzjsg\" (UniqueName: \"kubernetes.io/projected/078ae2f6-b658-48a8-b4c2-cff5f3847bd3-kube-api-access-rzjsg\") pod \"keystone-bc97f477f-xr7xf\" (UID: \"078ae2f6-b658-48a8-b4c2-cff5f3847bd3\") " pod="openstack/keystone-bc97f477f-xr7xf" Dec 09 09:06:38 crc kubenswrapper[4786]: I1209 09:06:38.990544 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/078ae2f6-b658-48a8-b4c2-cff5f3847bd3-credential-keys\") pod \"keystone-bc97f477f-xr7xf\" (UID: \"078ae2f6-b658-48a8-b4c2-cff5f3847bd3\") " pod="openstack/keystone-bc97f477f-xr7xf" Dec 09 09:06:38 crc kubenswrapper[4786]: I1209 09:06:38.990574 4786 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/078ae2f6-b658-48a8-b4c2-cff5f3847bd3-public-tls-certs\") pod \"keystone-bc97f477f-xr7xf\" (UID: \"078ae2f6-b658-48a8-b4c2-cff5f3847bd3\") " pod="openstack/keystone-bc97f477f-xr7xf" Dec 09 09:06:38 crc kubenswrapper[4786]: I1209 09:06:38.990612 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j67m\" (UniqueName: \"kubernetes.io/projected/0d0cae5a-5b07-4046-9640-9734ea4e44c4-kube-api-access-5j67m\") pod \"watcher-api-0\" (UID: \"0d0cae5a-5b07-4046-9640-9734ea4e44c4\") " pod="openstack/watcher-api-0" Dec 09 09:06:38 crc kubenswrapper[4786]: I1209 09:06:38.990658 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d0cae5a-5b07-4046-9640-9734ea4e44c4-logs\") pod \"watcher-api-0\" (UID: \"0d0cae5a-5b07-4046-9640-9734ea4e44c4\") " pod="openstack/watcher-api-0" Dec 09 09:06:39 crc kubenswrapper[4786]: I1209 09:06:39.093057 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5j67m\" (UniqueName: \"kubernetes.io/projected/0d0cae5a-5b07-4046-9640-9734ea4e44c4-kube-api-access-5j67m\") pod \"watcher-api-0\" (UID: \"0d0cae5a-5b07-4046-9640-9734ea4e44c4\") " pod="openstack/watcher-api-0" Dec 09 09:06:39 crc kubenswrapper[4786]: I1209 09:06:39.093700 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d0cae5a-5b07-4046-9640-9734ea4e44c4-logs\") pod \"watcher-api-0\" (UID: \"0d0cae5a-5b07-4046-9640-9734ea4e44c4\") " pod="openstack/watcher-api-0" Dec 09 09:06:39 crc kubenswrapper[4786]: I1209 09:06:39.094224 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/0d0cae5a-5b07-4046-9640-9734ea4e44c4-logs\") pod \"watcher-api-0\" (UID: \"0d0cae5a-5b07-4046-9640-9734ea4e44c4\") " pod="openstack/watcher-api-0" Dec 09 09:06:39 crc kubenswrapper[4786]: I1209 09:06:39.094277 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/078ae2f6-b658-48a8-b4c2-cff5f3847bd3-config-data\") pod \"keystone-bc97f477f-xr7xf\" (UID: \"078ae2f6-b658-48a8-b4c2-cff5f3847bd3\") " pod="openstack/keystone-bc97f477f-xr7xf" Dec 09 09:06:39 crc kubenswrapper[4786]: I1209 09:06:39.095033 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d0cae5a-5b07-4046-9640-9734ea4e44c4-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"0d0cae5a-5b07-4046-9640-9734ea4e44c4\") " pod="openstack/watcher-api-0" Dec 09 09:06:39 crc kubenswrapper[4786]: I1209 09:06:39.095065 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d0cae5a-5b07-4046-9640-9734ea4e44c4-public-tls-certs\") pod \"watcher-api-0\" (UID: \"0d0cae5a-5b07-4046-9640-9734ea4e44c4\") " pod="openstack/watcher-api-0" Dec 09 09:06:39 crc kubenswrapper[4786]: I1209 09:06:39.095092 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/078ae2f6-b658-48a8-b4c2-cff5f3847bd3-combined-ca-bundle\") pod \"keystone-bc97f477f-xr7xf\" (UID: \"078ae2f6-b658-48a8-b4c2-cff5f3847bd3\") " pod="openstack/keystone-bc97f477f-xr7xf" Dec 09 09:06:39 crc kubenswrapper[4786]: I1209 09:06:39.095138 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/0d0cae5a-5b07-4046-9640-9734ea4e44c4-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"0d0cae5a-5b07-4046-9640-9734ea4e44c4\") " 
pod="openstack/watcher-api-0"
Dec 09 09:06:39 crc kubenswrapper[4786]: I1209 09:06:39.095160 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/078ae2f6-b658-48a8-b4c2-cff5f3847bd3-internal-tls-certs\") pod \"keystone-bc97f477f-xr7xf\" (UID: \"078ae2f6-b658-48a8-b4c2-cff5f3847bd3\") " pod="openstack/keystone-bc97f477f-xr7xf"
Dec 09 09:06:39 crc kubenswrapper[4786]: I1209 09:06:39.095178 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d0cae5a-5b07-4046-9640-9734ea4e44c4-config-data\") pod \"watcher-api-0\" (UID: \"0d0cae5a-5b07-4046-9640-9734ea4e44c4\") " pod="openstack/watcher-api-0"
Dec 09 09:06:39 crc kubenswrapper[4786]: I1209 09:06:39.095206 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/078ae2f6-b658-48a8-b4c2-cff5f3847bd3-scripts\") pod \"keystone-bc97f477f-xr7xf\" (UID: \"078ae2f6-b658-48a8-b4c2-cff5f3847bd3\") " pod="openstack/keystone-bc97f477f-xr7xf"
Dec 09 09:06:39 crc kubenswrapper[4786]: I1209 09:06:39.095233 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/078ae2f6-b658-48a8-b4c2-cff5f3847bd3-fernet-keys\") pod \"keystone-bc97f477f-xr7xf\" (UID: \"078ae2f6-b658-48a8-b4c2-cff5f3847bd3\") " pod="openstack/keystone-bc97f477f-xr7xf"
Dec 09 09:06:39 crc kubenswrapper[4786]: I1209 09:06:39.095269 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d0cae5a-5b07-4046-9640-9734ea4e44c4-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"0d0cae5a-5b07-4046-9640-9734ea4e44c4\") " pod="openstack/watcher-api-0"
Dec 09 09:06:39 crc kubenswrapper[4786]: I1209 09:06:39.095292 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzjsg\" (UniqueName: \"kubernetes.io/projected/078ae2f6-b658-48a8-b4c2-cff5f3847bd3-kube-api-access-rzjsg\") pod \"keystone-bc97f477f-xr7xf\" (UID: \"078ae2f6-b658-48a8-b4c2-cff5f3847bd3\") " pod="openstack/keystone-bc97f477f-xr7xf"
Dec 09 09:06:39 crc kubenswrapper[4786]: I1209 09:06:39.095323 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/078ae2f6-b658-48a8-b4c2-cff5f3847bd3-credential-keys\") pod \"keystone-bc97f477f-xr7xf\" (UID: \"078ae2f6-b658-48a8-b4c2-cff5f3847bd3\") " pod="openstack/keystone-bc97f477f-xr7xf"
Dec 09 09:06:39 crc kubenswrapper[4786]: I1209 09:06:39.095354 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/078ae2f6-b658-48a8-b4c2-cff5f3847bd3-public-tls-certs\") pod \"keystone-bc97f477f-xr7xf\" (UID: \"078ae2f6-b658-48a8-b4c2-cff5f3847bd3\") " pod="openstack/keystone-bc97f477f-xr7xf"
Dec 09 09:06:39 crc kubenswrapper[4786]: I1209 09:06:39.103112 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/078ae2f6-b658-48a8-b4c2-cff5f3847bd3-public-tls-certs\") pod \"keystone-bc97f477f-xr7xf\" (UID: \"078ae2f6-b658-48a8-b4c2-cff5f3847bd3\") " pod="openstack/keystone-bc97f477f-xr7xf"
Dec 09 09:06:39 crc kubenswrapper[4786]: I1209 09:06:39.111739 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/078ae2f6-b658-48a8-b4c2-cff5f3847bd3-scripts\") pod \"keystone-bc97f477f-xr7xf\" (UID: \"078ae2f6-b658-48a8-b4c2-cff5f3847bd3\") " pod="openstack/keystone-bc97f477f-xr7xf"
Dec 09 09:06:39 crc kubenswrapper[4786]: I1209 09:06:39.114487 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/078ae2f6-b658-48a8-b4c2-cff5f3847bd3-config-data\") pod \"keystone-bc97f477f-xr7xf\" (UID: \"078ae2f6-b658-48a8-b4c2-cff5f3847bd3\") " pod="openstack/keystone-bc97f477f-xr7xf"
Dec 09 09:06:39 crc kubenswrapper[4786]: I1209 09:06:39.118325 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d0cae5a-5b07-4046-9640-9734ea4e44c4-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"0d0cae5a-5b07-4046-9640-9734ea4e44c4\") " pod="openstack/watcher-api-0"
Dec 09 09:06:39 crc kubenswrapper[4786]: I1209 09:06:39.119632 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d0cae5a-5b07-4046-9640-9734ea4e44c4-config-data\") pod \"watcher-api-0\" (UID: \"0d0cae5a-5b07-4046-9640-9734ea4e44c4\") " pod="openstack/watcher-api-0"
Dec 09 09:06:39 crc kubenswrapper[4786]: I1209 09:06:39.122149 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/078ae2f6-b658-48a8-b4c2-cff5f3847bd3-combined-ca-bundle\") pod \"keystone-bc97f477f-xr7xf\" (UID: \"078ae2f6-b658-48a8-b4c2-cff5f3847bd3\") " pod="openstack/keystone-bc97f477f-xr7xf"
Dec 09 09:06:39 crc kubenswrapper[4786]: I1209 09:06:39.122940 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/078ae2f6-b658-48a8-b4c2-cff5f3847bd3-credential-keys\") pod \"keystone-bc97f477f-xr7xf\" (UID: \"078ae2f6-b658-48a8-b4c2-cff5f3847bd3\") " pod="openstack/keystone-bc97f477f-xr7xf"
Dec 09 09:06:39 crc kubenswrapper[4786]: I1209 09:06:39.126099 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d0cae5a-5b07-4046-9640-9734ea4e44c4-public-tls-certs\") pod \"watcher-api-0\" (UID: \"0d0cae5a-5b07-4046-9640-9734ea4e44c4\") " pod="openstack/watcher-api-0"
Dec 09 09:06:39 crc kubenswrapper[4786]: I1209 09:06:39.139965 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d0cae5a-5b07-4046-9640-9734ea4e44c4-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"0d0cae5a-5b07-4046-9640-9734ea4e44c4\") " pod="openstack/watcher-api-0"
Dec 09 09:06:39 crc kubenswrapper[4786]: I1209 09:06:39.150028 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/0d0cae5a-5b07-4046-9640-9734ea4e44c4-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"0d0cae5a-5b07-4046-9640-9734ea4e44c4\") " pod="openstack/watcher-api-0"
Dec 09 09:06:39 crc kubenswrapper[4786]: I1209 09:06:39.154127 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5j67m\" (UniqueName: \"kubernetes.io/projected/0d0cae5a-5b07-4046-9640-9734ea4e44c4-kube-api-access-5j67m\") pod \"watcher-api-0\" (UID: \"0d0cae5a-5b07-4046-9640-9734ea4e44c4\") " pod="openstack/watcher-api-0"
Dec 09 09:06:39 crc kubenswrapper[4786]: I1209 09:06:39.161413 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/078ae2f6-b658-48a8-b4c2-cff5f3847bd3-internal-tls-certs\") pod \"keystone-bc97f477f-xr7xf\" (UID: \"078ae2f6-b658-48a8-b4c2-cff5f3847bd3\") " pod="openstack/keystone-bc97f477f-xr7xf"
Dec 09 09:06:39 crc kubenswrapper[4786]: I1209 09:06:39.175400 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/078ae2f6-b658-48a8-b4c2-cff5f3847bd3-fernet-keys\") pod \"keystone-bc97f477f-xr7xf\" (UID: \"078ae2f6-b658-48a8-b4c2-cff5f3847bd3\") " pod="openstack/keystone-bc97f477f-xr7xf"
Dec 09 09:06:39 crc kubenswrapper[4786]: I1209 09:06:39.177100 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzjsg\" (UniqueName: \"kubernetes.io/projected/078ae2f6-b658-48a8-b4c2-cff5f3847bd3-kube-api-access-rzjsg\") pod \"keystone-bc97f477f-xr7xf\" (UID: \"078ae2f6-b658-48a8-b4c2-cff5f3847bd3\") " pod="openstack/keystone-bc97f477f-xr7xf"
Dec 09 09:06:39 crc kubenswrapper[4786]: I1209 09:06:39.236071 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7aaee81-f3fd-443a-94ef-8b143139138b" path="/var/lib/kubelet/pods/a7aaee81-f3fd-443a-94ef-8b143139138b/volumes"
Dec 09 09:06:39 crc kubenswrapper[4786]: I1209 09:06:39.256201 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0"
Dec 09 09:06:39 crc kubenswrapper[4786]: I1209 09:06:39.268149 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bc97f477f-xr7xf"
Dec 09 09:06:40 crc kubenswrapper[4786]: I1209 09:06:40.918635 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="a7aaee81-f3fd-443a-94ef-8b143139138b" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.158:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 09 09:06:40 crc kubenswrapper[4786]: I1209 09:06:40.918814 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="a7aaee81-f3fd-443a-94ef-8b143139138b" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.158:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 09 09:06:42 crc kubenswrapper[4786]: E1209 09:06:42.104012 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.200:5001/podified-master-centos10/openstack-cinder-api:watcher_latest"
Dec 09 09:06:42 crc kubenswrapper[4786]: E1209 09:06:42.104534 4786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.200:5001/podified-master-centos10/openstack-cinder-api:watcher_latest"
Dec 09 09:06:42 crc kubenswrapper[4786]: E1209 09:06:42.104723 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:38.102.83.200:5001/podified-master-centos10/openstack-cinder-api:watcher_latest,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kz2nh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-9ddvk_openstack(8c28b549-bfff-47f7-b262-c3203bd88cb1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 09 09:06:42 crc kubenswrapper[4786]: E1209 09:06:42.106274 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-9ddvk" podUID="8c28b549-bfff-47f7-b262-c3203bd88cb1"
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.190479 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5cc88d847f-95rbt"
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.214014 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-694bf999df-qrh2d"
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.225531 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f4489b979-86h4q"
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.307378 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/20f84ee9-c044-45eb-830c-94578b1af666-horizon-secret-key\") pod \"20f84ee9-c044-45eb-830c-94578b1af666\" (UID: \"20f84ee9-c044-45eb-830c-94578b1af666\") "
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.307454 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/20f84ee9-c044-45eb-830c-94578b1af666-scripts\") pod \"20f84ee9-c044-45eb-830c-94578b1af666\" (UID: \"20f84ee9-c044-45eb-830c-94578b1af666\") "
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.308114 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20f84ee9-c044-45eb-830c-94578b1af666-logs\") pod \"20f84ee9-c044-45eb-830c-94578b1af666\" (UID: \"20f84ee9-c044-45eb-830c-94578b1af666\") "
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.308205 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/20f84ee9-c044-45eb-830c-94578b1af666-config-data\") pod \"20f84ee9-c044-45eb-830c-94578b1af666\" (UID: \"20f84ee9-c044-45eb-830c-94578b1af666\") "
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.308302 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9l8h\" (UniqueName: \"kubernetes.io/projected/20f84ee9-c044-45eb-830c-94578b1af666-kube-api-access-j9l8h\") pod \"20f84ee9-c044-45eb-830c-94578b1af666\" (UID: \"20f84ee9-c044-45eb-830c-94578b1af666\") "
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.308877 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20f84ee9-c044-45eb-830c-94578b1af666-logs" (OuterVolumeSpecName: "logs") pod "20f84ee9-c044-45eb-830c-94578b1af666" (UID: "20f84ee9-c044-45eb-830c-94578b1af666"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.312712 4786 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20f84ee9-c044-45eb-830c-94578b1af666-logs\") on node \"crc\" DevicePath \"\""
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.316182 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20f84ee9-c044-45eb-830c-94578b1af666-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "20f84ee9-c044-45eb-830c-94578b1af666" (UID: "20f84ee9-c044-45eb-830c-94578b1af666"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.316737 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20f84ee9-c044-45eb-830c-94578b1af666-kube-api-access-j9l8h" (OuterVolumeSpecName: "kube-api-access-j9l8h") pod "20f84ee9-c044-45eb-830c-94578b1af666" (UID: "20f84ee9-c044-45eb-830c-94578b1af666"). InnerVolumeSpecName "kube-api-access-j9l8h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.338922 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20f84ee9-c044-45eb-830c-94578b1af666-config-data" (OuterVolumeSpecName: "config-data") pod "20f84ee9-c044-45eb-830c-94578b1af666" (UID: "20f84ee9-c044-45eb-830c-94578b1af666"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.343860 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20f84ee9-c044-45eb-830c-94578b1af666-scripts" (OuterVolumeSpecName: "scripts") pod "20f84ee9-c044-45eb-830c-94578b1af666" (UID: "20f84ee9-c044-45eb-830c-94578b1af666"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.414166 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8985b7e0-0907-4cf8-ba9d-f75c5ff668da-scripts\") pod \"8985b7e0-0907-4cf8-ba9d-f75c5ff668da\" (UID: \"8985b7e0-0907-4cf8-ba9d-f75c5ff668da\") "
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.414353 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/94440ba5-1d12-4f3c-882c-2a648526482b-horizon-secret-key\") pod \"94440ba5-1d12-4f3c-882c-2a648526482b\" (UID: \"94440ba5-1d12-4f3c-882c-2a648526482b\") "
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.414458 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/94440ba5-1d12-4f3c-882c-2a648526482b-config-data\") pod \"94440ba5-1d12-4f3c-882c-2a648526482b\" (UID: \"94440ba5-1d12-4f3c-882c-2a648526482b\") "
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.414504 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8985b7e0-0907-4cf8-ba9d-f75c5ff668da-horizon-secret-key\") pod \"8985b7e0-0907-4cf8-ba9d-f75c5ff668da\" (UID: \"8985b7e0-0907-4cf8-ba9d-f75c5ff668da\") "
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.414533 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/94440ba5-1d12-4f3c-882c-2a648526482b-scripts\") pod \"94440ba5-1d12-4f3c-882c-2a648526482b\" (UID: \"94440ba5-1d12-4f3c-882c-2a648526482b\") "
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.414575 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8985b7e0-0907-4cf8-ba9d-f75c5ff668da-config-data\") pod \"8985b7e0-0907-4cf8-ba9d-f75c5ff668da\" (UID: \"8985b7e0-0907-4cf8-ba9d-f75c5ff668da\") "
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.414613 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhq6q\" (UniqueName: \"kubernetes.io/projected/8985b7e0-0907-4cf8-ba9d-f75c5ff668da-kube-api-access-hhq6q\") pod \"8985b7e0-0907-4cf8-ba9d-f75c5ff668da\" (UID: \"8985b7e0-0907-4cf8-ba9d-f75c5ff668da\") "
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.414659 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lhd2\" (UniqueName: \"kubernetes.io/projected/94440ba5-1d12-4f3c-882c-2a648526482b-kube-api-access-7lhd2\") pod \"94440ba5-1d12-4f3c-882c-2a648526482b\" (UID: \"94440ba5-1d12-4f3c-882c-2a648526482b\") "
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.414764 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8985b7e0-0907-4cf8-ba9d-f75c5ff668da-logs\") pod \"8985b7e0-0907-4cf8-ba9d-f75c5ff668da\" (UID: \"8985b7e0-0907-4cf8-ba9d-f75c5ff668da\") "
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.414809 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94440ba5-1d12-4f3c-882c-2a648526482b-logs\") pod \"94440ba5-1d12-4f3c-882c-2a648526482b\" (UID: \"94440ba5-1d12-4f3c-882c-2a648526482b\") "
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.415305 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/20f84ee9-c044-45eb-830c-94578b1af666-config-data\") on node \"crc\" DevicePath \"\""
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.415337 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9l8h\" (UniqueName: \"kubernetes.io/projected/20f84ee9-c044-45eb-830c-94578b1af666-kube-api-access-j9l8h\") on node \"crc\" DevicePath \"\""
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.415354 4786 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/20f84ee9-c044-45eb-830c-94578b1af666-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.415369 4786 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/20f84ee9-c044-45eb-830c-94578b1af666-scripts\") on node \"crc\" DevicePath \"\""
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.415848 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94440ba5-1d12-4f3c-882c-2a648526482b-logs" (OuterVolumeSpecName: "logs") pod "94440ba5-1d12-4f3c-882c-2a648526482b" (UID: "94440ba5-1d12-4f3c-882c-2a648526482b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.416446 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8985b7e0-0907-4cf8-ba9d-f75c5ff668da-logs" (OuterVolumeSpecName: "logs") pod "8985b7e0-0907-4cf8-ba9d-f75c5ff668da" (UID: "8985b7e0-0907-4cf8-ba9d-f75c5ff668da"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.419612 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8985b7e0-0907-4cf8-ba9d-f75c5ff668da-kube-api-access-hhq6q" (OuterVolumeSpecName: "kube-api-access-hhq6q") pod "8985b7e0-0907-4cf8-ba9d-f75c5ff668da" (UID: "8985b7e0-0907-4cf8-ba9d-f75c5ff668da"). InnerVolumeSpecName "kube-api-access-hhq6q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.420174 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94440ba5-1d12-4f3c-882c-2a648526482b-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "94440ba5-1d12-4f3c-882c-2a648526482b" (UID: "94440ba5-1d12-4f3c-882c-2a648526482b"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.421575 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8985b7e0-0907-4cf8-ba9d-f75c5ff668da-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "8985b7e0-0907-4cf8-ba9d-f75c5ff668da" (UID: "8985b7e0-0907-4cf8-ba9d-f75c5ff668da"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.430941 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94440ba5-1d12-4f3c-882c-2a648526482b-kube-api-access-7lhd2" (OuterVolumeSpecName: "kube-api-access-7lhd2") pod "94440ba5-1d12-4f3c-882c-2a648526482b" (UID: "94440ba5-1d12-4f3c-882c-2a648526482b"). InnerVolumeSpecName "kube-api-access-7lhd2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.440697 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8985b7e0-0907-4cf8-ba9d-f75c5ff668da-scripts" (OuterVolumeSpecName: "scripts") pod "8985b7e0-0907-4cf8-ba9d-f75c5ff668da" (UID: "8985b7e0-0907-4cf8-ba9d-f75c5ff668da"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.447416 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94440ba5-1d12-4f3c-882c-2a648526482b-config-data" (OuterVolumeSpecName: "config-data") pod "94440ba5-1d12-4f3c-882c-2a648526482b" (UID: "94440ba5-1d12-4f3c-882c-2a648526482b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.447705 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8985b7e0-0907-4cf8-ba9d-f75c5ff668da-config-data" (OuterVolumeSpecName: "config-data") pod "8985b7e0-0907-4cf8-ba9d-f75c5ff668da" (UID: "8985b7e0-0907-4cf8-ba9d-f75c5ff668da"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.448632 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94440ba5-1d12-4f3c-882c-2a648526482b-scripts" (OuterVolumeSpecName: "scripts") pod "94440ba5-1d12-4f3c-882c-2a648526482b" (UID: "94440ba5-1d12-4f3c-882c-2a648526482b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.521103 4786 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8985b7e0-0907-4cf8-ba9d-f75c5ff668da-logs\") on node \"crc\" DevicePath \"\""
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.521167 4786 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94440ba5-1d12-4f3c-882c-2a648526482b-logs\") on node \"crc\" DevicePath \"\""
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.521212 4786 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/8985b7e0-0907-4cf8-ba9d-f75c5ff668da-scripts\") on node \"crc\" DevicePath \"\""
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.521249 4786 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/94440ba5-1d12-4f3c-882c-2a648526482b-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.521267 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/94440ba5-1d12-4f3c-882c-2a648526482b-config-data\") on node \"crc\" DevicePath \"\""
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.521346 4786 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/8985b7e0-0907-4cf8-ba9d-f75c5ff668da-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.521359 4786 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/94440ba5-1d12-4f3c-882c-2a648526482b-scripts\") on node \"crc\" DevicePath \"\""
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.521372 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8985b7e0-0907-4cf8-ba9d-f75c5ff668da-config-data\") on node \"crc\" DevicePath \"\""
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.521383 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hhq6q\" (UniqueName: \"kubernetes.io/projected/8985b7e0-0907-4cf8-ba9d-f75c5ff668da-kube-api-access-hhq6q\") on node \"crc\" DevicePath \"\""
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.521396 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lhd2\" (UniqueName: \"kubernetes.io/projected/94440ba5-1d12-4f3c-882c-2a648526482b-kube-api-access-7lhd2\") on node \"crc\" DevicePath \"\""
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.567661 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-hzbbl"]
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.800764 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5cc88d847f-95rbt" event={"ID":"20f84ee9-c044-45eb-830c-94578b1af666","Type":"ContainerDied","Data":"900875be69fec6140abe3a8a49f292996ec10c1fa8ab1bb1446798094aa34368"}
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.800822 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5cc88d847f-95rbt"
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.800854 4786 scope.go:117] "RemoveContainer" containerID="fa1833d56f68d97e014e0bd1ce0cc31a5b9659b9ff5fe2c8e803fb54db3dd24f"
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.805407 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-694bf999df-qrh2d" event={"ID":"8985b7e0-0907-4cf8-ba9d-f75c5ff668da","Type":"ContainerDied","Data":"42e4f330e24a09c339b62a9dba5b0d4ee6bea7ef044103adaea047d9b855ca65"}
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.805535 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-694bf999df-qrh2d"
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.807530 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-f4489b979-86h4q" event={"ID":"94440ba5-1d12-4f3c-882c-2a648526482b","Type":"ContainerDied","Data":"caec82397aa4663ca4990b3d340401d56bb902387122efb6d5c800acec74c017"}
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.807565 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-f4489b979-86h4q"
Dec 09 09:06:42 crc kubenswrapper[4786]: E1209 09:06:42.809685 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.200:5001/podified-master-centos10/openstack-cinder-api:watcher_latest\\\"\"" pod="openstack/cinder-db-sync-9ddvk" podUID="8c28b549-bfff-47f7-b262-c3203bd88cb1"
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.894780 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-694bf999df-qrh2d"]
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.910410 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-694bf999df-qrh2d"]
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.920957 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-f4489b979-86h4q"]
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.933560 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-f4489b979-86h4q"]
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.958153 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5cc88d847f-95rbt"]
Dec 09 09:06:42 crc kubenswrapper[4786]: I1209 09:06:42.967267 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5cc88d847f-95rbt"]
Dec 09 09:06:43 crc kubenswrapper[4786]: I1209 09:06:43.200842 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20f84ee9-c044-45eb-830c-94578b1af666" path="/var/lib/kubelet/pods/20f84ee9-c044-45eb-830c-94578b1af666/volumes"
Dec 09 09:06:43 crc kubenswrapper[4786]: I1209 09:06:43.201682 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8985b7e0-0907-4cf8-ba9d-f75c5ff668da" path="/var/lib/kubelet/pods/8985b7e0-0907-4cf8-ba9d-f75c5ff668da/volumes"
Dec 09 09:06:43 crc kubenswrapper[4786]: I1209 09:06:43.202393 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94440ba5-1d12-4f3c-882c-2a648526482b" path="/var/lib/kubelet/pods/94440ba5-1d12-4f3c-882c-2a648526482b/volumes"
Dec 09 09:06:43 crc kubenswrapper[4786]: E1209 09:06:43.799483 4786 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/sg-core:latest"
Dec 09 09:06:43 crc kubenswrapper[4786]: E1209 09:06:43.800129 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:sg-core,Image:quay.io/openstack-k8s-operators/sg-core:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:sg-core-conf-yaml,ReadOnly:false,MountPath:/etc/sg-core.conf.yaml,SubPath:sg-core.conf.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mlbfj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(f61efc97-8443-4381-85d3-7b6dd9e5b132): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 09 09:06:44 crc kubenswrapper[4786]: I1209 09:06:44.077791 4786 scope.go:117] "RemoveContainer" containerID="207f8077a13e9cc69a9aad5e73a895b3969ee5b203739d5e5dcb4e73f95b9625"
Dec 09 09:06:44 crc kubenswrapper[4786]: I1209 09:06:44.429499 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bc97f477f-xr7xf"]
Dec 09 09:06:44 crc kubenswrapper[4786]: I1209 09:06:44.564648 4786 scope.go:117] "RemoveContainer" containerID="6710f148bcf88b5cd40cc5be4c6982c16d963062b86becf8821d37cc011cfcc0"
Dec 09 09:06:44 crc kubenswrapper[4786]: W1209 09:06:44.569030 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod078ae2f6_b658_48a8_b4c2_cff5f3847bd3.slice/crio-a085cf761036f9ac9cc24b5afd7254db8727fd172b812aa26213cfccb4ba3204 WatchSource:0}: Error finding container a085cf761036f9ac9cc24b5afd7254db8727fd172b812aa26213cfccb4ba3204: Status 404 returned error can't find the container with id a085cf761036f9ac9cc24b5afd7254db8727fd172b812aa26213cfccb4ba3204
Dec 09 09:06:44 crc kubenswrapper[4786]: I1209 09:06:44.742839 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"]
Dec 09 09:06:44 crc kubenswrapper[4786]: I1209 09:06:44.754854 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lkwnh"]
Dec 09 09:06:44 crc kubenswrapper[4786]: I1209 09:06:44.757938 4786 scope.go:117] "RemoveContainer" containerID="b8fda5fc34e5673d58791476d1af07bc51566cb08eca80d891a56fe03f6a1a12"
Dec 09 09:06:44 crc kubenswrapper[4786]: W1209 09:06:44.766351 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d0cae5a_5b07_4046_9640_9734ea4e44c4.slice/crio-f3db85d6a45738da514657675ae4689007893ee73fa52c9849987b9379b32011 WatchSource:0}: Error finding container f3db85d6a45738da514657675ae4689007893ee73fa52c9849987b9379b32011: Status 404 returned error can't find the container with id f3db85d6a45738da514657675ae4689007893ee73fa52c9849987b9379b32011
Dec 09 09:06:44 crc kubenswrapper[4786]: W1209 09:06:44.776444 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod656e7256_97a8_4036_b7b5_62c66bf06129.slice/crio-f3df0a9dee6f29339aacd6ebe0c819f5305e7c1cd9afaf1b8f239ed634fb9f84 WatchSource:0}: Error finding container f3df0a9dee6f29339aacd6ebe0c819f5305e7c1cd9afaf1b8f239ed634fb9f84: Status 404 returned error can't find the container with id f3df0a9dee6f29339aacd6ebe0c819f5305e7c1cd9afaf1b8f239ed634fb9f84
Dec 09 09:06:44 crc kubenswrapper[4786]: I1209 09:06:44.833876 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"0d0cae5a-5b07-4046-9640-9734ea4e44c4","Type":"ContainerStarted","Data":"f3db85d6a45738da514657675ae4689007893ee73fa52c9849987b9379b32011"}
Dec 09 09:06:44 crc kubenswrapper[4786]: I1209 09:06:44.839622 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lkwnh" event={"ID":"656e7256-97a8-4036-b7b5-62c66bf06129","Type":"ContainerStarted","Data":"f3df0a9dee6f29339aacd6ebe0c819f5305e7c1cd9afaf1b8f239ed634fb9f84"}
Dec 09 09:06:44 crc kubenswrapper[4786]: I1209 09:06:44.842340 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bc97f477f-xr7xf" event={"ID":"078ae2f6-b658-48a8-b4c2-cff5f3847bd3","Type":"ContainerStarted","Data":"a085cf761036f9ac9cc24b5afd7254db8727fd172b812aa26213cfccb4ba3204"}
Dec 09 09:06:44 crc kubenswrapper[4786]: I1209 09:06:44.844347 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-hzbbl" event={"ID":"1fe0b12b-06a5-45ae-8a51-073fb093cd54","Type":"ContainerStarted","Data":"e89c6020a450c4a6b67bf0b0224de9ad7c70944131d665069740cf1c065c2701"}
Dec 09 09:06:45 crc kubenswrapper[4786]: I1209 09:06:45.861290 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"0d0cae5a-5b07-4046-9640-9734ea4e44c4","Type":"ContainerStarted","Data":"e43e853b48d51e02f6db38007dac71c116ded8e7a45b0b42faca899a079aa429"}
Dec 09 09:06:45 crc kubenswrapper[4786]: I1209 09:06:45.863890 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-jsl5j" event={"ID":"e3272921-6cce-4156-bed5-758d1a8a38f5","Type":"ContainerStarted","Data":"1499ce1ebbfcf214c8fe60281c634979c38386982715dfd174d486060d76bad5"}
Dec 09 09:06:45 crc kubenswrapper[4786]: I1209 09:06:45.866154 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"8d930edc-ed97-418e-a47a-60f38b734a50","Type":"ContainerStarted","Data":"445493b9101c88a21404c9cea095363a6c1996992f5a837e0ec6f3c4561c0a24"}
Dec 09 09:06:45 crc kubenswrapper[4786]: I1209 09:06:45.868306 4786 generic.go:334] "Generic (PLEG): container finished" podID="656e7256-97a8-4036-b7b5-62c66bf06129" containerID="0b4de3677cf0c7237a2e571c18a4a548e87acc9675703432ec92e12110e9b3eb" exitCode=0
Dec 09 09:06:45 crc kubenswrapper[4786]: I1209 09:06:45.868371 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lkwnh" event={"ID":"656e7256-97a8-4036-b7b5-62c66bf06129","Type":"ContainerDied","Data":"0b4de3677cf0c7237a2e571c18a4a548e87acc9675703432ec92e12110e9b3eb"}
Dec 09 09:06:45 crc kubenswrapper[4786]: I1209 09:06:45.871211 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bc97f477f-xr7xf" event={"ID":"078ae2f6-b658-48a8-b4c2-cff5f3847bd3","Type":"ContainerStarted","Data":"0fb288594b2a61b8e22e0d8d176dc3379dcc8e1b6c2b34e905d0515f57d4f4c1"}
Dec 09 09:06:45 crc kubenswrapper[4786]: I1209 09:06:45.873248 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-hzbbl" event={"ID":"1fe0b12b-06a5-45ae-8a51-073fb093cd54","Type":"ContainerStarted","Data":"f9276b12c55a2e685ffdba9d25335ebeecbd4fe0e9092bd2d5e27acc02fe6a70"}
Dec 09 09:06:45 crc kubenswrapper[4786]: I1209 09:06:45.874935 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"6c233b45-5e1c-4c8c-a3ba-d71a89838114","Type":"ContainerStarted","Data":"5999d740510f33e9fecbb9e808281e7a8fb9332109e867e6ea9f48dc8647235f"}
Dec 09 09:06:45 crc kubenswrapper[4786]: I1209 09:06:45.896275 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-jsl5j" podStartSLOduration=13.588618553 podStartE2EDuration="1m2.896242013s"
podCreationTimestamp="2025-12-09 09:05:43 +0000 UTC" firstStartedPulling="2025-12-09 09:05:54.849550846 +0000 UTC m=+1320.733172072" lastFinishedPulling="2025-12-09 09:06:44.157174306 +0000 UTC m=+1370.040795532" observedRunningTime="2025-12-09 09:06:45.884497081 +0000 UTC m=+1371.768118327" watchObservedRunningTime="2025-12-09 09:06:45.896242013 +0000 UTC m=+1371.779863249" Dec 09 09:06:46 crc kubenswrapper[4786]: I1209 09:06:46.905411 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-bc97f477f-xr7xf" Dec 09 09:06:46 crc kubenswrapper[4786]: I1209 09:06:46.991030 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bc97f477f-xr7xf" podStartSLOduration=8.990985619 podStartE2EDuration="8.990985619s" podCreationTimestamp="2025-12-09 09:06:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:06:46.966008062 +0000 UTC m=+1372.849629288" watchObservedRunningTime="2025-12-09 09:06:46.990985619 +0000 UTC m=+1372.874606845" Dec 09 09:06:47 crc kubenswrapper[4786]: I1209 09:06:47.024183 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=6.117251225 podStartE2EDuration="52.024136079s" podCreationTimestamp="2025-12-09 09:05:55 +0000 UTC" firstStartedPulling="2025-12-09 09:05:58.251876808 +0000 UTC m=+1324.135498034" lastFinishedPulling="2025-12-09 09:06:44.158761652 +0000 UTC m=+1370.042382888" observedRunningTime="2025-12-09 09:06:46.988016903 +0000 UTC m=+1372.871638129" watchObservedRunningTime="2025-12-09 09:06:47.024136079 +0000 UTC m=+1372.907757325" Dec 09 09:06:47 crc kubenswrapper[4786]: I1209 09:06:47.031036 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-hzbbl" podStartSLOduration=47.031011491 podStartE2EDuration="47.031011491s" 
podCreationTimestamp="2025-12-09 09:06:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:06:47.011326872 +0000 UTC m=+1372.894948088" watchObservedRunningTime="2025-12-09 09:06:47.031011491 +0000 UTC m=+1372.914632737" Dec 09 09:06:47 crc kubenswrapper[4786]: I1209 09:06:47.045168 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=6.190918898 podStartE2EDuration="52.045142147s" podCreationTimestamp="2025-12-09 09:05:55 +0000 UTC" firstStartedPulling="2025-12-09 09:05:58.286164503 +0000 UTC m=+1324.169785729" lastFinishedPulling="2025-12-09 09:06:44.140387752 +0000 UTC m=+1370.024008978" observedRunningTime="2025-12-09 09:06:47.034671903 +0000 UTC m=+1372.918293139" watchObservedRunningTime="2025-12-09 09:06:47.045142147 +0000 UTC m=+1372.928763373" Dec 09 09:06:48 crc kubenswrapper[4786]: I1209 09:06:48.932623 4786 generic.go:334] "Generic (PLEG): container finished" podID="330dece7-bbfd-4d11-a979-b001581e8efe" containerID="40f339e015d1848c2fc6bd112414ca260b45c03a9f05b2d019c828279a983372" exitCode=137 Dec 09 09:06:48 crc kubenswrapper[4786]: I1209 09:06:48.932758 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bb95c86c4-js54w" event={"ID":"330dece7-bbfd-4d11-a979-b001581e8efe","Type":"ContainerDied","Data":"40f339e015d1848c2fc6bd112414ca260b45c03a9f05b2d019c828279a983372"} Dec 09 09:06:49 crc kubenswrapper[4786]: I1209 09:06:49.964831 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6m994" event={"ID":"4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c","Type":"ContainerStarted","Data":"078da1a366c6f949783c1769fdafd18365b94f3cca28caf9b0b6b300905306da"} Dec 09 09:06:49 crc kubenswrapper[4786]: I1209 09:06:49.970919 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" 
event={"ID":"0d0cae5a-5b07-4046-9640-9734ea4e44c4","Type":"ContainerStarted","Data":"73fb120fb9743f81a5d06ade3af77008fc900997bf741c6be727c4d995198079"} Dec 09 09:06:49 crc kubenswrapper[4786]: I1209 09:06:49.972321 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Dec 09 09:06:49 crc kubenswrapper[4786]: I1209 09:06:49.992620 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-6m994" podStartSLOduration=17.934360577 podStartE2EDuration="1m9.992597324s" podCreationTimestamp="2025-12-09 09:05:40 +0000 UTC" firstStartedPulling="2025-12-09 09:05:54.953468424 +0000 UTC m=+1320.837089650" lastFinishedPulling="2025-12-09 09:06:47.011705171 +0000 UTC m=+1372.895326397" observedRunningTime="2025-12-09 09:06:49.989165347 +0000 UTC m=+1375.872786573" watchObservedRunningTime="2025-12-09 09:06:49.992597324 +0000 UTC m=+1375.876218550" Dec 09 09:06:50 crc kubenswrapper[4786]: I1209 09:06:50.035393 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=12.033179169 podStartE2EDuration="12.033179169s" podCreationTimestamp="2025-12-09 09:06:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:06:50.020519767 +0000 UTC m=+1375.904141003" watchObservedRunningTime="2025-12-09 09:06:50.033179169 +0000 UTC m=+1375.916800395" Dec 09 09:06:50 crc kubenswrapper[4786]: I1209 09:06:50.249988 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7bb95c86c4-js54w" Dec 09 09:06:50 crc kubenswrapper[4786]: I1209 09:06:50.379706 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/330dece7-bbfd-4d11-a979-b001581e8efe-combined-ca-bundle\") pod \"330dece7-bbfd-4d11-a979-b001581e8efe\" (UID: \"330dece7-bbfd-4d11-a979-b001581e8efe\") " Dec 09 09:06:50 crc kubenswrapper[4786]: I1209 09:06:50.379811 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cr4pj\" (UniqueName: \"kubernetes.io/projected/330dece7-bbfd-4d11-a979-b001581e8efe-kube-api-access-cr4pj\") pod \"330dece7-bbfd-4d11-a979-b001581e8efe\" (UID: \"330dece7-bbfd-4d11-a979-b001581e8efe\") " Dec 09 09:06:50 crc kubenswrapper[4786]: I1209 09:06:50.379982 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/330dece7-bbfd-4d11-a979-b001581e8efe-config-data\") pod \"330dece7-bbfd-4d11-a979-b001581e8efe\" (UID: \"330dece7-bbfd-4d11-a979-b001581e8efe\") " Dec 09 09:06:50 crc kubenswrapper[4786]: I1209 09:06:50.380120 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/330dece7-bbfd-4d11-a979-b001581e8efe-logs\") pod \"330dece7-bbfd-4d11-a979-b001581e8efe\" (UID: \"330dece7-bbfd-4d11-a979-b001581e8efe\") " Dec 09 09:06:50 crc kubenswrapper[4786]: I1209 09:06:50.380210 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/330dece7-bbfd-4d11-a979-b001581e8efe-horizon-secret-key\") pod \"330dece7-bbfd-4d11-a979-b001581e8efe\" (UID: \"330dece7-bbfd-4d11-a979-b001581e8efe\") " Dec 09 09:06:50 crc kubenswrapper[4786]: I1209 09:06:50.380342 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/330dece7-bbfd-4d11-a979-b001581e8efe-scripts\") pod \"330dece7-bbfd-4d11-a979-b001581e8efe\" (UID: \"330dece7-bbfd-4d11-a979-b001581e8efe\") " Dec 09 09:06:50 crc kubenswrapper[4786]: I1209 09:06:50.380379 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/330dece7-bbfd-4d11-a979-b001581e8efe-horizon-tls-certs\") pod \"330dece7-bbfd-4d11-a979-b001581e8efe\" (UID: \"330dece7-bbfd-4d11-a979-b001581e8efe\") " Dec 09 09:06:50 crc kubenswrapper[4786]: I1209 09:06:50.382643 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/330dece7-bbfd-4d11-a979-b001581e8efe-logs" (OuterVolumeSpecName: "logs") pod "330dece7-bbfd-4d11-a979-b001581e8efe" (UID: "330dece7-bbfd-4d11-a979-b001581e8efe"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:06:50 crc kubenswrapper[4786]: I1209 09:06:50.391608 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/330dece7-bbfd-4d11-a979-b001581e8efe-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "330dece7-bbfd-4d11-a979-b001581e8efe" (UID: "330dece7-bbfd-4d11-a979-b001581e8efe"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:06:50 crc kubenswrapper[4786]: I1209 09:06:50.392007 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/330dece7-bbfd-4d11-a979-b001581e8efe-kube-api-access-cr4pj" (OuterVolumeSpecName: "kube-api-access-cr4pj") pod "330dece7-bbfd-4d11-a979-b001581e8efe" (UID: "330dece7-bbfd-4d11-a979-b001581e8efe"). InnerVolumeSpecName "kube-api-access-cr4pj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:06:50 crc kubenswrapper[4786]: I1209 09:06:50.411870 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/330dece7-bbfd-4d11-a979-b001581e8efe-scripts" (OuterVolumeSpecName: "scripts") pod "330dece7-bbfd-4d11-a979-b001581e8efe" (UID: "330dece7-bbfd-4d11-a979-b001581e8efe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:06:50 crc kubenswrapper[4786]: I1209 09:06:50.415373 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/330dece7-bbfd-4d11-a979-b001581e8efe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "330dece7-bbfd-4d11-a979-b001581e8efe" (UID: "330dece7-bbfd-4d11-a979-b001581e8efe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:06:50 crc kubenswrapper[4786]: I1209 09:06:50.433327 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/330dece7-bbfd-4d11-a979-b001581e8efe-config-data" (OuterVolumeSpecName: "config-data") pod "330dece7-bbfd-4d11-a979-b001581e8efe" (UID: "330dece7-bbfd-4d11-a979-b001581e8efe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:06:50 crc kubenswrapper[4786]: I1209 09:06:50.449786 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/330dece7-bbfd-4d11-a979-b001581e8efe-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "330dece7-bbfd-4d11-a979-b001581e8efe" (UID: "330dece7-bbfd-4d11-a979-b001581e8efe"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:06:50 crc kubenswrapper[4786]: I1209 09:06:50.484309 4786 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/330dece7-bbfd-4d11-a979-b001581e8efe-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 09 09:06:50 crc kubenswrapper[4786]: I1209 09:06:50.484373 4786 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/330dece7-bbfd-4d11-a979-b001581e8efe-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 09:06:50 crc kubenswrapper[4786]: I1209 09:06:50.484388 4786 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/330dece7-bbfd-4d11-a979-b001581e8efe-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 09 09:06:50 crc kubenswrapper[4786]: I1209 09:06:50.484397 4786 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/330dece7-bbfd-4d11-a979-b001581e8efe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 09:06:50 crc kubenswrapper[4786]: I1209 09:06:50.484410 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cr4pj\" (UniqueName: \"kubernetes.io/projected/330dece7-bbfd-4d11-a979-b001581e8efe-kube-api-access-cr4pj\") on node \"crc\" DevicePath \"\"" Dec 09 09:06:50 crc kubenswrapper[4786]: I1209 09:06:50.484428 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/330dece7-bbfd-4d11-a979-b001581e8efe-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 09:06:50 crc kubenswrapper[4786]: I1209 09:06:50.484451 4786 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/330dece7-bbfd-4d11-a979-b001581e8efe-logs\") on node \"crc\" DevicePath \"\"" Dec 09 09:06:50 crc kubenswrapper[4786]: I1209 09:06:50.988167 4786 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-w9zct" event={"ID":"39bcb4fa-81ad-4ec5-8168-efcb8da5ab61","Type":"ContainerStarted","Data":"3eb95625856726ad6f930e9ecbff06e8aee8033ad5525bd79883b1c025bff226"} Dec 09 09:06:50 crc kubenswrapper[4786]: I1209 09:06:50.991701 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lkwnh" event={"ID":"656e7256-97a8-4036-b7b5-62c66bf06129","Type":"ContainerStarted","Data":"3a5e59e31eeea971b14a496ed30c481cb2ed4ad45a7db32403d50b6a81fa628d"} Dec 09 09:06:51 crc kubenswrapper[4786]: I1209 09:06:51.002660 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7bb95c86c4-js54w" event={"ID":"330dece7-bbfd-4d11-a979-b001581e8efe","Type":"ContainerDied","Data":"1fda65527e147d5938f0e57493df717449fa4e5dae7ff55560b08c74c9673eb0"} Dec 09 09:06:51 crc kubenswrapper[4786]: I1209 09:06:51.002416 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7bb95c86c4-js54w" Dec 09 09:06:51 crc kubenswrapper[4786]: I1209 09:06:51.002730 4786 scope.go:117] "RemoveContainer" containerID="adb7ce8720775c0f5baa7ed8459b05d15e3f901e64d969d01e4906709eae1198" Dec 09 09:06:51 crc kubenswrapper[4786]: I1209 09:06:51.034441 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-w9zct" podStartSLOduration=3.046685558 podStartE2EDuration="1m17.034384679s" podCreationTimestamp="2025-12-09 09:05:34 +0000 UTC" firstStartedPulling="2025-12-09 09:05:35.858444916 +0000 UTC m=+1301.742066142" lastFinishedPulling="2025-12-09 09:06:49.846144027 +0000 UTC m=+1375.729765263" observedRunningTime="2025-12-09 09:06:51.02499735 +0000 UTC m=+1376.908618576" watchObservedRunningTime="2025-12-09 09:06:51.034384679 +0000 UTC m=+1376.918005905" Dec 09 09:06:51 crc kubenswrapper[4786]: I1209 09:06:51.052836 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/watcher-applier-0" Dec 09 09:06:51 crc kubenswrapper[4786]: I1209 09:06:51.095486 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7bb95c86c4-js54w"] Dec 09 09:06:51 crc kubenswrapper[4786]: I1209 09:06:51.108241 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7bb95c86c4-js54w"] Dec 09 09:06:51 crc kubenswrapper[4786]: I1209 09:06:51.216525 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="330dece7-bbfd-4d11-a979-b001581e8efe" path="/var/lib/kubelet/pods/330dece7-bbfd-4d11-a979-b001581e8efe/volumes" Dec 09 09:06:51 crc kubenswrapper[4786]: I1209 09:06:51.243452 4786 scope.go:117] "RemoveContainer" containerID="40f339e015d1848c2fc6bd112414ca260b45c03a9f05b2d019c828279a983372" Dec 09 09:06:52 crc kubenswrapper[4786]: I1209 09:06:52.027690 4786 generic.go:334] "Generic (PLEG): container finished" podID="656e7256-97a8-4036-b7b5-62c66bf06129" containerID="3a5e59e31eeea971b14a496ed30c481cb2ed4ad45a7db32403d50b6a81fa628d" exitCode=0 Dec 09 09:06:52 crc kubenswrapper[4786]: I1209 09:06:52.027786 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lkwnh" event={"ID":"656e7256-97a8-4036-b7b5-62c66bf06129","Type":"ContainerDied","Data":"3a5e59e31eeea971b14a496ed30c481cb2ed4ad45a7db32403d50b6a81fa628d"} Dec 09 09:06:52 crc kubenswrapper[4786]: I1209 09:06:52.032711 4786 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 09 09:06:53 crc kubenswrapper[4786]: I1209 09:06:53.174206 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Dec 09 09:06:54 crc kubenswrapper[4786]: I1209 09:06:54.175491 4786 generic.go:334] "Generic (PLEG): container finished" podID="8d930edc-ed97-418e-a47a-60f38b734a50" containerID="445493b9101c88a21404c9cea095363a6c1996992f5a837e0ec6f3c4561c0a24" exitCode=1 Dec 09 09:06:54 crc kubenswrapper[4786]: I1209 09:06:54.175572 4786 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"8d930edc-ed97-418e-a47a-60f38b734a50","Type":"ContainerDied","Data":"445493b9101c88a21404c9cea095363a6c1996992f5a837e0ec6f3c4561c0a24"} Dec 09 09:06:54 crc kubenswrapper[4786]: I1209 09:06:54.177166 4786 scope.go:117] "RemoveContainer" containerID="445493b9101c88a21404c9cea095363a6c1996992f5a837e0ec6f3c4561c0a24" Dec 09 09:06:54 crc kubenswrapper[4786]: I1209 09:06:54.257751 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Dec 09 09:06:56 crc kubenswrapper[4786]: I1209 09:06:56.052464 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Dec 09 09:06:56 crc kubenswrapper[4786]: I1209 09:06:56.094003 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Dec 09 09:06:56 crc kubenswrapper[4786]: I1209 09:06:56.159707 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Dec 09 09:06:56 crc kubenswrapper[4786]: I1209 09:06:56.159764 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Dec 09 09:06:56 crc kubenswrapper[4786]: I1209 09:06:56.159949 4786 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0" Dec 09 09:06:56 crc kubenswrapper[4786]: I1209 09:06:56.160019 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Dec 09 09:06:56 crc kubenswrapper[4786]: I1209 09:06:56.244210 4786 generic.go:334] "Generic (PLEG): container finished" podID="e3272921-6cce-4156-bed5-758d1a8a38f5" containerID="1499ce1ebbfcf214c8fe60281c634979c38386982715dfd174d486060d76bad5" exitCode=0 Dec 09 09:06:56 crc kubenswrapper[4786]: I1209 09:06:56.244361 4786 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/barbican-db-sync-jsl5j" event={"ID":"e3272921-6cce-4156-bed5-758d1a8a38f5","Type":"ContainerDied","Data":"1499ce1ebbfcf214c8fe60281c634979c38386982715dfd174d486060d76bad5"} Dec 09 09:06:56 crc kubenswrapper[4786]: I1209 09:06:56.250920 4786 generic.go:334] "Generic (PLEG): container finished" podID="39bcb4fa-81ad-4ec5-8168-efcb8da5ab61" containerID="3eb95625856726ad6f930e9ecbff06e8aee8033ad5525bd79883b1c025bff226" exitCode=0 Dec 09 09:06:56 crc kubenswrapper[4786]: I1209 09:06:56.250960 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-w9zct" event={"ID":"39bcb4fa-81ad-4ec5-8168-efcb8da5ab61","Type":"ContainerDied","Data":"3eb95625856726ad6f930e9ecbff06e8aee8033ad5525bd79883b1c025bff226"} Dec 09 09:06:56 crc kubenswrapper[4786]: I1209 09:06:56.317210 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Dec 09 09:06:57 crc kubenswrapper[4786]: I1209 09:06:57.661939 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-jsl5j" Dec 09 09:06:57 crc kubenswrapper[4786]: I1209 09:06:57.785924 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ccsg\" (UniqueName: \"kubernetes.io/projected/e3272921-6cce-4156-bed5-758d1a8a38f5-kube-api-access-9ccsg\") pod \"e3272921-6cce-4156-bed5-758d1a8a38f5\" (UID: \"e3272921-6cce-4156-bed5-758d1a8a38f5\") " Dec 09 09:06:57 crc kubenswrapper[4786]: I1209 09:06:57.787127 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e3272921-6cce-4156-bed5-758d1a8a38f5-db-sync-config-data\") pod \"e3272921-6cce-4156-bed5-758d1a8a38f5\" (UID: \"e3272921-6cce-4156-bed5-758d1a8a38f5\") " Dec 09 09:06:57 crc kubenswrapper[4786]: I1209 09:06:57.787356 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3272921-6cce-4156-bed5-758d1a8a38f5-combined-ca-bundle\") pod \"e3272921-6cce-4156-bed5-758d1a8a38f5\" (UID: \"e3272921-6cce-4156-bed5-758d1a8a38f5\") " Dec 09 09:06:57 crc kubenswrapper[4786]: I1209 09:06:57.796587 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3272921-6cce-4156-bed5-758d1a8a38f5-kube-api-access-9ccsg" (OuterVolumeSpecName: "kube-api-access-9ccsg") pod "e3272921-6cce-4156-bed5-758d1a8a38f5" (UID: "e3272921-6cce-4156-bed5-758d1a8a38f5"). InnerVolumeSpecName "kube-api-access-9ccsg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:06:57 crc kubenswrapper[4786]: I1209 09:06:57.810589 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3272921-6cce-4156-bed5-758d1a8a38f5-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "e3272921-6cce-4156-bed5-758d1a8a38f5" (UID: "e3272921-6cce-4156-bed5-758d1a8a38f5"). 
InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:06:57 crc kubenswrapper[4786]: I1209 09:06:57.841530 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3272921-6cce-4156-bed5-758d1a8a38f5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e3272921-6cce-4156-bed5-758d1a8a38f5" (UID: "e3272921-6cce-4156-bed5-758d1a8a38f5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:06:57 crc kubenswrapper[4786]: I1209 09:06:57.891776 4786 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3272921-6cce-4156-bed5-758d1a8a38f5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 09:06:57 crc kubenswrapper[4786]: I1209 09:06:57.891818 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ccsg\" (UniqueName: \"kubernetes.io/projected/e3272921-6cce-4156-bed5-758d1a8a38f5-kube-api-access-9ccsg\") on node \"crc\" DevicePath \"\"" Dec 09 09:06:57 crc kubenswrapper[4786]: I1209 09:06:57.891832 4786 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e3272921-6cce-4156-bed5-758d1a8a38f5-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 09:06:57 crc kubenswrapper[4786]: I1209 09:06:57.953542 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-w9zct" Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.098559 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39bcb4fa-81ad-4ec5-8168-efcb8da5ab61-scripts\") pod \"39bcb4fa-81ad-4ec5-8168-efcb8da5ab61\" (UID: \"39bcb4fa-81ad-4ec5-8168-efcb8da5ab61\") " Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.099196 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39bcb4fa-81ad-4ec5-8168-efcb8da5ab61-logs\") pod \"39bcb4fa-81ad-4ec5-8168-efcb8da5ab61\" (UID: \"39bcb4fa-81ad-4ec5-8168-efcb8da5ab61\") " Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.099347 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39bcb4fa-81ad-4ec5-8168-efcb8da5ab61-config-data\") pod \"39bcb4fa-81ad-4ec5-8168-efcb8da5ab61\" (UID: \"39bcb4fa-81ad-4ec5-8168-efcb8da5ab61\") " Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.099401 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39bcb4fa-81ad-4ec5-8168-efcb8da5ab61-combined-ca-bundle\") pod \"39bcb4fa-81ad-4ec5-8168-efcb8da5ab61\" (UID: \"39bcb4fa-81ad-4ec5-8168-efcb8da5ab61\") " Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.099527 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ss7ch\" (UniqueName: \"kubernetes.io/projected/39bcb4fa-81ad-4ec5-8168-efcb8da5ab61-kube-api-access-ss7ch\") pod \"39bcb4fa-81ad-4ec5-8168-efcb8da5ab61\" (UID: \"39bcb4fa-81ad-4ec5-8168-efcb8da5ab61\") " Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.100915 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/39bcb4fa-81ad-4ec5-8168-efcb8da5ab61-logs" (OuterVolumeSpecName: "logs") pod "39bcb4fa-81ad-4ec5-8168-efcb8da5ab61" (UID: "39bcb4fa-81ad-4ec5-8168-efcb8da5ab61"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.117047 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39bcb4fa-81ad-4ec5-8168-efcb8da5ab61-scripts" (OuterVolumeSpecName: "scripts") pod "39bcb4fa-81ad-4ec5-8168-efcb8da5ab61" (UID: "39bcb4fa-81ad-4ec5-8168-efcb8da5ab61"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.117164 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39bcb4fa-81ad-4ec5-8168-efcb8da5ab61-kube-api-access-ss7ch" (OuterVolumeSpecName: "kube-api-access-ss7ch") pod "39bcb4fa-81ad-4ec5-8168-efcb8da5ab61" (UID: "39bcb4fa-81ad-4ec5-8168-efcb8da5ab61"). InnerVolumeSpecName "kube-api-access-ss7ch". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 09:06:58 crc kubenswrapper[4786]: E1209 09:06:58.131798 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"sg-core\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="f61efc97-8443-4381-85d3-7b6dd9e5b132"
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.169129 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39bcb4fa-81ad-4ec5-8168-efcb8da5ab61-config-data" (OuterVolumeSpecName: "config-data") pod "39bcb4fa-81ad-4ec5-8168-efcb8da5ab61" (UID: "39bcb4fa-81ad-4ec5-8168-efcb8da5ab61"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.186027 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39bcb4fa-81ad-4ec5-8168-efcb8da5ab61-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "39bcb4fa-81ad-4ec5-8168-efcb8da5ab61" (UID: "39bcb4fa-81ad-4ec5-8168-efcb8da5ab61"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.203603 4786 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/39bcb4fa-81ad-4ec5-8168-efcb8da5ab61-logs\") on node \"crc\" DevicePath \"\""
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.203657 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/39bcb4fa-81ad-4ec5-8168-efcb8da5ab61-config-data\") on node \"crc\" DevicePath \"\""
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.203671 4786 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/39bcb4fa-81ad-4ec5-8168-efcb8da5ab61-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.203688 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ss7ch\" (UniqueName: \"kubernetes.io/projected/39bcb4fa-81ad-4ec5-8168-efcb8da5ab61-kube-api-access-ss7ch\") on node \"crc\" DevicePath \"\""
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.203703 4786 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/39bcb4fa-81ad-4ec5-8168-efcb8da5ab61-scripts\") on node \"crc\" DevicePath \"\""
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.307273 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-jsl5j" event={"ID":"e3272921-6cce-4156-bed5-758d1a8a38f5","Type":"ContainerDied","Data":"a75a5e17733ca428624c5c6a954de48b07fc89f65ea8efb818f4071188456b99"}
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.307336 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a75a5e17733ca428624c5c6a954de48b07fc89f65ea8efb818f4071188456b99"
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.307508 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-jsl5j"
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.318787 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-w9zct" event={"ID":"39bcb4fa-81ad-4ec5-8168-efcb8da5ab61","Type":"ContainerDied","Data":"4764cd5f08dffb192bdb35ea256c3203fd2e65c78ac7ac0b896d5dda640ee0bb"}
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.318856 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4764cd5f08dffb192bdb35ea256c3203fd2e65c78ac7ac0b896d5dda640ee0bb"
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.318972 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-w9zct"
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.336587 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"8d930edc-ed97-418e-a47a-60f38b734a50","Type":"ContainerStarted","Data":"a560b8ce4c6949793bd95b3c67bc632fa0c176608dc24c790c22d5ee77043340"}
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.355043 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f61efc97-8443-4381-85d3-7b6dd9e5b132","Type":"ContainerStarted","Data":"b4d1a330526ee60509c7ca997f933fd2d6009b68ce24d2d289e4d812c0db1525"}
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.355325 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f61efc97-8443-4381-85d3-7b6dd9e5b132" containerName="ceilometer-central-agent" containerID="cri-o://f505c423f86bd2d7755c8086eddc837313dd1a9d693a10fd132f29ab969f967a" gracePeriod=30
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.355411 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.355559 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f61efc97-8443-4381-85d3-7b6dd9e5b132" containerName="proxy-httpd" containerID="cri-o://b4d1a330526ee60509c7ca997f933fd2d6009b68ce24d2d289e4d812c0db1525" gracePeriod=30
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.355666 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f61efc97-8443-4381-85d3-7b6dd9e5b132" containerName="ceilometer-notification-agent" containerID="cri-o://28ffefa0be25980cd96ffd22439421c64f1ce69523a06e10f2a56403350dbbf3" gracePeriod=30
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.362679 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lkwnh" event={"ID":"656e7256-97a8-4036-b7b5-62c66bf06129","Type":"ContainerStarted","Data":"ee190bcaee5b63121750529bb4b7d7c113572b7fd7d83479c45ee5897fd544a7"}
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.453797 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lkwnh" podStartSLOduration=20.699284836 podStartE2EDuration="31.453761884s" podCreationTimestamp="2025-12-09 09:06:27 +0000 UTC" firstStartedPulling="2025-12-09 09:06:46.998864745 +0000 UTC m=+1372.882485971" lastFinishedPulling="2025-12-09 09:06:57.753341793 +0000 UTC m=+1383.636963019" observedRunningTime="2025-12-09 09:06:58.445573881 +0000 UTC m=+1384.329195127" watchObservedRunningTime="2025-12-09 09:06:58.453761884 +0000 UTC m=+1384.337383120"
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.543329 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-7ccfbc9bd6-76jl9"]
Dec 09 09:06:58 crc kubenswrapper[4786]: E1209 09:06:58.544789 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="330dece7-bbfd-4d11-a979-b001581e8efe" containerName="horizon"
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.544819 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="330dece7-bbfd-4d11-a979-b001581e8efe" containerName="horizon"
Dec 09 09:06:58 crc kubenswrapper[4786]: E1209 09:06:58.544838 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39bcb4fa-81ad-4ec5-8168-efcb8da5ab61" containerName="placement-db-sync"
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.544848 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="39bcb4fa-81ad-4ec5-8168-efcb8da5ab61" containerName="placement-db-sync"
Dec 09 09:06:58 crc kubenswrapper[4786]: E1209 09:06:58.544872 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="330dece7-bbfd-4d11-a979-b001581e8efe" containerName="horizon-log"
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.544881 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="330dece7-bbfd-4d11-a979-b001581e8efe" containerName="horizon-log"
Dec 09 09:06:58 crc kubenswrapper[4786]: E1209 09:06:58.544902 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8985b7e0-0907-4cf8-ba9d-f75c5ff668da" containerName="horizon"
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.544909 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="8985b7e0-0907-4cf8-ba9d-f75c5ff668da" containerName="horizon"
Dec 09 09:06:58 crc kubenswrapper[4786]: E1209 09:06:58.544954 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3272921-6cce-4156-bed5-758d1a8a38f5" containerName="barbican-db-sync"
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.544966 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3272921-6cce-4156-bed5-758d1a8a38f5" containerName="barbican-db-sync"
Dec 09 09:06:58 crc kubenswrapper[4786]: E1209 09:06:58.544985 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20f84ee9-c044-45eb-830c-94578b1af666" containerName="horizon"
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.544993 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="20f84ee9-c044-45eb-830c-94578b1af666" containerName="horizon"
Dec 09 09:06:58 crc kubenswrapper[4786]: E1209 09:06:58.545010 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94440ba5-1d12-4f3c-882c-2a648526482b" containerName="horizon-log"
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.545017 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="94440ba5-1d12-4f3c-882c-2a648526482b" containerName="horizon-log"
Dec 09 09:06:58 crc kubenswrapper[4786]: E1209 09:06:58.545059 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94440ba5-1d12-4f3c-882c-2a648526482b" containerName="horizon"
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.545068 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="94440ba5-1d12-4f3c-882c-2a648526482b" containerName="horizon"
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.545355 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="8985b7e0-0907-4cf8-ba9d-f75c5ff668da" containerName="horizon"
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.545390 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="94440ba5-1d12-4f3c-882c-2a648526482b" containerName="horizon-log"
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.545401 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="39bcb4fa-81ad-4ec5-8168-efcb8da5ab61" containerName="placement-db-sync"
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.545411 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="20f84ee9-c044-45eb-830c-94578b1af666" containerName="horizon"
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.545443 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="94440ba5-1d12-4f3c-882c-2a648526482b" containerName="horizon"
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.545457 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="330dece7-bbfd-4d11-a979-b001581e8efe" containerName="horizon-log"
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.545475 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="330dece7-bbfd-4d11-a979-b001581e8efe" containerName="horizon"
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.545483 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3272921-6cce-4156-bed5-758d1a8a38f5" containerName="barbican-db-sync"
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.547118 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7ccfbc9bd6-76jl9"
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.559866 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.560301 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.560347 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.560804 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-z9zbd"
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.565823 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.587998 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7ccfbc9bd6-76jl9"]
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.841472 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-76847f447c-24chp"]
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.852707 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cec6ba1-ef2c-4cf9-881b-fdc57687c17c-public-tls-certs\") pod \"placement-7ccfbc9bd6-76jl9\" (UID: \"2cec6ba1-ef2c-4cf9-881b-fdc57687c17c\") " pod="openstack/placement-7ccfbc9bd6-76jl9"
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.852802 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cec6ba1-ef2c-4cf9-881b-fdc57687c17c-scripts\") pod \"placement-7ccfbc9bd6-76jl9\" (UID: \"2cec6ba1-ef2c-4cf9-881b-fdc57687c17c\") " pod="openstack/placement-7ccfbc9bd6-76jl9"
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.852826 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cec6ba1-ef2c-4cf9-881b-fdc57687c17c-internal-tls-certs\") pod \"placement-7ccfbc9bd6-76jl9\" (UID: \"2cec6ba1-ef2c-4cf9-881b-fdc57687c17c\") " pod="openstack/placement-7ccfbc9bd6-76jl9"
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.852873 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2cec6ba1-ef2c-4cf9-881b-fdc57687c17c-logs\") pod \"placement-7ccfbc9bd6-76jl9\" (UID: \"2cec6ba1-ef2c-4cf9-881b-fdc57687c17c\") " pod="openstack/placement-7ccfbc9bd6-76jl9"
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.852906 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5zw9\" (UniqueName: \"kubernetes.io/projected/2cec6ba1-ef2c-4cf9-881b-fdc57687c17c-kube-api-access-s5zw9\") pod \"placement-7ccfbc9bd6-76jl9\" (UID: \"2cec6ba1-ef2c-4cf9-881b-fdc57687c17c\") " pod="openstack/placement-7ccfbc9bd6-76jl9"
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.852964 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cec6ba1-ef2c-4cf9-881b-fdc57687c17c-config-data\") pod \"placement-7ccfbc9bd6-76jl9\" (UID: \"2cec6ba1-ef2c-4cf9-881b-fdc57687c17c\") " pod="openstack/placement-7ccfbc9bd6-76jl9"
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.852995 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cec6ba1-ef2c-4cf9-881b-fdc57687c17c-combined-ca-bundle\") pod \"placement-7ccfbc9bd6-76jl9\" (UID: \"2cec6ba1-ef2c-4cf9-881b-fdc57687c17c\") " pod="openstack/placement-7ccfbc9bd6-76jl9"
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.856137 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-76847f447c-24chp"
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.864506 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.864793 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.864914 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-mrw22"
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.956313 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cec6ba1-ef2c-4cf9-881b-fdc57687c17c-config-data\") pod \"placement-7ccfbc9bd6-76jl9\" (UID: \"2cec6ba1-ef2c-4cf9-881b-fdc57687c17c\") " pod="openstack/placement-7ccfbc9bd6-76jl9"
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.956864 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/515247e9-4278-40fa-b971-adb499dc3ce0-combined-ca-bundle\") pod \"barbican-worker-76847f447c-24chp\" (UID: \"515247e9-4278-40fa-b971-adb499dc3ce0\") " pod="openstack/barbican-worker-76847f447c-24chp"
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.957041 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cec6ba1-ef2c-4cf9-881b-fdc57687c17c-combined-ca-bundle\") pod \"placement-7ccfbc9bd6-76jl9\" (UID: \"2cec6ba1-ef2c-4cf9-881b-fdc57687c17c\") " pod="openstack/placement-7ccfbc9bd6-76jl9"
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.961142 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/515247e9-4278-40fa-b971-adb499dc3ce0-config-data-custom\") pod \"barbican-worker-76847f447c-24chp\" (UID: \"515247e9-4278-40fa-b971-adb499dc3ce0\") " pod="openstack/barbican-worker-76847f447c-24chp"
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.965080 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cec6ba1-ef2c-4cf9-881b-fdc57687c17c-public-tls-certs\") pod \"placement-7ccfbc9bd6-76jl9\" (UID: \"2cec6ba1-ef2c-4cf9-881b-fdc57687c17c\") " pod="openstack/placement-7ccfbc9bd6-76jl9"
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.965277 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cec6ba1-ef2c-4cf9-881b-fdc57687c17c-scripts\") pod \"placement-7ccfbc9bd6-76jl9\" (UID: \"2cec6ba1-ef2c-4cf9-881b-fdc57687c17c\") " pod="openstack/placement-7ccfbc9bd6-76jl9"
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.965374 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cec6ba1-ef2c-4cf9-881b-fdc57687c17c-internal-tls-certs\") pod \"placement-7ccfbc9bd6-76jl9\" (UID: \"2cec6ba1-ef2c-4cf9-881b-fdc57687c17c\") " pod="openstack/placement-7ccfbc9bd6-76jl9"
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.965481 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/515247e9-4278-40fa-b971-adb499dc3ce0-config-data\") pod \"barbican-worker-76847f447c-24chp\" (UID: \"515247e9-4278-40fa-b971-adb499dc3ce0\") " pod="openstack/barbican-worker-76847f447c-24chp"
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.965611 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/515247e9-4278-40fa-b971-adb499dc3ce0-logs\") pod \"barbican-worker-76847f447c-24chp\" (UID: \"515247e9-4278-40fa-b971-adb499dc3ce0\") " pod="openstack/barbican-worker-76847f447c-24chp"
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.965756 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2cec6ba1-ef2c-4cf9-881b-fdc57687c17c-logs\") pod \"placement-7ccfbc9bd6-76jl9\" (UID: \"2cec6ba1-ef2c-4cf9-881b-fdc57687c17c\") " pod="openstack/placement-7ccfbc9bd6-76jl9"
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.965900 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5zw9\" (UniqueName: \"kubernetes.io/projected/2cec6ba1-ef2c-4cf9-881b-fdc57687c17c-kube-api-access-s5zw9\") pod \"placement-7ccfbc9bd6-76jl9\" (UID: \"2cec6ba1-ef2c-4cf9-881b-fdc57687c17c\") " pod="openstack/placement-7ccfbc9bd6-76jl9"
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.966063 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qk6r\" (UniqueName: \"kubernetes.io/projected/515247e9-4278-40fa-b971-adb499dc3ce0-kube-api-access-6qk6r\") pod \"barbican-worker-76847f447c-24chp\" (UID: \"515247e9-4278-40fa-b971-adb499dc3ce0\") " pod="openstack/barbican-worker-76847f447c-24chp"
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.974057 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2cec6ba1-ef2c-4cf9-881b-fdc57687c17c-logs\") pod \"placement-7ccfbc9bd6-76jl9\" (UID: \"2cec6ba1-ef2c-4cf9-881b-fdc57687c17c\") " pod="openstack/placement-7ccfbc9bd6-76jl9"
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.977645 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cec6ba1-ef2c-4cf9-881b-fdc57687c17c-config-data\") pod \"placement-7ccfbc9bd6-76jl9\" (UID: \"2cec6ba1-ef2c-4cf9-881b-fdc57687c17c\") " pod="openstack/placement-7ccfbc9bd6-76jl9"
Dec 09 09:06:58 crc kubenswrapper[4786]: I1209 09:06:58.978513 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cec6ba1-ef2c-4cf9-881b-fdc57687c17c-combined-ca-bundle\") pod \"placement-7ccfbc9bd6-76jl9\" (UID: \"2cec6ba1-ef2c-4cf9-881b-fdc57687c17c\") " pod="openstack/placement-7ccfbc9bd6-76jl9"
Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.011360 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cec6ba1-ef2c-4cf9-881b-fdc57687c17c-scripts\") pod \"placement-7ccfbc9bd6-76jl9\" (UID: \"2cec6ba1-ef2c-4cf9-881b-fdc57687c17c\") " pod="openstack/placement-7ccfbc9bd6-76jl9"
Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.012398 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cec6ba1-ef2c-4cf9-881b-fdc57687c17c-public-tls-certs\") pod \"placement-7ccfbc9bd6-76jl9\" (UID: \"2cec6ba1-ef2c-4cf9-881b-fdc57687c17c\") " pod="openstack/placement-7ccfbc9bd6-76jl9"
Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.021178 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cec6ba1-ef2c-4cf9-881b-fdc57687c17c-internal-tls-certs\") pod \"placement-7ccfbc9bd6-76jl9\" (UID: \"2cec6ba1-ef2c-4cf9-881b-fdc57687c17c\") " pod="openstack/placement-7ccfbc9bd6-76jl9"
Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.021292 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-fc48d579d-jv5m2"]
Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.023351 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-fc48d579d-jv5m2"
Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.029294 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5zw9\" (UniqueName: \"kubernetes.io/projected/2cec6ba1-ef2c-4cf9-881b-fdc57687c17c-kube-api-access-s5zw9\") pod \"placement-7ccfbc9bd6-76jl9\" (UID: \"2cec6ba1-ef2c-4cf9-881b-fdc57687c17c\") " pod="openstack/placement-7ccfbc9bd6-76jl9"
Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.038005 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.052844 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-76847f447c-24chp"]
Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.070060 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qk6r\" (UniqueName: \"kubernetes.io/projected/515247e9-4278-40fa-b971-adb499dc3ce0-kube-api-access-6qk6r\") pod \"barbican-worker-76847f447c-24chp\" (UID: \"515247e9-4278-40fa-b971-adb499dc3ce0\") " pod="openstack/barbican-worker-76847f447c-24chp"
Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.070196 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/515247e9-4278-40fa-b971-adb499dc3ce0-combined-ca-bundle\") pod \"barbican-worker-76847f447c-24chp\" (UID: \"515247e9-4278-40fa-b971-adb499dc3ce0\") " pod="openstack/barbican-worker-76847f447c-24chp"
Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.070296 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/515247e9-4278-40fa-b971-adb499dc3ce0-config-data-custom\") pod \"barbican-worker-76847f447c-24chp\" (UID: \"515247e9-4278-40fa-b971-adb499dc3ce0\") " pod="openstack/barbican-worker-76847f447c-24chp"
Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.070377 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/515247e9-4278-40fa-b971-adb499dc3ce0-config-data\") pod \"barbican-worker-76847f447c-24chp\" (UID: \"515247e9-4278-40fa-b971-adb499dc3ce0\") " pod="openstack/barbican-worker-76847f447c-24chp"
Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.070417 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/515247e9-4278-40fa-b971-adb499dc3ce0-logs\") pod \"barbican-worker-76847f447c-24chp\" (UID: \"515247e9-4278-40fa-b971-adb499dc3ce0\") " pod="openstack/barbican-worker-76847f447c-24chp"
Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.071014 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/515247e9-4278-40fa-b971-adb499dc3ce0-logs\") pod \"barbican-worker-76847f447c-24chp\" (UID: \"515247e9-4278-40fa-b971-adb499dc3ce0\") " pod="openstack/barbican-worker-76847f447c-24chp"
Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.082232 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/515247e9-4278-40fa-b971-adb499dc3ce0-config-data\") pod \"barbican-worker-76847f447c-24chp\" (UID: \"515247e9-4278-40fa-b971-adb499dc3ce0\") " pod="openstack/barbican-worker-76847f447c-24chp"
Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.084402 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/515247e9-4278-40fa-b971-adb499dc3ce0-combined-ca-bundle\") pod \"barbican-worker-76847f447c-24chp\" (UID: \"515247e9-4278-40fa-b971-adb499dc3ce0\") " pod="openstack/barbican-worker-76847f447c-24chp"
Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.085801 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/515247e9-4278-40fa-b971-adb499dc3ce0-config-data-custom\") pod \"barbican-worker-76847f447c-24chp\" (UID: \"515247e9-4278-40fa-b971-adb499dc3ce0\") " pod="openstack/barbican-worker-76847f447c-24chp"
Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.125691 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-fc48d579d-jv5m2"]
Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.130611 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qk6r\" (UniqueName: \"kubernetes.io/projected/515247e9-4278-40fa-b971-adb499dc3ce0-kube-api-access-6qk6r\") pod \"barbican-worker-76847f447c-24chp\" (UID: \"515247e9-4278-40fa-b971-adb499dc3ce0\") " pod="openstack/barbican-worker-76847f447c-24chp"
Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.172547 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r89pp\" (UniqueName: \"kubernetes.io/projected/59cf76dd-ccdd-4aff-b6ae-a86c532b922c-kube-api-access-r89pp\") pod \"barbican-keystone-listener-fc48d579d-jv5m2\" (UID: \"59cf76dd-ccdd-4aff-b6ae-a86c532b922c\") " pod="openstack/barbican-keystone-listener-fc48d579d-jv5m2"
Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.172928 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59cf76dd-ccdd-4aff-b6ae-a86c532b922c-combined-ca-bundle\") pod \"barbican-keystone-listener-fc48d579d-jv5m2\" (UID: \"59cf76dd-ccdd-4aff-b6ae-a86c532b922c\") " pod="openstack/barbican-keystone-listener-fc48d579d-jv5m2"
Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.173205 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59cf76dd-ccdd-4aff-b6ae-a86c532b922c-config-data\") pod \"barbican-keystone-listener-fc48d579d-jv5m2\" (UID: \"59cf76dd-ccdd-4aff-b6ae-a86c532b922c\") " pod="openstack/barbican-keystone-listener-fc48d579d-jv5m2"
Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.173342 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/59cf76dd-ccdd-4aff-b6ae-a86c532b922c-config-data-custom\") pod \"barbican-keystone-listener-fc48d579d-jv5m2\" (UID: \"59cf76dd-ccdd-4aff-b6ae-a86c532b922c\") " pod="openstack/barbican-keystone-listener-fc48d579d-jv5m2"
Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.173709 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59cf76dd-ccdd-4aff-b6ae-a86c532b922c-logs\") pod \"barbican-keystone-listener-fc48d579d-jv5m2\" (UID: \"59cf76dd-ccdd-4aff-b6ae-a86c532b922c\") " pod="openstack/barbican-keystone-listener-fc48d579d-jv5m2"
Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.206922 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-7ccfbc9bd6-76jl9"
Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.223635 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-76847f447c-24chp"
Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.233017 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58674975df-v9bj8"]
Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.240224 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58674975df-v9bj8"
Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.246783 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58674975df-v9bj8"]
Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.270988 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0"
Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.333686 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5785d674c4-7cbc6"]
Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.347329 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5785d674c4-7cbc6"
Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.351584 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5785d674c4-7cbc6"]
Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.368693 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data"
Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.381773 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/59cf76dd-ccdd-4aff-b6ae-a86c532b922c-config-data-custom\") pod \"barbican-keystone-listener-fc48d579d-jv5m2\" (UID: \"59cf76dd-ccdd-4aff-b6ae-a86c532b922c\") " pod="openstack/barbican-keystone-listener-fc48d579d-jv5m2"
Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.423856 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0"
Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.429555 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59cf76dd-ccdd-4aff-b6ae-a86c532b922c-logs\") pod \"barbican-keystone-listener-fc48d579d-jv5m2\" (UID: \"59cf76dd-ccdd-4aff-b6ae-a86c532b922c\") " pod="openstack/barbican-keystone-listener-fc48d579d-jv5m2"
Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.429747 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59cf76dd-ccdd-4aff-b6ae-a86c532b922c-combined-ca-bundle\") pod \"barbican-keystone-listener-fc48d579d-jv5m2\" (UID: \"59cf76dd-ccdd-4aff-b6ae-a86c532b922c\") " pod="openstack/barbican-keystone-listener-fc48d579d-jv5m2"
Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.429778 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r89pp\" (UniqueName: \"kubernetes.io/projected/59cf76dd-ccdd-4aff-b6ae-a86c532b922c-kube-api-access-r89pp\") pod \"barbican-keystone-listener-fc48d579d-jv5m2\" (UID: \"59cf76dd-ccdd-4aff-b6ae-a86c532b922c\") " pod="openstack/barbican-keystone-listener-fc48d579d-jv5m2"
Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.430049 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59cf76dd-ccdd-4aff-b6ae-a86c532b922c-config-data\") pod \"barbican-keystone-listener-fc48d579d-jv5m2\" (UID: \"59cf76dd-ccdd-4aff-b6ae-a86c532b922c\") " pod="openstack/barbican-keystone-listener-fc48d579d-jv5m2"
Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.432755 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/59cf76dd-ccdd-4aff-b6ae-a86c532b922c-logs\") pod \"barbican-keystone-listener-fc48d579d-jv5m2\" (UID: \"59cf76dd-ccdd-4aff-b6ae-a86c532b922c\") " pod="openstack/barbican-keystone-listener-fc48d579d-jv5m2"
Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.435389 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/59cf76dd-ccdd-4aff-b6ae-a86c532b922c-config-data-custom\") pod \"barbican-keystone-listener-fc48d579d-jv5m2\" (UID: \"59cf76dd-ccdd-4aff-b6ae-a86c532b922c\") " pod="openstack/barbican-keystone-listener-fc48d579d-jv5m2"
Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.439712 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/59cf76dd-ccdd-4aff-b6ae-a86c532b922c-config-data\") pod \"barbican-keystone-listener-fc48d579d-jv5m2\" (UID: \"59cf76dd-ccdd-4aff-b6ae-a86c532b922c\") " pod="openstack/barbican-keystone-listener-fc48d579d-jv5m2"
Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.444186 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59cf76dd-ccdd-4aff-b6ae-a86c532b922c-combined-ca-bundle\") pod \"barbican-keystone-listener-fc48d579d-jv5m2\" (UID: \"59cf76dd-ccdd-4aff-b6ae-a86c532b922c\") " pod="openstack/barbican-keystone-listener-fc48d579d-jv5m2"
Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.486857 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r89pp\" (UniqueName: \"kubernetes.io/projected/59cf76dd-ccdd-4aff-b6ae-a86c532b922c-kube-api-access-r89pp\") pod \"barbican-keystone-listener-fc48d579d-jv5m2\" (UID: \"59cf76dd-ccdd-4aff-b6ae-a86c532b922c\") " pod="openstack/barbican-keystone-listener-fc48d579d-jv5m2"
Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.500998 4786 generic.go:334] "Generic (PLEG): container finished" podID="f61efc97-8443-4381-85d3-7b6dd9e5b132" containerID="b4d1a330526ee60509c7ca997f933fd2d6009b68ce24d2d289e4d812c0db1525" exitCode=0
Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.501179 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f61efc97-8443-4381-85d3-7b6dd9e5b132","Type":"ContainerDied","Data":"b4d1a330526ee60509c7ca997f933fd2d6009b68ce24d2d289e4d812c0db1525"}
Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.537160 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3b7375b-20bf-4c11-bbff-ae554685503a-config-data\") pod \"barbican-api-5785d674c4-7cbc6\" (UID: \"b3b7375b-20bf-4c11-bbff-ae554685503a\") " pod="openstack/barbican-api-5785d674c4-7cbc6"
Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.537270 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3-ovsdbserver-sb\") pod \"dnsmasq-dns-58674975df-v9bj8\" (UID: \"e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3\") " pod="openstack/dnsmasq-dns-58674975df-v9bj8"
Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.537297 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3-config\") pod \"dnsmasq-dns-58674975df-v9bj8\" (UID: \"e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3\") " pod="openstack/dnsmasq-dns-58674975df-v9bj8"
Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.537334 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gngrz\" (UniqueName: \"kubernetes.io/projected/e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3-kube-api-access-gngrz\") pod \"dnsmasq-dns-58674975df-v9bj8\" (UID: \"e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3\") " pod="openstack/dnsmasq-dns-58674975df-v9bj8"
Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.537396 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfhd5\" (UniqueName:
\"kubernetes.io/projected/b3b7375b-20bf-4c11-bbff-ae554685503a-kube-api-access-zfhd5\") pod \"barbican-api-5785d674c4-7cbc6\" (UID: \"b3b7375b-20bf-4c11-bbff-ae554685503a\") " pod="openstack/barbican-api-5785d674c4-7cbc6" Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.537451 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3-ovsdbserver-nb\") pod \"dnsmasq-dns-58674975df-v9bj8\" (UID: \"e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3\") " pod="openstack/dnsmasq-dns-58674975df-v9bj8" Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.537488 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3b7375b-20bf-4c11-bbff-ae554685503a-logs\") pod \"barbican-api-5785d674c4-7cbc6\" (UID: \"b3b7375b-20bf-4c11-bbff-ae554685503a\") " pod="openstack/barbican-api-5785d674c4-7cbc6" Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.537528 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3-dns-svc\") pod \"dnsmasq-dns-58674975df-v9bj8\" (UID: \"e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3\") " pod="openstack/dnsmasq-dns-58674975df-v9bj8" Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.537573 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3-dns-swift-storage-0\") pod \"dnsmasq-dns-58674975df-v9bj8\" (UID: \"e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3\") " pod="openstack/dnsmasq-dns-58674975df-v9bj8" Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.537628 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b3b7375b-20bf-4c11-bbff-ae554685503a-config-data-custom\") pod \"barbican-api-5785d674c4-7cbc6\" (UID: \"b3b7375b-20bf-4c11-bbff-ae554685503a\") " pod="openstack/barbican-api-5785d674c4-7cbc6" Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.537670 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3b7375b-20bf-4c11-bbff-ae554685503a-combined-ca-bundle\") pod \"barbican-api-5785d674c4-7cbc6\" (UID: \"b3b7375b-20bf-4c11-bbff-ae554685503a\") " pod="openstack/barbican-api-5785d674c4-7cbc6" Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.577272 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-fc48d579d-jv5m2" Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.646312 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3b7375b-20bf-4c11-bbff-ae554685503a-config-data\") pod \"barbican-api-5785d674c4-7cbc6\" (UID: \"b3b7375b-20bf-4c11-bbff-ae554685503a\") " pod="openstack/barbican-api-5785d674c4-7cbc6" Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.646401 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3-ovsdbserver-sb\") pod \"dnsmasq-dns-58674975df-v9bj8\" (UID: \"e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3\") " pod="openstack/dnsmasq-dns-58674975df-v9bj8" Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.646448 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3-config\") pod \"dnsmasq-dns-58674975df-v9bj8\" (UID: \"e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3\") " 
pod="openstack/dnsmasq-dns-58674975df-v9bj8" Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.646484 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gngrz\" (UniqueName: \"kubernetes.io/projected/e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3-kube-api-access-gngrz\") pod \"dnsmasq-dns-58674975df-v9bj8\" (UID: \"e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3\") " pod="openstack/dnsmasq-dns-58674975df-v9bj8" Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.646536 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfhd5\" (UniqueName: \"kubernetes.io/projected/b3b7375b-20bf-4c11-bbff-ae554685503a-kube-api-access-zfhd5\") pod \"barbican-api-5785d674c4-7cbc6\" (UID: \"b3b7375b-20bf-4c11-bbff-ae554685503a\") " pod="openstack/barbican-api-5785d674c4-7cbc6" Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.646568 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3-ovsdbserver-nb\") pod \"dnsmasq-dns-58674975df-v9bj8\" (UID: \"e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3\") " pod="openstack/dnsmasq-dns-58674975df-v9bj8" Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.646604 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3b7375b-20bf-4c11-bbff-ae554685503a-logs\") pod \"barbican-api-5785d674c4-7cbc6\" (UID: \"b3b7375b-20bf-4c11-bbff-ae554685503a\") " pod="openstack/barbican-api-5785d674c4-7cbc6" Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.646632 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3-dns-svc\") pod \"dnsmasq-dns-58674975df-v9bj8\" (UID: \"e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3\") " pod="openstack/dnsmasq-dns-58674975df-v9bj8" Dec 09 
09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.646660 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3-dns-swift-storage-0\") pod \"dnsmasq-dns-58674975df-v9bj8\" (UID: \"e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3\") " pod="openstack/dnsmasq-dns-58674975df-v9bj8" Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.646705 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b3b7375b-20bf-4c11-bbff-ae554685503a-config-data-custom\") pod \"barbican-api-5785d674c4-7cbc6\" (UID: \"b3b7375b-20bf-4c11-bbff-ae554685503a\") " pod="openstack/barbican-api-5785d674c4-7cbc6" Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.646740 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3b7375b-20bf-4c11-bbff-ae554685503a-combined-ca-bundle\") pod \"barbican-api-5785d674c4-7cbc6\" (UID: \"b3b7375b-20bf-4c11-bbff-ae554685503a\") " pod="openstack/barbican-api-5785d674c4-7cbc6" Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.658155 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3-ovsdbserver-nb\") pod \"dnsmasq-dns-58674975df-v9bj8\" (UID: \"e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3\") " pod="openstack/dnsmasq-dns-58674975df-v9bj8" Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.670509 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3b7375b-20bf-4c11-bbff-ae554685503a-logs\") pod \"barbican-api-5785d674c4-7cbc6\" (UID: \"b3b7375b-20bf-4c11-bbff-ae554685503a\") " pod="openstack/barbican-api-5785d674c4-7cbc6" Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.671688 4786 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3-ovsdbserver-sb\") pod \"dnsmasq-dns-58674975df-v9bj8\" (UID: \"e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3\") " pod="openstack/dnsmasq-dns-58674975df-v9bj8" Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.680239 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3-dns-swift-storage-0\") pod \"dnsmasq-dns-58674975df-v9bj8\" (UID: \"e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3\") " pod="openstack/dnsmasq-dns-58674975df-v9bj8" Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.680482 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3-dns-svc\") pod \"dnsmasq-dns-58674975df-v9bj8\" (UID: \"e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3\") " pod="openstack/dnsmasq-dns-58674975df-v9bj8" Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.685291 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3-config\") pod \"dnsmasq-dns-58674975df-v9bj8\" (UID: \"e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3\") " pod="openstack/dnsmasq-dns-58674975df-v9bj8" Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.701517 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3b7375b-20bf-4c11-bbff-ae554685503a-config-data\") pod \"barbican-api-5785d674c4-7cbc6\" (UID: \"b3b7375b-20bf-4c11-bbff-ae554685503a\") " pod="openstack/barbican-api-5785d674c4-7cbc6" Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.716873 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/b3b7375b-20bf-4c11-bbff-ae554685503a-combined-ca-bundle\") pod \"barbican-api-5785d674c4-7cbc6\" (UID: \"b3b7375b-20bf-4c11-bbff-ae554685503a\") " pod="openstack/barbican-api-5785d674c4-7cbc6" Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.718467 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b3b7375b-20bf-4c11-bbff-ae554685503a-config-data-custom\") pod \"barbican-api-5785d674c4-7cbc6\" (UID: \"b3b7375b-20bf-4c11-bbff-ae554685503a\") " pod="openstack/barbican-api-5785d674c4-7cbc6" Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.737756 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gngrz\" (UniqueName: \"kubernetes.io/projected/e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3-kube-api-access-gngrz\") pod \"dnsmasq-dns-58674975df-v9bj8\" (UID: \"e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3\") " pod="openstack/dnsmasq-dns-58674975df-v9bj8" Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.741018 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfhd5\" (UniqueName: \"kubernetes.io/projected/b3b7375b-20bf-4c11-bbff-ae554685503a-kube-api-access-zfhd5\") pod \"barbican-api-5785d674c4-7cbc6\" (UID: \"b3b7375b-20bf-4c11-bbff-ae554685503a\") " pod="openstack/barbican-api-5785d674c4-7cbc6" Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.755267 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5785d674c4-7cbc6" Dec 09 09:06:59 crc kubenswrapper[4786]: I1209 09:06:59.947188 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58674975df-v9bj8" Dec 09 09:07:00 crc kubenswrapper[4786]: I1209 09:07:00.253525 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-7ccfbc9bd6-76jl9"] Dec 09 09:07:00 crc kubenswrapper[4786]: I1209 09:07:00.437121 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-76847f447c-24chp"] Dec 09 09:07:00 crc kubenswrapper[4786]: I1209 09:07:00.598069 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7ccfbc9bd6-76jl9" event={"ID":"2cec6ba1-ef2c-4cf9-881b-fdc57687c17c","Type":"ContainerStarted","Data":"454ecf60c38e5538e2ca2ba83dc138ee44feb7ce0fc8f1959d729b690f6abdc3"} Dec 09 09:07:00 crc kubenswrapper[4786]: I1209 09:07:00.618820 4786 generic.go:334] "Generic (PLEG): container finished" podID="f61efc97-8443-4381-85d3-7b6dd9e5b132" containerID="f505c423f86bd2d7755c8086eddc837313dd1a9d693a10fd132f29ab969f967a" exitCode=0 Dec 09 09:07:00 crc kubenswrapper[4786]: I1209 09:07:00.619574 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f61efc97-8443-4381-85d3-7b6dd9e5b132","Type":"ContainerDied","Data":"f505c423f86bd2d7755c8086eddc837313dd1a9d693a10fd132f29ab969f967a"} Dec 09 09:07:00 crc kubenswrapper[4786]: I1209 09:07:00.638754 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-9ddvk" event={"ID":"8c28b549-bfff-47f7-b262-c3203bd88cb1","Type":"ContainerStarted","Data":"d9a2cc35960764204adb4db837e952174a5c44528e376a8416466db2145ebe1c"} Dec 09 09:07:00 crc kubenswrapper[4786]: I1209 09:07:00.646365 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-76847f447c-24chp" event={"ID":"515247e9-4278-40fa-b971-adb499dc3ce0","Type":"ContainerStarted","Data":"2e122274387134e97fe45ad28f734faa9e5a8ad3502656aecc43ee53568b9e3f"} Dec 09 09:07:00 crc kubenswrapper[4786]: I1209 09:07:00.672804 4786 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/watcher-api-0" Dec 09 09:07:00 crc kubenswrapper[4786]: I1209 09:07:00.708753 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-9ddvk" podStartSLOduration=14.653515723 podStartE2EDuration="1m17.708719666s" podCreationTimestamp="2025-12-09 09:05:43 +0000 UTC" firstStartedPulling="2025-12-09 09:05:54.699651673 +0000 UTC m=+1320.583272899" lastFinishedPulling="2025-12-09 09:06:57.754855616 +0000 UTC m=+1383.638476842" observedRunningTime="2025-12-09 09:07:00.686585913 +0000 UTC m=+1386.570207139" watchObservedRunningTime="2025-12-09 09:07:00.708719666 +0000 UTC m=+1386.592340892" Dec 09 09:07:01 crc kubenswrapper[4786]: I1209 09:07:01.310546 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5785d674c4-7cbc6"] Dec 09 09:07:01 crc kubenswrapper[4786]: I1209 09:07:01.328796 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-fc48d579d-jv5m2"] Dec 09 09:07:01 crc kubenswrapper[4786]: W1209 09:07:01.414061 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3b7375b_20bf_4c11_bbff_ae554685503a.slice/crio-9ee333f2ffce2ff50fc51681f31a60e2d89a893e0e7a3b60ca1c38f8ad361dde WatchSource:0}: Error finding container 9ee333f2ffce2ff50fc51681f31a60e2d89a893e0e7a3b60ca1c38f8ad361dde: Status 404 returned error can't find the container with id 9ee333f2ffce2ff50fc51681f31a60e2d89a893e0e7a3b60ca1c38f8ad361dde Dec 09 09:07:01 crc kubenswrapper[4786]: I1209 09:07:01.629012 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58674975df-v9bj8"] Dec 09 09:07:01 crc kubenswrapper[4786]: I1209 09:07:01.690837 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7ccfbc9bd6-76jl9" 
event={"ID":"2cec6ba1-ef2c-4cf9-881b-fdc57687c17c","Type":"ContainerStarted","Data":"6b0dd9894559fa82d775f2eaec19ff7954b4c3778fd8c6c06cb3782f8b8ef12e"} Dec 09 09:07:01 crc kubenswrapper[4786]: I1209 09:07:01.748003 4786 generic.go:334] "Generic (PLEG): container finished" podID="f61efc97-8443-4381-85d3-7b6dd9e5b132" containerID="28ffefa0be25980cd96ffd22439421c64f1ce69523a06e10f2a56403350dbbf3" exitCode=0 Dec 09 09:07:01 crc kubenswrapper[4786]: I1209 09:07:01.748113 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f61efc97-8443-4381-85d3-7b6dd9e5b132","Type":"ContainerDied","Data":"28ffefa0be25980cd96ffd22439421c64f1ce69523a06e10f2a56403350dbbf3"} Dec 09 09:07:01 crc kubenswrapper[4786]: I1209 09:07:01.752767 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5785d674c4-7cbc6" event={"ID":"b3b7375b-20bf-4c11-bbff-ae554685503a","Type":"ContainerStarted","Data":"9ee333f2ffce2ff50fc51681f31a60e2d89a893e0e7a3b60ca1c38f8ad361dde"} Dec 09 09:07:01 crc kubenswrapper[4786]: I1209 09:07:01.757559 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-fc48d579d-jv5m2" event={"ID":"59cf76dd-ccdd-4aff-b6ae-a86c532b922c","Type":"ContainerStarted","Data":"96c1a9d14f01f7fc9876684dc09fbca3380595a915d0c2df666a33306a514905"} Dec 09 09:07:01 crc kubenswrapper[4786]: I1209 09:07:01.979606 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 09:07:02 crc kubenswrapper[4786]: I1209 09:07:02.016705 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f61efc97-8443-4381-85d3-7b6dd9e5b132-sg-core-conf-yaml\") pod \"f61efc97-8443-4381-85d3-7b6dd9e5b132\" (UID: \"f61efc97-8443-4381-85d3-7b6dd9e5b132\") " Dec 09 09:07:02 crc kubenswrapper[4786]: I1209 09:07:02.016803 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f61efc97-8443-4381-85d3-7b6dd9e5b132-config-data\") pod \"f61efc97-8443-4381-85d3-7b6dd9e5b132\" (UID: \"f61efc97-8443-4381-85d3-7b6dd9e5b132\") " Dec 09 09:07:02 crc kubenswrapper[4786]: I1209 09:07:02.017064 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f61efc97-8443-4381-85d3-7b6dd9e5b132-run-httpd\") pod \"f61efc97-8443-4381-85d3-7b6dd9e5b132\" (UID: \"f61efc97-8443-4381-85d3-7b6dd9e5b132\") " Dec 09 09:07:02 crc kubenswrapper[4786]: I1209 09:07:02.017246 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f61efc97-8443-4381-85d3-7b6dd9e5b132-scripts\") pod \"f61efc97-8443-4381-85d3-7b6dd9e5b132\" (UID: \"f61efc97-8443-4381-85d3-7b6dd9e5b132\") " Dec 09 09:07:02 crc kubenswrapper[4786]: I1209 09:07:02.017294 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f61efc97-8443-4381-85d3-7b6dd9e5b132-log-httpd\") pod \"f61efc97-8443-4381-85d3-7b6dd9e5b132\" (UID: \"f61efc97-8443-4381-85d3-7b6dd9e5b132\") " Dec 09 09:07:02 crc kubenswrapper[4786]: I1209 09:07:02.017325 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlbfj\" (UniqueName: 
\"kubernetes.io/projected/f61efc97-8443-4381-85d3-7b6dd9e5b132-kube-api-access-mlbfj\") pod \"f61efc97-8443-4381-85d3-7b6dd9e5b132\" (UID: \"f61efc97-8443-4381-85d3-7b6dd9e5b132\") " Dec 09 09:07:02 crc kubenswrapper[4786]: I1209 09:07:02.017381 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f61efc97-8443-4381-85d3-7b6dd9e5b132-combined-ca-bundle\") pod \"f61efc97-8443-4381-85d3-7b6dd9e5b132\" (UID: \"f61efc97-8443-4381-85d3-7b6dd9e5b132\") " Dec 09 09:07:02 crc kubenswrapper[4786]: I1209 09:07:02.018493 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f61efc97-8443-4381-85d3-7b6dd9e5b132-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f61efc97-8443-4381-85d3-7b6dd9e5b132" (UID: "f61efc97-8443-4381-85d3-7b6dd9e5b132"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:07:02 crc kubenswrapper[4786]: I1209 09:07:02.018674 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f61efc97-8443-4381-85d3-7b6dd9e5b132-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f61efc97-8443-4381-85d3-7b6dd9e5b132" (UID: "f61efc97-8443-4381-85d3-7b6dd9e5b132"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:07:02 crc kubenswrapper[4786]: I1209 09:07:02.019748 4786 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f61efc97-8443-4381-85d3-7b6dd9e5b132-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:02 crc kubenswrapper[4786]: I1209 09:07:02.019775 4786 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f61efc97-8443-4381-85d3-7b6dd9e5b132-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:02 crc kubenswrapper[4786]: I1209 09:07:02.031744 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f61efc97-8443-4381-85d3-7b6dd9e5b132-kube-api-access-mlbfj" (OuterVolumeSpecName: "kube-api-access-mlbfj") pod "f61efc97-8443-4381-85d3-7b6dd9e5b132" (UID: "f61efc97-8443-4381-85d3-7b6dd9e5b132"). InnerVolumeSpecName "kube-api-access-mlbfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:07:02 crc kubenswrapper[4786]: I1209 09:07:02.032822 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f61efc97-8443-4381-85d3-7b6dd9e5b132-scripts" (OuterVolumeSpecName: "scripts") pod "f61efc97-8443-4381-85d3-7b6dd9e5b132" (UID: "f61efc97-8443-4381-85d3-7b6dd9e5b132"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:07:02 crc kubenswrapper[4786]: I1209 09:07:02.037806 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f61efc97-8443-4381-85d3-7b6dd9e5b132-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f61efc97-8443-4381-85d3-7b6dd9e5b132" (UID: "f61efc97-8443-4381-85d3-7b6dd9e5b132"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:07:02 crc kubenswrapper[4786]: I1209 09:07:02.142399 4786 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f61efc97-8443-4381-85d3-7b6dd9e5b132-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:02 crc kubenswrapper[4786]: I1209 09:07:02.144923 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlbfj\" (UniqueName: \"kubernetes.io/projected/f61efc97-8443-4381-85d3-7b6dd9e5b132-kube-api-access-mlbfj\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:02 crc kubenswrapper[4786]: I1209 09:07:02.145019 4786 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f61efc97-8443-4381-85d3-7b6dd9e5b132-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:02 crc kubenswrapper[4786]: I1209 09:07:02.719679 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f61efc97-8443-4381-85d3-7b6dd9e5b132-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f61efc97-8443-4381-85d3-7b6dd9e5b132" (UID: "f61efc97-8443-4381-85d3-7b6dd9e5b132"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:07:02 crc kubenswrapper[4786]: I1209 09:07:02.761109 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f61efc97-8443-4381-85d3-7b6dd9e5b132-config-data" (OuterVolumeSpecName: "config-data") pod "f61efc97-8443-4381-85d3-7b6dd9e5b132" (UID: "f61efc97-8443-4381-85d3-7b6dd9e5b132"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:07:02 crc kubenswrapper[4786]: I1209 09:07:02.781792 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f61efc97-8443-4381-85d3-7b6dd9e5b132","Type":"ContainerDied","Data":"4aed4afefa764993ea70db03b2c400a45b1bda82bc035278f60d7644e864d334"} Dec 09 09:07:02 crc kubenswrapper[4786]: I1209 09:07:02.782116 4786 scope.go:117] "RemoveContainer" containerID="b4d1a330526ee60509c7ca997f933fd2d6009b68ce24d2d289e4d812c0db1525" Dec 09 09:07:02 crc kubenswrapper[4786]: I1209 09:07:02.782248 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 09:07:02 crc kubenswrapper[4786]: I1209 09:07:02.784785 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5785d674c4-7cbc6" event={"ID":"b3b7375b-20bf-4c11-bbff-ae554685503a","Type":"ContainerStarted","Data":"f12b028c7e5ee2390737bbd9ea80cc40a6d7ca231a0a4244df9f2f91c209887b"} Dec 09 09:07:02 crc kubenswrapper[4786]: I1209 09:07:02.796584 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-7ccfbc9bd6-76jl9" event={"ID":"2cec6ba1-ef2c-4cf9-881b-fdc57687c17c","Type":"ContainerStarted","Data":"42db88292095f34204c0ba3144dc785187679e88b7d4fbfd8359c36e0f0bbf20"} Dec 09 09:07:02 crc kubenswrapper[4786]: I1209 09:07:02.797222 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7ccfbc9bd6-76jl9" Dec 09 09:07:02 crc kubenswrapper[4786]: I1209 09:07:02.797552 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-7ccfbc9bd6-76jl9" Dec 09 09:07:02 crc kubenswrapper[4786]: I1209 09:07:02.807243 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f61efc97-8443-4381-85d3-7b6dd9e5b132-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:02 crc kubenswrapper[4786]: I1209 
09:07:02.815022 4786 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f61efc97-8443-4381-85d3-7b6dd9e5b132-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:02 crc kubenswrapper[4786]: I1209 09:07:02.822943 4786 generic.go:334] "Generic (PLEG): container finished" podID="e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3" containerID="c9fecc3d360b8ad5eb49e52a978cb66c6b9c79037db24235b504d3d913096330" exitCode=0 Dec 09 09:07:02 crc kubenswrapper[4786]: I1209 09:07:02.823027 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58674975df-v9bj8" event={"ID":"e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3","Type":"ContainerDied","Data":"c9fecc3d360b8ad5eb49e52a978cb66c6b9c79037db24235b504d3d913096330"} Dec 09 09:07:02 crc kubenswrapper[4786]: I1209 09:07:02.823071 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58674975df-v9bj8" event={"ID":"e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3","Type":"ContainerStarted","Data":"41e993ea5cfe69ac4a19e13a624b7bdadbd6b55190c27fe16d1701ea3e81c978"} Dec 09 09:07:02 crc kubenswrapper[4786]: I1209 09:07:02.894964 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-7ccfbc9bd6-76jl9" podStartSLOduration=4.894941176 podStartE2EDuration="4.894941176s" podCreationTimestamp="2025-12-09 09:06:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:07:02.847553829 +0000 UTC m=+1388.731175045" watchObservedRunningTime="2025-12-09 09:07:02.894941176 +0000 UTC m=+1388.778562402" Dec 09 09:07:02 crc kubenswrapper[4786]: I1209 09:07:02.952883 4786 scope.go:117] "RemoveContainer" containerID="28ffefa0be25980cd96ffd22439421c64f1ce69523a06e10f2a56403350dbbf3" Dec 09 09:07:03 crc kubenswrapper[4786]: I1209 09:07:03.041967 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/ceilometer-0"] Dec 09 09:07:03 crc kubenswrapper[4786]: I1209 09:07:03.121604 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 09 09:07:03 crc kubenswrapper[4786]: I1209 09:07:03.153709 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 09 09:07:03 crc kubenswrapper[4786]: E1209 09:07:03.154920 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f61efc97-8443-4381-85d3-7b6dd9e5b132" containerName="proxy-httpd" Dec 09 09:07:03 crc kubenswrapper[4786]: I1209 09:07:03.155258 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="f61efc97-8443-4381-85d3-7b6dd9e5b132" containerName="proxy-httpd" Dec 09 09:07:03 crc kubenswrapper[4786]: E1209 09:07:03.155359 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f61efc97-8443-4381-85d3-7b6dd9e5b132" containerName="ceilometer-central-agent" Dec 09 09:07:03 crc kubenswrapper[4786]: I1209 09:07:03.155522 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="f61efc97-8443-4381-85d3-7b6dd9e5b132" containerName="ceilometer-central-agent" Dec 09 09:07:03 crc kubenswrapper[4786]: E1209 09:07:03.155672 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f61efc97-8443-4381-85d3-7b6dd9e5b132" containerName="ceilometer-notification-agent" Dec 09 09:07:03 crc kubenswrapper[4786]: I1209 09:07:03.155766 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="f61efc97-8443-4381-85d3-7b6dd9e5b132" containerName="ceilometer-notification-agent" Dec 09 09:07:03 crc kubenswrapper[4786]: I1209 09:07:03.156173 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="f61efc97-8443-4381-85d3-7b6dd9e5b132" containerName="proxy-httpd" Dec 09 09:07:03 crc kubenswrapper[4786]: I1209 09:07:03.156291 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="f61efc97-8443-4381-85d3-7b6dd9e5b132" containerName="ceilometer-notification-agent" Dec 09 09:07:03 crc 
kubenswrapper[4786]: I1209 09:07:03.156380 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="f61efc97-8443-4381-85d3-7b6dd9e5b132" containerName="ceilometer-central-agent" Dec 09 09:07:03 crc kubenswrapper[4786]: I1209 09:07:03.167237 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 09:07:03 crc kubenswrapper[4786]: I1209 09:07:03.176230 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 09 09:07:03 crc kubenswrapper[4786]: I1209 09:07:03.176655 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 09 09:07:03 crc kubenswrapper[4786]: I1209 09:07:03.250570 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f61efc97-8443-4381-85d3-7b6dd9e5b132" path="/var/lib/kubelet/pods/f61efc97-8443-4381-85d3-7b6dd9e5b132/volumes" Dec 09 09:07:03 crc kubenswrapper[4786]: I1209 09:07:03.251693 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 09:07:03 crc kubenswrapper[4786]: I1209 09:07:03.267281 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed942767-e79e-40d4-ab0e-b47aff280e56-log-httpd\") pod \"ceilometer-0\" (UID: \"ed942767-e79e-40d4-ab0e-b47aff280e56\") " pod="openstack/ceilometer-0" Dec 09 09:07:03 crc kubenswrapper[4786]: I1209 09:07:03.267343 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed942767-e79e-40d4-ab0e-b47aff280e56-scripts\") pod \"ceilometer-0\" (UID: \"ed942767-e79e-40d4-ab0e-b47aff280e56\") " pod="openstack/ceilometer-0" Dec 09 09:07:03 crc kubenswrapper[4786]: I1209 09:07:03.267368 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-nqwgb\" (UniqueName: \"kubernetes.io/projected/ed942767-e79e-40d4-ab0e-b47aff280e56-kube-api-access-nqwgb\") pod \"ceilometer-0\" (UID: \"ed942767-e79e-40d4-ab0e-b47aff280e56\") " pod="openstack/ceilometer-0" Dec 09 09:07:03 crc kubenswrapper[4786]: I1209 09:07:03.267453 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed942767-e79e-40d4-ab0e-b47aff280e56-run-httpd\") pod \"ceilometer-0\" (UID: \"ed942767-e79e-40d4-ab0e-b47aff280e56\") " pod="openstack/ceilometer-0" Dec 09 09:07:03 crc kubenswrapper[4786]: I1209 09:07:03.267483 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ed942767-e79e-40d4-ab0e-b47aff280e56-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ed942767-e79e-40d4-ab0e-b47aff280e56\") " pod="openstack/ceilometer-0" Dec 09 09:07:03 crc kubenswrapper[4786]: I1209 09:07:03.267510 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed942767-e79e-40d4-ab0e-b47aff280e56-config-data\") pod \"ceilometer-0\" (UID: \"ed942767-e79e-40d4-ab0e-b47aff280e56\") " pod="openstack/ceilometer-0" Dec 09 09:07:03 crc kubenswrapper[4786]: I1209 09:07:03.267529 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed942767-e79e-40d4-ab0e-b47aff280e56-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ed942767-e79e-40d4-ab0e-b47aff280e56\") " pod="openstack/ceilometer-0" Dec 09 09:07:03 crc kubenswrapper[4786]: I1209 09:07:03.370135 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed942767-e79e-40d4-ab0e-b47aff280e56-config-data\") pod \"ceilometer-0\" (UID: 
\"ed942767-e79e-40d4-ab0e-b47aff280e56\") " pod="openstack/ceilometer-0" Dec 09 09:07:03 crc kubenswrapper[4786]: I1209 09:07:03.370224 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed942767-e79e-40d4-ab0e-b47aff280e56-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ed942767-e79e-40d4-ab0e-b47aff280e56\") " pod="openstack/ceilometer-0" Dec 09 09:07:03 crc kubenswrapper[4786]: I1209 09:07:03.370462 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed942767-e79e-40d4-ab0e-b47aff280e56-log-httpd\") pod \"ceilometer-0\" (UID: \"ed942767-e79e-40d4-ab0e-b47aff280e56\") " pod="openstack/ceilometer-0" Dec 09 09:07:03 crc kubenswrapper[4786]: I1209 09:07:03.370497 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed942767-e79e-40d4-ab0e-b47aff280e56-scripts\") pod \"ceilometer-0\" (UID: \"ed942767-e79e-40d4-ab0e-b47aff280e56\") " pod="openstack/ceilometer-0" Dec 09 09:07:03 crc kubenswrapper[4786]: I1209 09:07:03.370527 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqwgb\" (UniqueName: \"kubernetes.io/projected/ed942767-e79e-40d4-ab0e-b47aff280e56-kube-api-access-nqwgb\") pod \"ceilometer-0\" (UID: \"ed942767-e79e-40d4-ab0e-b47aff280e56\") " pod="openstack/ceilometer-0" Dec 09 09:07:03 crc kubenswrapper[4786]: I1209 09:07:03.370683 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed942767-e79e-40d4-ab0e-b47aff280e56-run-httpd\") pod \"ceilometer-0\" (UID: \"ed942767-e79e-40d4-ab0e-b47aff280e56\") " pod="openstack/ceilometer-0" Dec 09 09:07:03 crc kubenswrapper[4786]: I1209 09:07:03.370730 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ed942767-e79e-40d4-ab0e-b47aff280e56-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ed942767-e79e-40d4-ab0e-b47aff280e56\") " pod="openstack/ceilometer-0" Dec 09 09:07:03 crc kubenswrapper[4786]: I1209 09:07:03.372509 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed942767-e79e-40d4-ab0e-b47aff280e56-log-httpd\") pod \"ceilometer-0\" (UID: \"ed942767-e79e-40d4-ab0e-b47aff280e56\") " pod="openstack/ceilometer-0" Dec 09 09:07:03 crc kubenswrapper[4786]: I1209 09:07:03.374662 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed942767-e79e-40d4-ab0e-b47aff280e56-run-httpd\") pod \"ceilometer-0\" (UID: \"ed942767-e79e-40d4-ab0e-b47aff280e56\") " pod="openstack/ceilometer-0" Dec 09 09:07:03 crc kubenswrapper[4786]: I1209 09:07:03.387893 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed942767-e79e-40d4-ab0e-b47aff280e56-config-data\") pod \"ceilometer-0\" (UID: \"ed942767-e79e-40d4-ab0e-b47aff280e56\") " pod="openstack/ceilometer-0" Dec 09 09:07:03 crc kubenswrapper[4786]: I1209 09:07:03.388201 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed942767-e79e-40d4-ab0e-b47aff280e56-scripts\") pod \"ceilometer-0\" (UID: \"ed942767-e79e-40d4-ab0e-b47aff280e56\") " pod="openstack/ceilometer-0" Dec 09 09:07:03 crc kubenswrapper[4786]: I1209 09:07:03.389599 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed942767-e79e-40d4-ab0e-b47aff280e56-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ed942767-e79e-40d4-ab0e-b47aff280e56\") " pod="openstack/ceilometer-0" Dec 09 09:07:03 crc kubenswrapper[4786]: I1209 09:07:03.391088 4786 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ed942767-e79e-40d4-ab0e-b47aff280e56-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ed942767-e79e-40d4-ab0e-b47aff280e56\") " pod="openstack/ceilometer-0" Dec 09 09:07:03 crc kubenswrapper[4786]: I1209 09:07:03.405444 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqwgb\" (UniqueName: \"kubernetes.io/projected/ed942767-e79e-40d4-ab0e-b47aff280e56-kube-api-access-nqwgb\") pod \"ceilometer-0\" (UID: \"ed942767-e79e-40d4-ab0e-b47aff280e56\") " pod="openstack/ceilometer-0" Dec 09 09:07:03 crc kubenswrapper[4786]: I1209 09:07:03.546211 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 09:07:03 crc kubenswrapper[4786]: I1209 09:07:03.905187 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5785d674c4-7cbc6" event={"ID":"b3b7375b-20bf-4c11-bbff-ae554685503a","Type":"ContainerStarted","Data":"ea41e04cd26025258ce2a8b265b03454e9027bb6dc29ecefe0ac7779d810df52"} Dec 09 09:07:03 crc kubenswrapper[4786]: I1209 09:07:03.905895 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5785d674c4-7cbc6" Dec 09 09:07:03 crc kubenswrapper[4786]: I1209 09:07:03.906266 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5785d674c4-7cbc6" Dec 09 09:07:03 crc kubenswrapper[4786]: I1209 09:07:03.946730 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5785d674c4-7cbc6" podStartSLOduration=4.946701353 podStartE2EDuration="4.946701353s" podCreationTimestamp="2025-12-09 09:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:07:03.945802163 +0000 UTC m=+1389.829423409" 
watchObservedRunningTime="2025-12-09 09:07:03.946701353 +0000 UTC m=+1389.830322579" Dec 09 09:07:04 crc kubenswrapper[4786]: I1209 09:07:04.567945 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-69744ddc66-fp6bq"] Dec 09 09:07:04 crc kubenswrapper[4786]: I1209 09:07:04.572807 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-69744ddc66-fp6bq" Dec 09 09:07:04 crc kubenswrapper[4786]: I1209 09:07:04.577839 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Dec 09 09:07:04 crc kubenswrapper[4786]: I1209 09:07:04.578002 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Dec 09 09:07:04 crc kubenswrapper[4786]: I1209 09:07:04.583269 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-69744ddc66-fp6bq"] Dec 09 09:07:04 crc kubenswrapper[4786]: I1209 09:07:04.625966 4786 scope.go:117] "RemoveContainer" containerID="f505c423f86bd2d7755c8086eddc837313dd1a9d693a10fd132f29ab969f967a" Dec 09 09:07:04 crc kubenswrapper[4786]: I1209 09:07:04.707693 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be7cb0a0-e8ca-4bb0-8be6-831d56ec8a53-logs\") pod \"barbican-api-69744ddc66-fp6bq\" (UID: \"be7cb0a0-e8ca-4bb0-8be6-831d56ec8a53\") " pod="openstack/barbican-api-69744ddc66-fp6bq" Dec 09 09:07:04 crc kubenswrapper[4786]: I1209 09:07:04.708283 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/be7cb0a0-e8ca-4bb0-8be6-831d56ec8a53-internal-tls-certs\") pod \"barbican-api-69744ddc66-fp6bq\" (UID: \"be7cb0a0-e8ca-4bb0-8be6-831d56ec8a53\") " pod="openstack/barbican-api-69744ddc66-fp6bq" Dec 09 09:07:04 crc kubenswrapper[4786]: I1209 09:07:04.708446 4786 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/be7cb0a0-e8ca-4bb0-8be6-831d56ec8a53-public-tls-certs\") pod \"barbican-api-69744ddc66-fp6bq\" (UID: \"be7cb0a0-e8ca-4bb0-8be6-831d56ec8a53\") " pod="openstack/barbican-api-69744ddc66-fp6bq" Dec 09 09:07:04 crc kubenswrapper[4786]: I1209 09:07:04.708586 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be7cb0a0-e8ca-4bb0-8be6-831d56ec8a53-config-data\") pod \"barbican-api-69744ddc66-fp6bq\" (UID: \"be7cb0a0-e8ca-4bb0-8be6-831d56ec8a53\") " pod="openstack/barbican-api-69744ddc66-fp6bq" Dec 09 09:07:04 crc kubenswrapper[4786]: I1209 09:07:04.708697 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/be7cb0a0-e8ca-4bb0-8be6-831d56ec8a53-config-data-custom\") pod \"barbican-api-69744ddc66-fp6bq\" (UID: \"be7cb0a0-e8ca-4bb0-8be6-831d56ec8a53\") " pod="openstack/barbican-api-69744ddc66-fp6bq" Dec 09 09:07:04 crc kubenswrapper[4786]: I1209 09:07:04.708822 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be7cb0a0-e8ca-4bb0-8be6-831d56ec8a53-combined-ca-bundle\") pod \"barbican-api-69744ddc66-fp6bq\" (UID: \"be7cb0a0-e8ca-4bb0-8be6-831d56ec8a53\") " pod="openstack/barbican-api-69744ddc66-fp6bq" Dec 09 09:07:04 crc kubenswrapper[4786]: I1209 09:07:04.708924 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh55m\" (UniqueName: \"kubernetes.io/projected/be7cb0a0-e8ca-4bb0-8be6-831d56ec8a53-kube-api-access-wh55m\") pod \"barbican-api-69744ddc66-fp6bq\" (UID: \"be7cb0a0-e8ca-4bb0-8be6-831d56ec8a53\") " pod="openstack/barbican-api-69744ddc66-fp6bq" 
Dec 09 09:07:04 crc kubenswrapper[4786]: I1209 09:07:04.811358 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be7cb0a0-e8ca-4bb0-8be6-831d56ec8a53-combined-ca-bundle\") pod \"barbican-api-69744ddc66-fp6bq\" (UID: \"be7cb0a0-e8ca-4bb0-8be6-831d56ec8a53\") " pod="openstack/barbican-api-69744ddc66-fp6bq" Dec 09 09:07:04 crc kubenswrapper[4786]: I1209 09:07:04.811479 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wh55m\" (UniqueName: \"kubernetes.io/projected/be7cb0a0-e8ca-4bb0-8be6-831d56ec8a53-kube-api-access-wh55m\") pod \"barbican-api-69744ddc66-fp6bq\" (UID: \"be7cb0a0-e8ca-4bb0-8be6-831d56ec8a53\") " pod="openstack/barbican-api-69744ddc66-fp6bq" Dec 09 09:07:04 crc kubenswrapper[4786]: I1209 09:07:04.811564 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be7cb0a0-e8ca-4bb0-8be6-831d56ec8a53-logs\") pod \"barbican-api-69744ddc66-fp6bq\" (UID: \"be7cb0a0-e8ca-4bb0-8be6-831d56ec8a53\") " pod="openstack/barbican-api-69744ddc66-fp6bq" Dec 09 09:07:04 crc kubenswrapper[4786]: I1209 09:07:04.811661 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/be7cb0a0-e8ca-4bb0-8be6-831d56ec8a53-internal-tls-certs\") pod \"barbican-api-69744ddc66-fp6bq\" (UID: \"be7cb0a0-e8ca-4bb0-8be6-831d56ec8a53\") " pod="openstack/barbican-api-69744ddc66-fp6bq" Dec 09 09:07:04 crc kubenswrapper[4786]: I1209 09:07:04.811798 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/be7cb0a0-e8ca-4bb0-8be6-831d56ec8a53-public-tls-certs\") pod \"barbican-api-69744ddc66-fp6bq\" (UID: \"be7cb0a0-e8ca-4bb0-8be6-831d56ec8a53\") " pod="openstack/barbican-api-69744ddc66-fp6bq" Dec 09 09:07:04 crc 
kubenswrapper[4786]: I1209 09:07:04.811854 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be7cb0a0-e8ca-4bb0-8be6-831d56ec8a53-config-data\") pod \"barbican-api-69744ddc66-fp6bq\" (UID: \"be7cb0a0-e8ca-4bb0-8be6-831d56ec8a53\") " pod="openstack/barbican-api-69744ddc66-fp6bq" Dec 09 09:07:04 crc kubenswrapper[4786]: I1209 09:07:04.811884 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/be7cb0a0-e8ca-4bb0-8be6-831d56ec8a53-config-data-custom\") pod \"barbican-api-69744ddc66-fp6bq\" (UID: \"be7cb0a0-e8ca-4bb0-8be6-831d56ec8a53\") " pod="openstack/barbican-api-69744ddc66-fp6bq" Dec 09 09:07:04 crc kubenswrapper[4786]: I1209 09:07:04.812638 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/be7cb0a0-e8ca-4bb0-8be6-831d56ec8a53-logs\") pod \"barbican-api-69744ddc66-fp6bq\" (UID: \"be7cb0a0-e8ca-4bb0-8be6-831d56ec8a53\") " pod="openstack/barbican-api-69744ddc66-fp6bq" Dec 09 09:07:04 crc kubenswrapper[4786]: I1209 09:07:04.820202 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/be7cb0a0-e8ca-4bb0-8be6-831d56ec8a53-config-data-custom\") pod \"barbican-api-69744ddc66-fp6bq\" (UID: \"be7cb0a0-e8ca-4bb0-8be6-831d56ec8a53\") " pod="openstack/barbican-api-69744ddc66-fp6bq" Dec 09 09:07:04 crc kubenswrapper[4786]: I1209 09:07:04.820485 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be7cb0a0-e8ca-4bb0-8be6-831d56ec8a53-combined-ca-bundle\") pod \"barbican-api-69744ddc66-fp6bq\" (UID: \"be7cb0a0-e8ca-4bb0-8be6-831d56ec8a53\") " pod="openstack/barbican-api-69744ddc66-fp6bq" Dec 09 09:07:04 crc kubenswrapper[4786]: I1209 09:07:04.821073 4786 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/be7cb0a0-e8ca-4bb0-8be6-831d56ec8a53-public-tls-certs\") pod \"barbican-api-69744ddc66-fp6bq\" (UID: \"be7cb0a0-e8ca-4bb0-8be6-831d56ec8a53\") " pod="openstack/barbican-api-69744ddc66-fp6bq" Dec 09 09:07:04 crc kubenswrapper[4786]: I1209 09:07:04.822559 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/be7cb0a0-e8ca-4bb0-8be6-831d56ec8a53-internal-tls-certs\") pod \"barbican-api-69744ddc66-fp6bq\" (UID: \"be7cb0a0-e8ca-4bb0-8be6-831d56ec8a53\") " pod="openstack/barbican-api-69744ddc66-fp6bq" Dec 09 09:07:04 crc kubenswrapper[4786]: I1209 09:07:04.834215 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/be7cb0a0-e8ca-4bb0-8be6-831d56ec8a53-config-data\") pod \"barbican-api-69744ddc66-fp6bq\" (UID: \"be7cb0a0-e8ca-4bb0-8be6-831d56ec8a53\") " pod="openstack/barbican-api-69744ddc66-fp6bq" Dec 09 09:07:04 crc kubenswrapper[4786]: I1209 09:07:04.840301 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wh55m\" (UniqueName: \"kubernetes.io/projected/be7cb0a0-e8ca-4bb0-8be6-831d56ec8a53-kube-api-access-wh55m\") pod \"barbican-api-69744ddc66-fp6bq\" (UID: \"be7cb0a0-e8ca-4bb0-8be6-831d56ec8a53\") " pod="openstack/barbican-api-69744ddc66-fp6bq" Dec 09 09:07:04 crc kubenswrapper[4786]: I1209 09:07:04.916762 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-69744ddc66-fp6bq" Dec 09 09:07:06 crc kubenswrapper[4786]: I1209 09:07:06.153708 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Dec 09 09:07:06 crc kubenswrapper[4786]: I1209 09:07:06.181447 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Dec 09 09:07:06 crc kubenswrapper[4786]: I1209 09:07:06.953271 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Dec 09 09:07:07 crc kubenswrapper[4786]: I1209 09:07:07.021319 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Dec 09 09:07:07 crc kubenswrapper[4786]: W1209 09:07:07.322100 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poded942767_e79e_40d4_ab0e_b47aff280e56.slice/crio-56cb03be99c6153d2e0d95d226077b73796cc7cdc5d07472ed69123e94a10c13 WatchSource:0}: Error finding container 56cb03be99c6153d2e0d95d226077b73796cc7cdc5d07472ed69123e94a10c13: Status 404 returned error can't find the container with id 56cb03be99c6153d2e0d95d226077b73796cc7cdc5d07472ed69123e94a10c13 Dec 09 09:07:07 crc kubenswrapper[4786]: I1209 09:07:07.327553 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 09:07:07 crc kubenswrapper[4786]: W1209 09:07:07.418490 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbe7cb0a0_e8ca_4bb0_8be6_831d56ec8a53.slice/crio-fd4d5d1f667fa05a01d6b0c7037018266d655cb975c64f20f33b8262046bcf47 WatchSource:0}: Error finding container fd4d5d1f667fa05a01d6b0c7037018266d655cb975c64f20f33b8262046bcf47: Status 404 returned error can't find the container with id 
fd4d5d1f667fa05a01d6b0c7037018266d655cb975c64f20f33b8262046bcf47 Dec 09 09:07:07 crc kubenswrapper[4786]: I1209 09:07:07.418917 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-69744ddc66-fp6bq"] Dec 09 09:07:07 crc kubenswrapper[4786]: I1209 09:07:07.924063 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lkwnh" Dec 09 09:07:07 crc kubenswrapper[4786]: I1209 09:07:07.924536 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lkwnh" Dec 09 09:07:07 crc kubenswrapper[4786]: I1209 09:07:07.974275 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-69744ddc66-fp6bq" event={"ID":"be7cb0a0-e8ca-4bb0-8be6-831d56ec8a53","Type":"ContainerStarted","Data":"fd4d5d1f667fa05a01d6b0c7037018266d655cb975c64f20f33b8262046bcf47"} Dec 09 09:07:07 crc kubenswrapper[4786]: I1209 09:07:07.977058 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed942767-e79e-40d4-ab0e-b47aff280e56","Type":"ContainerStarted","Data":"56cb03be99c6153d2e0d95d226077b73796cc7cdc5d07472ed69123e94a10c13"} Dec 09 09:07:08 crc kubenswrapper[4786]: I1209 09:07:08.973183 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lkwnh" podUID="656e7256-97a8-4036-b7b5-62c66bf06129" containerName="registry-server" probeResult="failure" output=< Dec 09 09:07:08 crc kubenswrapper[4786]: timeout: failed to connect service ":50051" within 1s Dec 09 09:07:08 crc kubenswrapper[4786]: > Dec 09 09:07:08 crc kubenswrapper[4786]: I1209 09:07:08.988585 4786 generic.go:334] "Generic (PLEG): container finished" podID="8d930edc-ed97-418e-a47a-60f38b734a50" containerID="a560b8ce4c6949793bd95b3c67bc632fa0c176608dc24c790c22d5ee77043340" exitCode=1 Dec 09 09:07:08 crc kubenswrapper[4786]: I1209 09:07:08.988645 4786 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"8d930edc-ed97-418e-a47a-60f38b734a50","Type":"ContainerDied","Data":"a560b8ce4c6949793bd95b3c67bc632fa0c176608dc24c790c22d5ee77043340"} Dec 09 09:07:08 crc kubenswrapper[4786]: I1209 09:07:08.988680 4786 scope.go:117] "RemoveContainer" containerID="445493b9101c88a21404c9cea095363a6c1996992f5a837e0ec6f3c4561c0a24" Dec 09 09:07:08 crc kubenswrapper[4786]: I1209 09:07:08.989369 4786 scope.go:117] "RemoveContainer" containerID="a560b8ce4c6949793bd95b3c67bc632fa0c176608dc24c790c22d5ee77043340" Dec 09 09:07:08 crc kubenswrapper[4786]: E1209 09:07:08.989758 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(8d930edc-ed97-418e-a47a-60f38b734a50)\"" pod="openstack/watcher-decision-engine-0" podUID="8d930edc-ed97-418e-a47a-60f38b734a50" Dec 09 09:07:10 crc kubenswrapper[4786]: I1209 09:07:10.087342 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58674975df-v9bj8" event={"ID":"e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3","Type":"ContainerStarted","Data":"7b6e5a18d6b802ca8e30f1648c5887a6844c79e83461fcce814415a98636eb66"} Dec 09 09:07:10 crc kubenswrapper[4786]: I1209 09:07:10.088021 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58674975df-v9bj8" Dec 09 09:07:10 crc kubenswrapper[4786]: I1209 09:07:10.106965 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-fc48d579d-jv5m2" event={"ID":"59cf76dd-ccdd-4aff-b6ae-a86c532b922c","Type":"ContainerStarted","Data":"b99722c6633869f10dd61e1d780fb65a928c66dd7496201aafd4e4f0e34e9d79"} Dec 09 09:07:10 crc kubenswrapper[4786]: I1209 09:07:10.256188 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-58674975df-v9bj8" podStartSLOduration=12.256160483 podStartE2EDuration="12.256160483s" podCreationTimestamp="2025-12-09 09:06:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:07:10.254991327 +0000 UTC m=+1396.138612543" watchObservedRunningTime="2025-12-09 09:07:10.256160483 +0000 UTC m=+1396.139781709" Dec 09 09:07:10 crc kubenswrapper[4786]: I1209 09:07:10.258986 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-76847f447c-24chp" event={"ID":"515247e9-4278-40fa-b971-adb499dc3ce0","Type":"ContainerStarted","Data":"4937a1b49b4870448ed618ac4cb1814287c31288ba6d5c6fb71c02a0b83dc8ea"} Dec 09 09:07:10 crc kubenswrapper[4786]: I1209 09:07:10.268335 4786 scope.go:117] "RemoveContainer" containerID="a560b8ce4c6949793bd95b3c67bc632fa0c176608dc24c790c22d5ee77043340" Dec 09 09:07:10 crc kubenswrapper[4786]: E1209 09:07:10.268713 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(8d930edc-ed97-418e-a47a-60f38b734a50)\"" pod="openstack/watcher-decision-engine-0" podUID="8d930edc-ed97-418e-a47a-60f38b734a50" Dec 09 09:07:10 crc kubenswrapper[4786]: I1209 09:07:10.277120 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-69744ddc66-fp6bq" event={"ID":"be7cb0a0-e8ca-4bb0-8be6-831d56ec8a53","Type":"ContainerStarted","Data":"8ab07c7193b46427bd5dd625983a28b59bb148adb7036dfa4f4c28d35a61daa0"} Dec 09 09:07:11 crc kubenswrapper[4786]: I1209 09:07:11.295502 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-69744ddc66-fp6bq" 
event={"ID":"be7cb0a0-e8ca-4bb0-8be6-831d56ec8a53","Type":"ContainerStarted","Data":"e0b9c7898164b7cfb7d2b599f5693de4948437694a3e5e89b7091a637f751d1e"} Dec 09 09:07:11 crc kubenswrapper[4786]: I1209 09:07:11.296322 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-69744ddc66-fp6bq" Dec 09 09:07:11 crc kubenswrapper[4786]: I1209 09:07:11.296354 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-69744ddc66-fp6bq" Dec 09 09:07:11 crc kubenswrapper[4786]: I1209 09:07:11.299772 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-fc48d579d-jv5m2" event={"ID":"59cf76dd-ccdd-4aff-b6ae-a86c532b922c","Type":"ContainerStarted","Data":"dd3a666aa46896dadab67dd0ecbde3a44a020dad46804a4c906c189589ddbe11"} Dec 09 09:07:11 crc kubenswrapper[4786]: I1209 09:07:11.308466 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-76847f447c-24chp" event={"ID":"515247e9-4278-40fa-b971-adb499dc3ce0","Type":"ContainerStarted","Data":"cdfb87a6a4c82065d3cc0abd1cdfd8ad5c515aaf5adbfc147a9c7daa76d12f91"} Dec 09 09:07:11 crc kubenswrapper[4786]: I1209 09:07:11.313486 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed942767-e79e-40d4-ab0e-b47aff280e56","Type":"ContainerStarted","Data":"f9b3db817f9a6f0a23a17815b42d0b650b8fb72c713dfcd4b59888f0fb7c08c7"} Dec 09 09:07:11 crc kubenswrapper[4786]: I1209 09:07:11.313522 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed942767-e79e-40d4-ab0e-b47aff280e56","Type":"ContainerStarted","Data":"6e9b40d63696eb904a5ceffef24023ccf24c79b7731741fcb00d6eab5d09724d"} Dec 09 09:07:11 crc kubenswrapper[4786]: I1209 09:07:11.336782 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-69744ddc66-fp6bq" podStartSLOduration=7.336746164 
podStartE2EDuration="7.336746164s" podCreationTimestamp="2025-12-09 09:07:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:07:11.324150453 +0000 UTC m=+1397.207771679" watchObservedRunningTime="2025-12-09 09:07:11.336746164 +0000 UTC m=+1397.220367390" Dec 09 09:07:11 crc kubenswrapper[4786]: I1209 09:07:11.356232 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-76847f447c-24chp" podStartSLOduration=7.373764612 podStartE2EDuration="13.356207648s" podCreationTimestamp="2025-12-09 09:06:58 +0000 UTC" firstStartedPulling="2025-12-09 09:07:00.598672902 +0000 UTC m=+1386.482294128" lastFinishedPulling="2025-12-09 09:07:06.581115938 +0000 UTC m=+1392.464737164" observedRunningTime="2025-12-09 09:07:11.349746393 +0000 UTC m=+1397.233367619" watchObservedRunningTime="2025-12-09 09:07:11.356207648 +0000 UTC m=+1397.239828874" Dec 09 09:07:11 crc kubenswrapper[4786]: I1209 09:07:11.376293 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-fc48d579d-jv5m2" podStartSLOduration=8.254256187 podStartE2EDuration="13.376270885s" podCreationTimestamp="2025-12-09 09:06:58 +0000 UTC" firstStartedPulling="2025-12-09 09:07:01.44962448 +0000 UTC m=+1387.333245706" lastFinishedPulling="2025-12-09 09:07:06.571639178 +0000 UTC m=+1392.455260404" observedRunningTime="2025-12-09 09:07:11.370625219 +0000 UTC m=+1397.254246455" watchObservedRunningTime="2025-12-09 09:07:11.376270885 +0000 UTC m=+1397.259892111" Dec 09 09:07:12 crc kubenswrapper[4786]: I1209 09:07:12.327997 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed942767-e79e-40d4-ab0e-b47aff280e56","Type":"ContainerStarted","Data":"b700ef67f4e393732db218ca53aeec390ac5e4b939658ca23c18f5e0c0f5477b"} Dec 09 09:07:13 crc kubenswrapper[4786]: I1209 09:07:13.614689 4786 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5785d674c4-7cbc6" Dec 09 09:07:13 crc kubenswrapper[4786]: I1209 09:07:13.655504 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5785d674c4-7cbc6" Dec 09 09:07:13 crc kubenswrapper[4786]: I1209 09:07:13.927507 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-bc97f477f-xr7xf" Dec 09 09:07:14 crc kubenswrapper[4786]: I1209 09:07:14.430267 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed942767-e79e-40d4-ab0e-b47aff280e56","Type":"ContainerStarted","Data":"df8141895c8fa8895c7c7b8f42c713557a3595cc3050fd8795442d51ab285ed7"} Dec 09 09:07:14 crc kubenswrapper[4786]: I1209 09:07:14.430389 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 09 09:07:14 crc kubenswrapper[4786]: I1209 09:07:14.458973 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=6.6936695969999995 podStartE2EDuration="12.458946341s" podCreationTimestamp="2025-12-09 09:07:02 +0000 UTC" firstStartedPulling="2025-12-09 09:07:07.324335884 +0000 UTC m=+1393.207957100" lastFinishedPulling="2025-12-09 09:07:13.089612618 +0000 UTC m=+1398.973233844" observedRunningTime="2025-12-09 09:07:14.45170931 +0000 UTC m=+1400.335330536" watchObservedRunningTime="2025-12-09 09:07:14.458946341 +0000 UTC m=+1400.342567577" Dec 09 09:07:14 crc kubenswrapper[4786]: I1209 09:07:14.980253 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58674975df-v9bj8" Dec 09 09:07:15 crc kubenswrapper[4786]: I1209 09:07:15.089644 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5db5d4bd9f-bjbhg"] Dec 09 09:07:15 crc kubenswrapper[4786]: I1209 09:07:15.092233 4786 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/dnsmasq-dns-5db5d4bd9f-bjbhg" podUID="7a3dff40-beee-4963-881e-647d277fcd7d" containerName="dnsmasq-dns" containerID="cri-o://f1cc373c6c1d34668a86285c4d2ae905bc8c340ba40b4b71ec2c37d51916242d" gracePeriod=10 Dec 09 09:07:15 crc kubenswrapper[4786]: I1209 09:07:15.457810 4786 generic.go:334] "Generic (PLEG): container finished" podID="7a3dff40-beee-4963-881e-647d277fcd7d" containerID="f1cc373c6c1d34668a86285c4d2ae905bc8c340ba40b4b71ec2c37d51916242d" exitCode=0 Dec 09 09:07:15 crc kubenswrapper[4786]: I1209 09:07:15.457962 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5db5d4bd9f-bjbhg" event={"ID":"7a3dff40-beee-4963-881e-647d277fcd7d","Type":"ContainerDied","Data":"f1cc373c6c1d34668a86285c4d2ae905bc8c340ba40b4b71ec2c37d51916242d"} Dec 09 09:07:15 crc kubenswrapper[4786]: I1209 09:07:15.675201 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5db5d4bd9f-bjbhg" Dec 09 09:07:15 crc kubenswrapper[4786]: I1209 09:07:15.705352 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7a3dff40-beee-4963-881e-647d277fcd7d-dns-svc\") pod \"7a3dff40-beee-4963-881e-647d277fcd7d\" (UID: \"7a3dff40-beee-4963-881e-647d277fcd7d\") " Dec 09 09:07:15 crc kubenswrapper[4786]: I1209 09:07:15.705534 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c57gz\" (UniqueName: \"kubernetes.io/projected/7a3dff40-beee-4963-881e-647d277fcd7d-kube-api-access-c57gz\") pod \"7a3dff40-beee-4963-881e-647d277fcd7d\" (UID: \"7a3dff40-beee-4963-881e-647d277fcd7d\") " Dec 09 09:07:15 crc kubenswrapper[4786]: I1209 09:07:15.705579 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7a3dff40-beee-4963-881e-647d277fcd7d-ovsdbserver-sb\") pod 
\"7a3dff40-beee-4963-881e-647d277fcd7d\" (UID: \"7a3dff40-beee-4963-881e-647d277fcd7d\") " Dec 09 09:07:15 crc kubenswrapper[4786]: I1209 09:07:15.705632 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7a3dff40-beee-4963-881e-647d277fcd7d-ovsdbserver-nb\") pod \"7a3dff40-beee-4963-881e-647d277fcd7d\" (UID: \"7a3dff40-beee-4963-881e-647d277fcd7d\") " Dec 09 09:07:15 crc kubenswrapper[4786]: I1209 09:07:15.705694 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7a3dff40-beee-4963-881e-647d277fcd7d-dns-swift-storage-0\") pod \"7a3dff40-beee-4963-881e-647d277fcd7d\" (UID: \"7a3dff40-beee-4963-881e-647d277fcd7d\") " Dec 09 09:07:15 crc kubenswrapper[4786]: I1209 09:07:15.705730 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a3dff40-beee-4963-881e-647d277fcd7d-config\") pod \"7a3dff40-beee-4963-881e-647d277fcd7d\" (UID: \"7a3dff40-beee-4963-881e-647d277fcd7d\") " Dec 09 09:07:15 crc kubenswrapper[4786]: I1209 09:07:15.749607 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a3dff40-beee-4963-881e-647d277fcd7d-kube-api-access-c57gz" (OuterVolumeSpecName: "kube-api-access-c57gz") pod "7a3dff40-beee-4963-881e-647d277fcd7d" (UID: "7a3dff40-beee-4963-881e-647d277fcd7d"). InnerVolumeSpecName "kube-api-access-c57gz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:07:15 crc kubenswrapper[4786]: I1209 09:07:15.779059 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a3dff40-beee-4963-881e-647d277fcd7d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7a3dff40-beee-4963-881e-647d277fcd7d" (UID: "7a3dff40-beee-4963-881e-647d277fcd7d"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:07:15 crc kubenswrapper[4786]: I1209 09:07:15.805815 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a3dff40-beee-4963-881e-647d277fcd7d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7a3dff40-beee-4963-881e-647d277fcd7d" (UID: "7a3dff40-beee-4963-881e-647d277fcd7d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:07:15 crc kubenswrapper[4786]: I1209 09:07:15.809893 4786 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7a3dff40-beee-4963-881e-647d277fcd7d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:15 crc kubenswrapper[4786]: I1209 09:07:15.809922 4786 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7a3dff40-beee-4963-881e-647d277fcd7d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:15 crc kubenswrapper[4786]: I1209 09:07:15.809933 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c57gz\" (UniqueName: \"kubernetes.io/projected/7a3dff40-beee-4963-881e-647d277fcd7d-kube-api-access-c57gz\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:15 crc kubenswrapper[4786]: I1209 09:07:15.810958 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a3dff40-beee-4963-881e-647d277fcd7d-config" (OuterVolumeSpecName: "config") pod "7a3dff40-beee-4963-881e-647d277fcd7d" (UID: "7a3dff40-beee-4963-881e-647d277fcd7d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:07:15 crc kubenswrapper[4786]: I1209 09:07:15.817297 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a3dff40-beee-4963-881e-647d277fcd7d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7a3dff40-beee-4963-881e-647d277fcd7d" (UID: "7a3dff40-beee-4963-881e-647d277fcd7d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:07:15 crc kubenswrapper[4786]: I1209 09:07:15.828722 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a3dff40-beee-4963-881e-647d277fcd7d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7a3dff40-beee-4963-881e-647d277fcd7d" (UID: "7a3dff40-beee-4963-881e-647d277fcd7d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:07:15 crc kubenswrapper[4786]: I1209 09:07:15.910883 4786 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7a3dff40-beee-4963-881e-647d277fcd7d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:15 crc kubenswrapper[4786]: I1209 09:07:15.911165 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7a3dff40-beee-4963-881e-647d277fcd7d-config\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:15 crc kubenswrapper[4786]: I1209 09:07:15.911294 4786 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7a3dff40-beee-4963-881e-647d277fcd7d-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:16 crc kubenswrapper[4786]: I1209 09:07:16.180378 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Dec 09 09:07:16 crc kubenswrapper[4786]: I1209 09:07:16.488504 4786 scope.go:117] "RemoveContainer" 
containerID="a560b8ce4c6949793bd95b3c67bc632fa0c176608dc24c790c22d5ee77043340" Dec 09 09:07:16 crc kubenswrapper[4786]: E1209 09:07:16.488875 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(8d930edc-ed97-418e-a47a-60f38b734a50)\"" pod="openstack/watcher-decision-engine-0" podUID="8d930edc-ed97-418e-a47a-60f38b734a50" Dec 09 09:07:16 crc kubenswrapper[4786]: I1209 09:07:16.541685 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5db5d4bd9f-bjbhg" event={"ID":"7a3dff40-beee-4963-881e-647d277fcd7d","Type":"ContainerDied","Data":"84fdab36e00dcdeada6688183983a930ea4077c71fd0e2e040db60225ededc99"} Dec 09 09:07:16 crc kubenswrapper[4786]: I1209 09:07:16.542228 4786 scope.go:117] "RemoveContainer" containerID="f1cc373c6c1d34668a86285c4d2ae905bc8c340ba40b4b71ec2c37d51916242d" Dec 09 09:07:16 crc kubenswrapper[4786]: I1209 09:07:16.542576 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5db5d4bd9f-bjbhg" Dec 09 09:07:16 crc kubenswrapper[4786]: I1209 09:07:16.634758 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5db5d4bd9f-bjbhg"] Dec 09 09:07:16 crc kubenswrapper[4786]: I1209 09:07:16.645860 4786 scope.go:117] "RemoveContainer" containerID="74b34b47961f75c9ec53cd229e881c0c45f9755755232504c7033bdafeb11f3e" Dec 09 09:07:16 crc kubenswrapper[4786]: I1209 09:07:16.667879 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5db5d4bd9f-bjbhg"] Dec 09 09:07:16 crc kubenswrapper[4786]: E1209 09:07:16.849454 4786 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a3dff40_beee_4963_881e_647d277fcd7d.slice/crio-84fdab36e00dcdeada6688183983a930ea4077c71fd0e2e040db60225ededc99\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a3dff40_beee_4963_881e_647d277fcd7d.slice\": RecentStats: unable to find data in memory cache]" Dec 09 09:07:17 crc kubenswrapper[4786]: I1209 09:07:17.077528 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Dec 09 09:07:17 crc kubenswrapper[4786]: E1209 09:07:17.077965 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a3dff40-beee-4963-881e-647d277fcd7d" containerName="dnsmasq-dns" Dec 09 09:07:17 crc kubenswrapper[4786]: I1209 09:07:17.077983 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a3dff40-beee-4963-881e-647d277fcd7d" containerName="dnsmasq-dns" Dec 09 09:07:17 crc kubenswrapper[4786]: E1209 09:07:17.078009 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a3dff40-beee-4963-881e-647d277fcd7d" containerName="init" Dec 09 09:07:17 crc kubenswrapper[4786]: I1209 09:07:17.078017 4786 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7a3dff40-beee-4963-881e-647d277fcd7d" containerName="init" Dec 09 09:07:17 crc kubenswrapper[4786]: I1209 09:07:17.078217 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a3dff40-beee-4963-881e-647d277fcd7d" containerName="dnsmasq-dns" Dec 09 09:07:17 crc kubenswrapper[4786]: I1209 09:07:17.078905 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 09 09:07:17 crc kubenswrapper[4786]: I1209 09:07:17.082137 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Dec 09 09:07:17 crc kubenswrapper[4786]: I1209 09:07:17.082296 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Dec 09 09:07:17 crc kubenswrapper[4786]: I1209 09:07:17.097012 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 09 09:07:17 crc kubenswrapper[4786]: I1209 09:07:17.100104 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-6m2px" Dec 09 09:07:17 crc kubenswrapper[4786]: I1209 09:07:17.194770 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nq76t\" (UniqueName: \"kubernetes.io/projected/73c988f1-a56c-4a07-8e72-0e928796bbd4-kube-api-access-nq76t\") pod \"openstackclient\" (UID: \"73c988f1-a56c-4a07-8e72-0e928796bbd4\") " pod="openstack/openstackclient" Dec 09 09:07:17 crc kubenswrapper[4786]: I1209 09:07:17.194859 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/73c988f1-a56c-4a07-8e72-0e928796bbd4-openstack-config\") pod \"openstackclient\" (UID: \"73c988f1-a56c-4a07-8e72-0e928796bbd4\") " pod="openstack/openstackclient" Dec 09 09:07:17 crc kubenswrapper[4786]: I1209 09:07:17.194985 4786 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/73c988f1-a56c-4a07-8e72-0e928796bbd4-openstack-config-secret\") pod \"openstackclient\" (UID: \"73c988f1-a56c-4a07-8e72-0e928796bbd4\") " pod="openstack/openstackclient" Dec 09 09:07:17 crc kubenswrapper[4786]: I1209 09:07:17.195024 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73c988f1-a56c-4a07-8e72-0e928796bbd4-combined-ca-bundle\") pod \"openstackclient\" (UID: \"73c988f1-a56c-4a07-8e72-0e928796bbd4\") " pod="openstack/openstackclient" Dec 09 09:07:17 crc kubenswrapper[4786]: I1209 09:07:17.203365 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a3dff40-beee-4963-881e-647d277fcd7d" path="/var/lib/kubelet/pods/7a3dff40-beee-4963-881e-647d277fcd7d/volumes" Dec 09 09:07:17 crc kubenswrapper[4786]: I1209 09:07:17.297265 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/73c988f1-a56c-4a07-8e72-0e928796bbd4-openstack-config-secret\") pod \"openstackclient\" (UID: \"73c988f1-a56c-4a07-8e72-0e928796bbd4\") " pod="openstack/openstackclient" Dec 09 09:07:17 crc kubenswrapper[4786]: I1209 09:07:17.297317 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73c988f1-a56c-4a07-8e72-0e928796bbd4-combined-ca-bundle\") pod \"openstackclient\" (UID: \"73c988f1-a56c-4a07-8e72-0e928796bbd4\") " pod="openstack/openstackclient" Dec 09 09:07:17 crc kubenswrapper[4786]: I1209 09:07:17.297356 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nq76t\" (UniqueName: \"kubernetes.io/projected/73c988f1-a56c-4a07-8e72-0e928796bbd4-kube-api-access-nq76t\") pod \"openstackclient\" (UID: 
\"73c988f1-a56c-4a07-8e72-0e928796bbd4\") " pod="openstack/openstackclient" Dec 09 09:07:17 crc kubenswrapper[4786]: I1209 09:07:17.297486 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/73c988f1-a56c-4a07-8e72-0e928796bbd4-openstack-config\") pod \"openstackclient\" (UID: \"73c988f1-a56c-4a07-8e72-0e928796bbd4\") " pod="openstack/openstackclient" Dec 09 09:07:17 crc kubenswrapper[4786]: I1209 09:07:17.300059 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/73c988f1-a56c-4a07-8e72-0e928796bbd4-openstack-config\") pod \"openstackclient\" (UID: \"73c988f1-a56c-4a07-8e72-0e928796bbd4\") " pod="openstack/openstackclient" Dec 09 09:07:17 crc kubenswrapper[4786]: I1209 09:07:17.303931 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/73c988f1-a56c-4a07-8e72-0e928796bbd4-openstack-config-secret\") pod \"openstackclient\" (UID: \"73c988f1-a56c-4a07-8e72-0e928796bbd4\") " pod="openstack/openstackclient" Dec 09 09:07:17 crc kubenswrapper[4786]: I1209 09:07:17.320014 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73c988f1-a56c-4a07-8e72-0e928796bbd4-combined-ca-bundle\") pod \"openstackclient\" (UID: \"73c988f1-a56c-4a07-8e72-0e928796bbd4\") " pod="openstack/openstackclient" Dec 09 09:07:17 crc kubenswrapper[4786]: I1209 09:07:17.327924 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nq76t\" (UniqueName: \"kubernetes.io/projected/73c988f1-a56c-4a07-8e72-0e928796bbd4-kube-api-access-nq76t\") pod \"openstackclient\" (UID: \"73c988f1-a56c-4a07-8e72-0e928796bbd4\") " pod="openstack/openstackclient" Dec 09 09:07:17 crc kubenswrapper[4786]: I1209 09:07:17.401682 4786 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 09 09:07:17 crc kubenswrapper[4786]: I1209 09:07:17.505176 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"] Dec 09 09:07:17 crc kubenswrapper[4786]: I1209 09:07:17.513391 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"] Dec 09 09:07:17 crc kubenswrapper[4786]: I1209 09:07:17.541616 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Dec 09 09:07:17 crc kubenswrapper[4786]: I1209 09:07:17.543843 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 09 09:07:17 crc kubenswrapper[4786]: I1209 09:07:17.570293 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 09 09:07:17 crc kubenswrapper[4786]: E1209 09:07:17.705898 4786 log.go:32] "RunPodSandbox from runtime service failed" err=< Dec 09 09:07:17 crc kubenswrapper[4786]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_73c988f1-a56c-4a07-8e72-0e928796bbd4_0(f88e95ac58fcc7f45f6c844fcc558e3bd6a716e7fea5d6b3b22100448da88b52): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"f88e95ac58fcc7f45f6c844fcc558e3bd6a716e7fea5d6b3b22100448da88b52" Netns:"/var/run/netns/662b4c7d-ceeb-4158-bdb6-b7581290c5df" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=f88e95ac58fcc7f45f6c844fcc558e3bd6a716e7fea5d6b3b22100448da88b52;K8S_POD_UID=73c988f1-a56c-4a07-8e72-0e928796bbd4" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/73c988f1-a56c-4a07-8e72-0e928796bbd4]: expected pod UID 
"73c988f1-a56c-4a07-8e72-0e928796bbd4" but got "2423b332-8b9b-4a26-996b-582194eca3b7" from Kube API Dec 09 09:07:17 crc kubenswrapper[4786]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 09 09:07:17 crc kubenswrapper[4786]: > Dec 09 09:07:17 crc kubenswrapper[4786]: E1209 09:07:17.706022 4786 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Dec 09 09:07:17 crc kubenswrapper[4786]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_openstackclient_openstack_73c988f1-a56c-4a07-8e72-0e928796bbd4_0(f88e95ac58fcc7f45f6c844fcc558e3bd6a716e7fea5d6b3b22100448da88b52): error adding pod openstack_openstackclient to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"f88e95ac58fcc7f45f6c844fcc558e3bd6a716e7fea5d6b3b22100448da88b52" Netns:"/var/run/netns/662b4c7d-ceeb-4158-bdb6-b7581290c5df" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openstack;K8S_POD_NAME=openstackclient;K8S_POD_INFRA_CONTAINER_ID=f88e95ac58fcc7f45f6c844fcc558e3bd6a716e7fea5d6b3b22100448da88b52;K8S_POD_UID=73c988f1-a56c-4a07-8e72-0e928796bbd4" Path:"" ERRORED: error configuring pod [openstack/openstackclient] networking: Multus: [openstack/openstackclient/73c988f1-a56c-4a07-8e72-0e928796bbd4]: expected pod UID "73c988f1-a56c-4a07-8e72-0e928796bbd4" but got "2423b332-8b9b-4a26-996b-582194eca3b7" from Kube API Dec 09 09:07:17 crc kubenswrapper[4786]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 09 09:07:17 crc kubenswrapper[4786]: > pod="openstack/openstackclient" Dec 09 09:07:17 crc kubenswrapper[4786]: I1209 09:07:17.707371 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2423b332-8b9b-4a26-996b-582194eca3b7-openstack-config-secret\") pod \"openstackclient\" (UID: \"2423b332-8b9b-4a26-996b-582194eca3b7\") " pod="openstack/openstackclient" Dec 09 09:07:17 crc kubenswrapper[4786]: I1209 09:07:17.707495 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2423b332-8b9b-4a26-996b-582194eca3b7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"2423b332-8b9b-4a26-996b-582194eca3b7\") " pod="openstack/openstackclient" Dec 09 09:07:17 crc kubenswrapper[4786]: I1209 09:07:17.927235 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2423b332-8b9b-4a26-996b-582194eca3b7-openstack-config\") pod \"openstackclient\" (UID: \"2423b332-8b9b-4a26-996b-582194eca3b7\") " pod="openstack/openstackclient" Dec 09 09:07:17 crc kubenswrapper[4786]: I1209 09:07:17.927631 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdrdj\" (UniqueName: \"kubernetes.io/projected/2423b332-8b9b-4a26-996b-582194eca3b7-kube-api-access-qdrdj\") pod \"openstackclient\" (UID: \"2423b332-8b9b-4a26-996b-582194eca3b7\") " 
pod="openstack/openstackclient" Dec 09 09:07:18 crc kubenswrapper[4786]: I1209 09:07:18.031950 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2423b332-8b9b-4a26-996b-582194eca3b7-openstack-config-secret\") pod \"openstackclient\" (UID: \"2423b332-8b9b-4a26-996b-582194eca3b7\") " pod="openstack/openstackclient" Dec 09 09:07:18 crc kubenswrapper[4786]: I1209 09:07:18.032024 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2423b332-8b9b-4a26-996b-582194eca3b7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"2423b332-8b9b-4a26-996b-582194eca3b7\") " pod="openstack/openstackclient" Dec 09 09:07:18 crc kubenswrapper[4786]: I1209 09:07:18.032080 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2423b332-8b9b-4a26-996b-582194eca3b7-openstack-config\") pod \"openstackclient\" (UID: \"2423b332-8b9b-4a26-996b-582194eca3b7\") " pod="openstack/openstackclient" Dec 09 09:07:18 crc kubenswrapper[4786]: I1209 09:07:18.032168 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdrdj\" (UniqueName: \"kubernetes.io/projected/2423b332-8b9b-4a26-996b-582194eca3b7-kube-api-access-qdrdj\") pod \"openstackclient\" (UID: \"2423b332-8b9b-4a26-996b-582194eca3b7\") " pod="openstack/openstackclient" Dec 09 09:07:18 crc kubenswrapper[4786]: I1209 09:07:18.034165 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/2423b332-8b9b-4a26-996b-582194eca3b7-openstack-config\") pod \"openstackclient\" (UID: \"2423b332-8b9b-4a26-996b-582194eca3b7\") " pod="openstack/openstackclient" Dec 09 09:07:18 crc kubenswrapper[4786]: I1209 09:07:18.069498 4786 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/2423b332-8b9b-4a26-996b-582194eca3b7-openstack-config-secret\") pod \"openstackclient\" (UID: \"2423b332-8b9b-4a26-996b-582194eca3b7\") " pod="openstack/openstackclient" Dec 09 09:07:18 crc kubenswrapper[4786]: I1209 09:07:18.069635 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2423b332-8b9b-4a26-996b-582194eca3b7-combined-ca-bundle\") pod \"openstackclient\" (UID: \"2423b332-8b9b-4a26-996b-582194eca3b7\") " pod="openstack/openstackclient" Dec 09 09:07:18 crc kubenswrapper[4786]: I1209 09:07:18.073468 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdrdj\" (UniqueName: \"kubernetes.io/projected/2423b332-8b9b-4a26-996b-582194eca3b7-kube-api-access-qdrdj\") pod \"openstackclient\" (UID: \"2423b332-8b9b-4a26-996b-582194eca3b7\") " pod="openstack/openstackclient" Dec 09 09:07:18 crc kubenswrapper[4786]: I1209 09:07:18.173949 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 09 09:07:18 crc kubenswrapper[4786]: I1209 09:07:18.582156 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 09 09:07:18 crc kubenswrapper[4786]: I1209 09:07:18.589180 4786 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="73c988f1-a56c-4a07-8e72-0e928796bbd4" podUID="2423b332-8b9b-4a26-996b-582194eca3b7" Dec 09 09:07:18 crc kubenswrapper[4786]: I1209 09:07:18.598679 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 09 09:07:18 crc kubenswrapper[4786]: I1209 09:07:18.725037 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 09 09:07:18 crc kubenswrapper[4786]: I1209 09:07:18.753156 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nq76t\" (UniqueName: \"kubernetes.io/projected/73c988f1-a56c-4a07-8e72-0e928796bbd4-kube-api-access-nq76t\") pod \"73c988f1-a56c-4a07-8e72-0e928796bbd4\" (UID: \"73c988f1-a56c-4a07-8e72-0e928796bbd4\") " Dec 09 09:07:18 crc kubenswrapper[4786]: I1209 09:07:18.753543 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/73c988f1-a56c-4a07-8e72-0e928796bbd4-openstack-config-secret\") pod \"73c988f1-a56c-4a07-8e72-0e928796bbd4\" (UID: \"73c988f1-a56c-4a07-8e72-0e928796bbd4\") " Dec 09 09:07:18 crc kubenswrapper[4786]: I1209 09:07:18.753575 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73c988f1-a56c-4a07-8e72-0e928796bbd4-combined-ca-bundle\") pod \"73c988f1-a56c-4a07-8e72-0e928796bbd4\" (UID: \"73c988f1-a56c-4a07-8e72-0e928796bbd4\") " Dec 09 09:07:18 crc kubenswrapper[4786]: I1209 09:07:18.753631 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/73c988f1-a56c-4a07-8e72-0e928796bbd4-openstack-config\") pod \"73c988f1-a56c-4a07-8e72-0e928796bbd4\" (UID: \"73c988f1-a56c-4a07-8e72-0e928796bbd4\") " Dec 09 09:07:18 crc kubenswrapper[4786]: I1209 09:07:18.754949 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73c988f1-a56c-4a07-8e72-0e928796bbd4-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "73c988f1-a56c-4a07-8e72-0e928796bbd4" (UID: 
"73c988f1-a56c-4a07-8e72-0e928796bbd4"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:07:18 crc kubenswrapper[4786]: I1209 09:07:18.766563 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73c988f1-a56c-4a07-8e72-0e928796bbd4-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "73c988f1-a56c-4a07-8e72-0e928796bbd4" (UID: "73c988f1-a56c-4a07-8e72-0e928796bbd4"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:07:18 crc kubenswrapper[4786]: I1209 09:07:18.766944 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73c988f1-a56c-4a07-8e72-0e928796bbd4-kube-api-access-nq76t" (OuterVolumeSpecName: "kube-api-access-nq76t") pod "73c988f1-a56c-4a07-8e72-0e928796bbd4" (UID: "73c988f1-a56c-4a07-8e72-0e928796bbd4"). InnerVolumeSpecName "kube-api-access-nq76t". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:07:18 crc kubenswrapper[4786]: I1209 09:07:18.771663 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73c988f1-a56c-4a07-8e72-0e928796bbd4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "73c988f1-a56c-4a07-8e72-0e928796bbd4" (UID: "73c988f1-a56c-4a07-8e72-0e928796bbd4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:07:18 crc kubenswrapper[4786]: I1209 09:07:18.858960 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nq76t\" (UniqueName: \"kubernetes.io/projected/73c988f1-a56c-4a07-8e72-0e928796bbd4-kube-api-access-nq76t\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:18 crc kubenswrapper[4786]: I1209 09:07:18.859004 4786 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/73c988f1-a56c-4a07-8e72-0e928796bbd4-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:18 crc kubenswrapper[4786]: I1209 09:07:18.859022 4786 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73c988f1-a56c-4a07-8e72-0e928796bbd4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:18 crc kubenswrapper[4786]: I1209 09:07:18.859035 4786 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/73c988f1-a56c-4a07-8e72-0e928796bbd4-openstack-config\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:19 crc kubenswrapper[4786]: I1209 09:07:19.013624 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lkwnh" podUID="656e7256-97a8-4036-b7b5-62c66bf06129" containerName="registry-server" probeResult="failure" output=< Dec 09 09:07:19 crc kubenswrapper[4786]: timeout: failed to connect service ":50051" within 1s Dec 09 09:07:19 crc kubenswrapper[4786]: > Dec 09 09:07:19 crc kubenswrapper[4786]: I1209 09:07:19.043994 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-69744ddc66-fp6bq" Dec 09 09:07:19 crc kubenswrapper[4786]: I1209 09:07:19.382302 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73c988f1-a56c-4a07-8e72-0e928796bbd4" 
path="/var/lib/kubelet/pods/73c988f1-a56c-4a07-8e72-0e928796bbd4/volumes" Dec 09 09:07:19 crc kubenswrapper[4786]: I1209 09:07:19.605856 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"2423b332-8b9b-4a26-996b-582194eca3b7","Type":"ContainerStarted","Data":"7d46d9bc35af59921505c2d2eee5a03b1cf27dd8465d156910a3adc19a1be51b"} Dec 09 09:07:19 crc kubenswrapper[4786]: I1209 09:07:19.605902 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Dec 09 09:07:19 crc kubenswrapper[4786]: I1209 09:07:19.620486 4786 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openstack/openstackclient" oldPodUID="73c988f1-a56c-4a07-8e72-0e928796bbd4" podUID="2423b332-8b9b-4a26-996b-582194eca3b7" Dec 09 09:07:21 crc kubenswrapper[4786]: I1209 09:07:21.989650 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-69744ddc66-fp6bq" Dec 09 09:07:22 crc kubenswrapper[4786]: I1209 09:07:22.070389 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5785d674c4-7cbc6"] Dec 09 09:07:22 crc kubenswrapper[4786]: I1209 09:07:22.071223 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5785d674c4-7cbc6" podUID="b3b7375b-20bf-4c11-bbff-ae554685503a" containerName="barbican-api" containerID="cri-o://ea41e04cd26025258ce2a8b265b03454e9027bb6dc29ecefe0ac7779d810df52" gracePeriod=30 Dec 09 09:07:22 crc kubenswrapper[4786]: I1209 09:07:22.071460 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5785d674c4-7cbc6" podUID="b3b7375b-20bf-4c11-bbff-ae554685503a" containerName="barbican-api-log" containerID="cri-o://f12b028c7e5ee2390737bbd9ea80cc40a6d7ca231a0a4244df9f2f91c209887b" gracePeriod=30 Dec 09 09:07:23 crc kubenswrapper[4786]: I1209 09:07:23.013373 4786 generic.go:334] "Generic 
(PLEG): container finished" podID="b3b7375b-20bf-4c11-bbff-ae554685503a" containerID="f12b028c7e5ee2390737bbd9ea80cc40a6d7ca231a0a4244df9f2f91c209887b" exitCode=143 Dec 09 09:07:23 crc kubenswrapper[4786]: I1209 09:07:23.013484 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5785d674c4-7cbc6" event={"ID":"b3b7375b-20bf-4c11-bbff-ae554685503a","Type":"ContainerDied","Data":"f12b028c7e5ee2390737bbd9ea80cc40a6d7ca231a0a4244df9f2f91c209887b"} Dec 09 09:07:23 crc kubenswrapper[4786]: I1209 09:07:23.016749 4786 generic.go:334] "Generic (PLEG): container finished" podID="8c28b549-bfff-47f7-b262-c3203bd88cb1" containerID="d9a2cc35960764204adb4db837e952174a5c44528e376a8416466db2145ebe1c" exitCode=0 Dec 09 09:07:23 crc kubenswrapper[4786]: I1209 09:07:23.016787 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-9ddvk" event={"ID":"8c28b549-bfff-47f7-b262-c3203bd88cb1","Type":"ContainerDied","Data":"d9a2cc35960764204adb4db837e952174a5c44528e376a8416466db2145ebe1c"} Dec 09 09:07:24 crc kubenswrapper[4786]: I1209 09:07:24.662950 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-9ddvk" Dec 09 09:07:24 crc kubenswrapper[4786]: I1209 09:07:24.827345 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c28b549-bfff-47f7-b262-c3203bd88cb1-combined-ca-bundle\") pod \"8c28b549-bfff-47f7-b262-c3203bd88cb1\" (UID: \"8c28b549-bfff-47f7-b262-c3203bd88cb1\") " Dec 09 09:07:24 crc kubenswrapper[4786]: I1209 09:07:24.827472 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8c28b549-bfff-47f7-b262-c3203bd88cb1-db-sync-config-data\") pod \"8c28b549-bfff-47f7-b262-c3203bd88cb1\" (UID: \"8c28b549-bfff-47f7-b262-c3203bd88cb1\") " Dec 09 09:07:24 crc kubenswrapper[4786]: I1209 09:07:24.827605 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c28b549-bfff-47f7-b262-c3203bd88cb1-config-data\") pod \"8c28b549-bfff-47f7-b262-c3203bd88cb1\" (UID: \"8c28b549-bfff-47f7-b262-c3203bd88cb1\") " Dec 09 09:07:24 crc kubenswrapper[4786]: I1209 09:07:24.827657 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kz2nh\" (UniqueName: \"kubernetes.io/projected/8c28b549-bfff-47f7-b262-c3203bd88cb1-kube-api-access-kz2nh\") pod \"8c28b549-bfff-47f7-b262-c3203bd88cb1\" (UID: \"8c28b549-bfff-47f7-b262-c3203bd88cb1\") " Dec 09 09:07:24 crc kubenswrapper[4786]: I1209 09:07:24.827740 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8c28b549-bfff-47f7-b262-c3203bd88cb1-etc-machine-id\") pod \"8c28b549-bfff-47f7-b262-c3203bd88cb1\" (UID: \"8c28b549-bfff-47f7-b262-c3203bd88cb1\") " Dec 09 09:07:24 crc kubenswrapper[4786]: I1209 09:07:24.827787 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/8c28b549-bfff-47f7-b262-c3203bd88cb1-scripts\") pod \"8c28b549-bfff-47f7-b262-c3203bd88cb1\" (UID: \"8c28b549-bfff-47f7-b262-c3203bd88cb1\") " Dec 09 09:07:24 crc kubenswrapper[4786]: I1209 09:07:24.828344 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8c28b549-bfff-47f7-b262-c3203bd88cb1-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8c28b549-bfff-47f7-b262-c3203bd88cb1" (UID: "8c28b549-bfff-47f7-b262-c3203bd88cb1"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 09:07:24 crc kubenswrapper[4786]: I1209 09:07:24.836701 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c28b549-bfff-47f7-b262-c3203bd88cb1-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "8c28b549-bfff-47f7-b262-c3203bd88cb1" (UID: "8c28b549-bfff-47f7-b262-c3203bd88cb1"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:07:24 crc kubenswrapper[4786]: I1209 09:07:24.836907 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c28b549-bfff-47f7-b262-c3203bd88cb1-scripts" (OuterVolumeSpecName: "scripts") pod "8c28b549-bfff-47f7-b262-c3203bd88cb1" (UID: "8c28b549-bfff-47f7-b262-c3203bd88cb1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:07:24 crc kubenswrapper[4786]: I1209 09:07:24.847904 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c28b549-bfff-47f7-b262-c3203bd88cb1-kube-api-access-kz2nh" (OuterVolumeSpecName: "kube-api-access-kz2nh") pod "8c28b549-bfff-47f7-b262-c3203bd88cb1" (UID: "8c28b549-bfff-47f7-b262-c3203bd88cb1"). InnerVolumeSpecName "kube-api-access-kz2nh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:07:24 crc kubenswrapper[4786]: I1209 09:07:24.879290 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c28b549-bfff-47f7-b262-c3203bd88cb1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8c28b549-bfff-47f7-b262-c3203bd88cb1" (UID: "8c28b549-bfff-47f7-b262-c3203bd88cb1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:07:24 crc kubenswrapper[4786]: I1209 09:07:24.929968 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kz2nh\" (UniqueName: \"kubernetes.io/projected/8c28b549-bfff-47f7-b262-c3203bd88cb1-kube-api-access-kz2nh\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:24 crc kubenswrapper[4786]: I1209 09:07:24.930019 4786 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8c28b549-bfff-47f7-b262-c3203bd88cb1-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:24 crc kubenswrapper[4786]: I1209 09:07:24.930033 4786 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c28b549-bfff-47f7-b262-c3203bd88cb1-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:24 crc kubenswrapper[4786]: I1209 09:07:24.930045 4786 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c28b549-bfff-47f7-b262-c3203bd88cb1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:24 crc kubenswrapper[4786]: I1209 09:07:24.930057 4786 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/8c28b549-bfff-47f7-b262-c3203bd88cb1-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:24 crc kubenswrapper[4786]: I1209 09:07:24.954207 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/8c28b549-bfff-47f7-b262-c3203bd88cb1-config-data" (OuterVolumeSpecName: "config-data") pod "8c28b549-bfff-47f7-b262-c3203bd88cb1" (UID: "8c28b549-bfff-47f7-b262-c3203bd88cb1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:07:24 crc kubenswrapper[4786]: I1209 09:07:24.990160 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 09:07:24 crc kubenswrapper[4786]: I1209 09:07:24.990262 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 09:07:25 crc kubenswrapper[4786]: I1209 09:07:25.033065 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c28b549-bfff-47f7-b262-c3203bd88cb1-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:25 crc kubenswrapper[4786]: I1209 09:07:25.070631 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-9ddvk" event={"ID":"8c28b549-bfff-47f7-b262-c3203bd88cb1","Type":"ContainerDied","Data":"791444c398922b7702158ca0b7a6260a2420c6d680e1e82328689a9c295117f0"} Dec 09 09:07:25 crc kubenswrapper[4786]: I1209 09:07:25.070686 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="791444c398922b7702158ca0b7a6260a2420c6d680e1e82328689a9c295117f0" Dec 09 09:07:25 crc kubenswrapper[4786]: I1209 09:07:25.070801 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-9ddvk" Dec 09 09:07:25 crc kubenswrapper[4786]: I1209 09:07:25.697597 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 09 09:07:25 crc kubenswrapper[4786]: E1209 09:07:25.698360 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c28b549-bfff-47f7-b262-c3203bd88cb1" containerName="cinder-db-sync" Dec 09 09:07:25 crc kubenswrapper[4786]: I1209 09:07:25.698379 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c28b549-bfff-47f7-b262-c3203bd88cb1" containerName="cinder-db-sync" Dec 09 09:07:25 crc kubenswrapper[4786]: I1209 09:07:25.698690 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c28b549-bfff-47f7-b262-c3203bd88cb1" containerName="cinder-db-sync" Dec 09 09:07:25 crc kubenswrapper[4786]: I1209 09:07:25.700336 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 09 09:07:25 crc kubenswrapper[4786]: I1209 09:07:25.714595 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 09 09:07:25 crc kubenswrapper[4786]: I1209 09:07:25.715393 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 09 09:07:25 crc kubenswrapper[4786]: I1209 09:07:25.715629 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-hjqlg" Dec 09 09:07:25 crc kubenswrapper[4786]: I1209 09:07:25.715819 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 09 09:07:25 crc kubenswrapper[4786]: I1209 09:07:25.722632 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 09 09:07:25 crc kubenswrapper[4786]: I1209 09:07:25.798686 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/b9b641af-aaf9-493f-b738-89b62abb3e95-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b9b641af-aaf9-493f-b738-89b62abb3e95\") " pod="openstack/cinder-scheduler-0" Dec 09 09:07:25 crc kubenswrapper[4786]: I1209 09:07:25.798815 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9b641af-aaf9-493f-b738-89b62abb3e95-config-data\") pod \"cinder-scheduler-0\" (UID: \"b9b641af-aaf9-493f-b738-89b62abb3e95\") " pod="openstack/cinder-scheduler-0" Dec 09 09:07:25 crc kubenswrapper[4786]: I1209 09:07:25.798937 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9b641af-aaf9-493f-b738-89b62abb3e95-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b9b641af-aaf9-493f-b738-89b62abb3e95\") " pod="openstack/cinder-scheduler-0" Dec 09 09:07:25 crc kubenswrapper[4786]: I1209 09:07:25.798989 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b9b641af-aaf9-493f-b738-89b62abb3e95-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b9b641af-aaf9-493f-b738-89b62abb3e95\") " pod="openstack/cinder-scheduler-0" Dec 09 09:07:25 crc kubenswrapper[4786]: I1209 09:07:25.799044 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwbcv\" (UniqueName: \"kubernetes.io/projected/b9b641af-aaf9-493f-b738-89b62abb3e95-kube-api-access-zwbcv\") pod \"cinder-scheduler-0\" (UID: \"b9b641af-aaf9-493f-b738-89b62abb3e95\") " pod="openstack/cinder-scheduler-0" Dec 09 09:07:25 crc kubenswrapper[4786]: I1209 09:07:25.799076 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/b9b641af-aaf9-493f-b738-89b62abb3e95-scripts\") pod \"cinder-scheduler-0\" (UID: \"b9b641af-aaf9-493f-b738-89b62abb3e95\") " pod="openstack/cinder-scheduler-0" Dec 09 09:07:25 crc kubenswrapper[4786]: I1209 09:07:25.804844 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf5d89b49-4txgz"] Dec 09 09:07:25 crc kubenswrapper[4786]: I1209 09:07:25.813055 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf5d89b49-4txgz" Dec 09 09:07:25 crc kubenswrapper[4786]: I1209 09:07:25.831452 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf5d89b49-4txgz"] Dec 09 09:07:25 crc kubenswrapper[4786]: I1209 09:07:25.901263 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b9b641af-aaf9-493f-b738-89b62abb3e95-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b9b641af-aaf9-493f-b738-89b62abb3e95\") " pod="openstack/cinder-scheduler-0" Dec 09 09:07:25 crc kubenswrapper[4786]: I1209 09:07:25.901352 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwbcv\" (UniqueName: \"kubernetes.io/projected/b9b641af-aaf9-493f-b738-89b62abb3e95-kube-api-access-zwbcv\") pod \"cinder-scheduler-0\" (UID: \"b9b641af-aaf9-493f-b738-89b62abb3e95\") " pod="openstack/cinder-scheduler-0" Dec 09 09:07:25 crc kubenswrapper[4786]: I1209 09:07:25.901393 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9b641af-aaf9-493f-b738-89b62abb3e95-scripts\") pod \"cinder-scheduler-0\" (UID: \"b9b641af-aaf9-493f-b738-89b62abb3e95\") " pod="openstack/cinder-scheduler-0" Dec 09 09:07:25 crc kubenswrapper[4786]: I1209 09:07:25.901465 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/4dead0f7-489a-4677-afed-bcb93a525277-dns-swift-storage-0\") pod \"dnsmasq-dns-5bf5d89b49-4txgz\" (UID: \"4dead0f7-489a-4677-afed-bcb93a525277\") " pod="openstack/dnsmasq-dns-5bf5d89b49-4txgz" Dec 09 09:07:25 crc kubenswrapper[4786]: I1209 09:07:25.901557 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b9b641af-aaf9-493f-b738-89b62abb3e95-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b9b641af-aaf9-493f-b738-89b62abb3e95\") " pod="openstack/cinder-scheduler-0" Dec 09 09:07:25 crc kubenswrapper[4786]: I1209 09:07:25.901583 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4dead0f7-489a-4677-afed-bcb93a525277-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf5d89b49-4txgz\" (UID: \"4dead0f7-489a-4677-afed-bcb93a525277\") " pod="openstack/dnsmasq-dns-5bf5d89b49-4txgz" Dec 09 09:07:25 crc kubenswrapper[4786]: I1209 09:07:25.901611 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms8c6\" (UniqueName: \"kubernetes.io/projected/4dead0f7-489a-4677-afed-bcb93a525277-kube-api-access-ms8c6\") pod \"dnsmasq-dns-5bf5d89b49-4txgz\" (UID: \"4dead0f7-489a-4677-afed-bcb93a525277\") " pod="openstack/dnsmasq-dns-5bf5d89b49-4txgz" Dec 09 09:07:25 crc kubenswrapper[4786]: I1209 09:07:25.901638 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dead0f7-489a-4677-afed-bcb93a525277-config\") pod \"dnsmasq-dns-5bf5d89b49-4txgz\" (UID: \"4dead0f7-489a-4677-afed-bcb93a525277\") " pod="openstack/dnsmasq-dns-5bf5d89b49-4txgz" Dec 09 09:07:25 crc kubenswrapper[4786]: I1209 09:07:25.901675 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/4dead0f7-489a-4677-afed-bcb93a525277-ovsdbserver-sb\") pod \"dnsmasq-dns-5bf5d89b49-4txgz\" (UID: \"4dead0f7-489a-4677-afed-bcb93a525277\") " pod="openstack/dnsmasq-dns-5bf5d89b49-4txgz" Dec 09 09:07:25 crc kubenswrapper[4786]: I1209 09:07:25.901715 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9b641af-aaf9-493f-b738-89b62abb3e95-config-data\") pod \"cinder-scheduler-0\" (UID: \"b9b641af-aaf9-493f-b738-89b62abb3e95\") " pod="openstack/cinder-scheduler-0" Dec 09 09:07:25 crc kubenswrapper[4786]: I1209 09:07:25.901734 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4dead0f7-489a-4677-afed-bcb93a525277-dns-svc\") pod \"dnsmasq-dns-5bf5d89b49-4txgz\" (UID: \"4dead0f7-489a-4677-afed-bcb93a525277\") " pod="openstack/dnsmasq-dns-5bf5d89b49-4txgz" Dec 09 09:07:25 crc kubenswrapper[4786]: I1209 09:07:25.901797 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9b641af-aaf9-493f-b738-89b62abb3e95-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"b9b641af-aaf9-493f-b738-89b62abb3e95\") " pod="openstack/cinder-scheduler-0" Dec 09 09:07:25 crc kubenswrapper[4786]: I1209 09:07:25.902179 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b9b641af-aaf9-493f-b738-89b62abb3e95-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"b9b641af-aaf9-493f-b738-89b62abb3e95\") " pod="openstack/cinder-scheduler-0" Dec 09 09:07:25 crc kubenswrapper[4786]: I1209 09:07:25.911959 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9b641af-aaf9-493f-b738-89b62abb3e95-combined-ca-bundle\") pod \"cinder-scheduler-0\" 
(UID: \"b9b641af-aaf9-493f-b738-89b62abb3e95\") " pod="openstack/cinder-scheduler-0" Dec 09 09:07:25 crc kubenswrapper[4786]: I1209 09:07:25.913458 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b9b641af-aaf9-493f-b738-89b62abb3e95-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"b9b641af-aaf9-493f-b738-89b62abb3e95\") " pod="openstack/cinder-scheduler-0" Dec 09 09:07:25 crc kubenswrapper[4786]: I1209 09:07:25.914926 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9b641af-aaf9-493f-b738-89b62abb3e95-scripts\") pod \"cinder-scheduler-0\" (UID: \"b9b641af-aaf9-493f-b738-89b62abb3e95\") " pod="openstack/cinder-scheduler-0" Dec 09 09:07:25 crc kubenswrapper[4786]: I1209 09:07:25.915744 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9b641af-aaf9-493f-b738-89b62abb3e95-config-data\") pod \"cinder-scheduler-0\" (UID: \"b9b641af-aaf9-493f-b738-89b62abb3e95\") " pod="openstack/cinder-scheduler-0" Dec 09 09:07:25 crc kubenswrapper[4786]: I1209 09:07:25.923181 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwbcv\" (UniqueName: \"kubernetes.io/projected/b9b641af-aaf9-493f-b738-89b62abb3e95-kube-api-access-zwbcv\") pod \"cinder-scheduler-0\" (UID: \"b9b641af-aaf9-493f-b738-89b62abb3e95\") " pod="openstack/cinder-scheduler-0" Dec 09 09:07:26 crc kubenswrapper[4786]: I1209 09:07:26.002572 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4dead0f7-489a-4677-afed-bcb93a525277-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf5d89b49-4txgz\" (UID: \"4dead0f7-489a-4677-afed-bcb93a525277\") " pod="openstack/dnsmasq-dns-5bf5d89b49-4txgz" Dec 09 09:07:26 crc kubenswrapper[4786]: I1209 09:07:26.002646 4786 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms8c6\" (UniqueName: \"kubernetes.io/projected/4dead0f7-489a-4677-afed-bcb93a525277-kube-api-access-ms8c6\") pod \"dnsmasq-dns-5bf5d89b49-4txgz\" (UID: \"4dead0f7-489a-4677-afed-bcb93a525277\") " pod="openstack/dnsmasq-dns-5bf5d89b49-4txgz" Dec 09 09:07:26 crc kubenswrapper[4786]: I1209 09:07:26.002678 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dead0f7-489a-4677-afed-bcb93a525277-config\") pod \"dnsmasq-dns-5bf5d89b49-4txgz\" (UID: \"4dead0f7-489a-4677-afed-bcb93a525277\") " pod="openstack/dnsmasq-dns-5bf5d89b49-4txgz" Dec 09 09:07:26 crc kubenswrapper[4786]: I1209 09:07:26.002715 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4dead0f7-489a-4677-afed-bcb93a525277-ovsdbserver-sb\") pod \"dnsmasq-dns-5bf5d89b49-4txgz\" (UID: \"4dead0f7-489a-4677-afed-bcb93a525277\") " pod="openstack/dnsmasq-dns-5bf5d89b49-4txgz" Dec 09 09:07:26 crc kubenswrapper[4786]: I1209 09:07:26.002748 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4dead0f7-489a-4677-afed-bcb93a525277-dns-svc\") pod \"dnsmasq-dns-5bf5d89b49-4txgz\" (UID: \"4dead0f7-489a-4677-afed-bcb93a525277\") " pod="openstack/dnsmasq-dns-5bf5d89b49-4txgz" Dec 09 09:07:26 crc kubenswrapper[4786]: I1209 09:07:26.002831 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4dead0f7-489a-4677-afed-bcb93a525277-dns-swift-storage-0\") pod \"dnsmasq-dns-5bf5d89b49-4txgz\" (UID: \"4dead0f7-489a-4677-afed-bcb93a525277\") " pod="openstack/dnsmasq-dns-5bf5d89b49-4txgz" Dec 09 09:07:26 crc kubenswrapper[4786]: I1209 09:07:26.003682 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4dead0f7-489a-4677-afed-bcb93a525277-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf5d89b49-4txgz\" (UID: \"4dead0f7-489a-4677-afed-bcb93a525277\") " pod="openstack/dnsmasq-dns-5bf5d89b49-4txgz" Dec 09 09:07:26 crc kubenswrapper[4786]: I1209 09:07:26.004284 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4dead0f7-489a-4677-afed-bcb93a525277-dns-swift-storage-0\") pod \"dnsmasq-dns-5bf5d89b49-4txgz\" (UID: \"4dead0f7-489a-4677-afed-bcb93a525277\") " pod="openstack/dnsmasq-dns-5bf5d89b49-4txgz" Dec 09 09:07:26 crc kubenswrapper[4786]: I1209 09:07:26.004294 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4dead0f7-489a-4677-afed-bcb93a525277-ovsdbserver-sb\") pod \"dnsmasq-dns-5bf5d89b49-4txgz\" (UID: \"4dead0f7-489a-4677-afed-bcb93a525277\") " pod="openstack/dnsmasq-dns-5bf5d89b49-4txgz" Dec 09 09:07:26 crc kubenswrapper[4786]: I1209 09:07:26.004351 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4dead0f7-489a-4677-afed-bcb93a525277-dns-svc\") pod \"dnsmasq-dns-5bf5d89b49-4txgz\" (UID: \"4dead0f7-489a-4677-afed-bcb93a525277\") " pod="openstack/dnsmasq-dns-5bf5d89b49-4txgz" Dec 09 09:07:26 crc kubenswrapper[4786]: I1209 09:07:26.004606 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dead0f7-489a-4677-afed-bcb93a525277-config\") pod \"dnsmasq-dns-5bf5d89b49-4txgz\" (UID: \"4dead0f7-489a-4677-afed-bcb93a525277\") " pod="openstack/dnsmasq-dns-5bf5d89b49-4txgz" Dec 09 09:07:26 crc kubenswrapper[4786]: I1209 09:07:26.027314 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms8c6\" (UniqueName: 
\"kubernetes.io/projected/4dead0f7-489a-4677-afed-bcb93a525277-kube-api-access-ms8c6\") pod \"dnsmasq-dns-5bf5d89b49-4txgz\" (UID: \"4dead0f7-489a-4677-afed-bcb93a525277\") " pod="openstack/dnsmasq-dns-5bf5d89b49-4txgz" Dec 09 09:07:26 crc kubenswrapper[4786]: I1209 09:07:26.028190 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 09 09:07:26 crc kubenswrapper[4786]: I1209 09:07:26.137680 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf5d89b49-4txgz" Dec 09 09:07:26 crc kubenswrapper[4786]: I1209 09:07:26.155677 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Dec 09 09:07:26 crc kubenswrapper[4786]: I1209 09:07:26.155764 4786 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0" Dec 09 09:07:26 crc kubenswrapper[4786]: I1209 09:07:26.157055 4786 scope.go:117] "RemoveContainer" containerID="a560b8ce4c6949793bd95b3c67bc632fa0c176608dc24c790c22d5ee77043340" Dec 09 09:07:26 crc kubenswrapper[4786]: I1209 09:07:26.298515 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 09 09:07:26 crc kubenswrapper[4786]: I1209 09:07:26.300756 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 09 09:07:26 crc kubenswrapper[4786]: I1209 09:07:26.321348 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 09 09:07:26 crc kubenswrapper[4786]: I1209 09:07:26.338359 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 09 09:07:26 crc kubenswrapper[4786]: I1209 09:07:26.428755 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3e17544-6307-4a46-b381-34744a99cbb5-config-data-custom\") pod \"cinder-api-0\" (UID: \"e3e17544-6307-4a46-b381-34744a99cbb5\") " pod="openstack/cinder-api-0" Dec 09 09:07:26 crc kubenswrapper[4786]: I1209 09:07:26.428904 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3e17544-6307-4a46-b381-34744a99cbb5-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e3e17544-6307-4a46-b381-34744a99cbb5\") " pod="openstack/cinder-api-0" Dec 09 09:07:26 crc kubenswrapper[4786]: I1209 09:07:26.428965 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3e17544-6307-4a46-b381-34744a99cbb5-scripts\") pod \"cinder-api-0\" (UID: \"e3e17544-6307-4a46-b381-34744a99cbb5\") " pod="openstack/cinder-api-0" Dec 09 09:07:26 crc kubenswrapper[4786]: I1209 09:07:26.429314 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3e17544-6307-4a46-b381-34744a99cbb5-logs\") pod \"cinder-api-0\" (UID: \"e3e17544-6307-4a46-b381-34744a99cbb5\") " pod="openstack/cinder-api-0" Dec 09 09:07:26 crc kubenswrapper[4786]: I1209 09:07:26.429638 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e3e17544-6307-4a46-b381-34744a99cbb5-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e3e17544-6307-4a46-b381-34744a99cbb5\") " pod="openstack/cinder-api-0" Dec 09 09:07:26 crc kubenswrapper[4786]: I1209 09:07:26.429809 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wg82p\" (UniqueName: \"kubernetes.io/projected/e3e17544-6307-4a46-b381-34744a99cbb5-kube-api-access-wg82p\") pod \"cinder-api-0\" (UID: \"e3e17544-6307-4a46-b381-34744a99cbb5\") " pod="openstack/cinder-api-0" Dec 09 09:07:26 crc kubenswrapper[4786]: I1209 09:07:26.429913 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3e17544-6307-4a46-b381-34744a99cbb5-config-data\") pod \"cinder-api-0\" (UID: \"e3e17544-6307-4a46-b381-34744a99cbb5\") " pod="openstack/cinder-api-0" Dec 09 09:07:26 crc kubenswrapper[4786]: I1209 09:07:26.537361 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wg82p\" (UniqueName: \"kubernetes.io/projected/e3e17544-6307-4a46-b381-34744a99cbb5-kube-api-access-wg82p\") pod \"cinder-api-0\" (UID: \"e3e17544-6307-4a46-b381-34744a99cbb5\") " pod="openstack/cinder-api-0" Dec 09 09:07:26 crc kubenswrapper[4786]: I1209 09:07:26.537521 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3e17544-6307-4a46-b381-34744a99cbb5-config-data\") pod \"cinder-api-0\" (UID: \"e3e17544-6307-4a46-b381-34744a99cbb5\") " pod="openstack/cinder-api-0" Dec 09 09:07:26 crc kubenswrapper[4786]: I1209 09:07:26.538557 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3e17544-6307-4a46-b381-34744a99cbb5-config-data-custom\") pod \"cinder-api-0\" (UID: 
\"e3e17544-6307-4a46-b381-34744a99cbb5\") " pod="openstack/cinder-api-0" Dec 09 09:07:26 crc kubenswrapper[4786]: I1209 09:07:26.538606 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3e17544-6307-4a46-b381-34744a99cbb5-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e3e17544-6307-4a46-b381-34744a99cbb5\") " pod="openstack/cinder-api-0" Dec 09 09:07:26 crc kubenswrapper[4786]: I1209 09:07:26.538716 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3e17544-6307-4a46-b381-34744a99cbb5-scripts\") pod \"cinder-api-0\" (UID: \"e3e17544-6307-4a46-b381-34744a99cbb5\") " pod="openstack/cinder-api-0" Dec 09 09:07:26 crc kubenswrapper[4786]: I1209 09:07:26.538833 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3e17544-6307-4a46-b381-34744a99cbb5-logs\") pod \"cinder-api-0\" (UID: \"e3e17544-6307-4a46-b381-34744a99cbb5\") " pod="openstack/cinder-api-0" Dec 09 09:07:26 crc kubenswrapper[4786]: I1209 09:07:26.551257 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e3e17544-6307-4a46-b381-34744a99cbb5-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e3e17544-6307-4a46-b381-34744a99cbb5\") " pod="openstack/cinder-api-0" Dec 09 09:07:26 crc kubenswrapper[4786]: I1209 09:07:26.551702 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e3e17544-6307-4a46-b381-34744a99cbb5-etc-machine-id\") pod \"cinder-api-0\" (UID: \"e3e17544-6307-4a46-b381-34744a99cbb5\") " pod="openstack/cinder-api-0" Dec 09 09:07:26 crc kubenswrapper[4786]: I1209 09:07:26.552237 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/e3e17544-6307-4a46-b381-34744a99cbb5-logs\") pod \"cinder-api-0\" (UID: \"e3e17544-6307-4a46-b381-34744a99cbb5\") " pod="openstack/cinder-api-0" Dec 09 09:07:26 crc kubenswrapper[4786]: I1209 09:07:26.562046 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3e17544-6307-4a46-b381-34744a99cbb5-config-data\") pod \"cinder-api-0\" (UID: \"e3e17544-6307-4a46-b381-34744a99cbb5\") " pod="openstack/cinder-api-0" Dec 09 09:07:26 crc kubenswrapper[4786]: I1209 09:07:26.564898 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3e17544-6307-4a46-b381-34744a99cbb5-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"e3e17544-6307-4a46-b381-34744a99cbb5\") " pod="openstack/cinder-api-0" Dec 09 09:07:26 crc kubenswrapper[4786]: I1209 09:07:26.577392 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3e17544-6307-4a46-b381-34744a99cbb5-config-data-custom\") pod \"cinder-api-0\" (UID: \"e3e17544-6307-4a46-b381-34744a99cbb5\") " pod="openstack/cinder-api-0" Dec 09 09:07:26 crc kubenswrapper[4786]: I1209 09:07:26.596272 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3e17544-6307-4a46-b381-34744a99cbb5-scripts\") pod \"cinder-api-0\" (UID: \"e3e17544-6307-4a46-b381-34744a99cbb5\") " pod="openstack/cinder-api-0" Dec 09 09:07:26 crc kubenswrapper[4786]: I1209 09:07:26.613042 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wg82p\" (UniqueName: \"kubernetes.io/projected/e3e17544-6307-4a46-b381-34744a99cbb5-kube-api-access-wg82p\") pod \"cinder-api-0\" (UID: \"e3e17544-6307-4a46-b381-34744a99cbb5\") " pod="openstack/cinder-api-0" Dec 09 09:07:26 crc kubenswrapper[4786]: I1209 09:07:26.623293 4786 prober.go:107] 
"Probe failed" probeType="Readiness" pod="openstack/barbican-api-5785d674c4-7cbc6" podUID="b3b7375b-20bf-4c11-bbff-ae554685503a" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.169:9311/healthcheck\": read tcp 10.217.0.2:43586->10.217.0.169:9311: read: connection reset by peer" Dec 09 09:07:26 crc kubenswrapper[4786]: I1209 09:07:26.627215 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5785d674c4-7cbc6" podUID="b3b7375b-20bf-4c11-bbff-ae554685503a" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.169:9311/healthcheck\": read tcp 10.217.0.2:43602->10.217.0.169:9311: read: connection reset by peer" Dec 09 09:07:26 crc kubenswrapper[4786]: I1209 09:07:26.952851 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 09 09:07:27 crc kubenswrapper[4786]: I1209 09:07:27.159210 4786 generic.go:334] "Generic (PLEG): container finished" podID="b3b7375b-20bf-4c11-bbff-ae554685503a" containerID="ea41e04cd26025258ce2a8b265b03454e9027bb6dc29ecefe0ac7779d810df52" exitCode=0 Dec 09 09:07:27 crc kubenswrapper[4786]: I1209 09:07:27.159528 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5785d674c4-7cbc6" event={"ID":"b3b7375b-20bf-4c11-bbff-ae554685503a","Type":"ContainerDied","Data":"ea41e04cd26025258ce2a8b265b03454e9027bb6dc29ecefe0ac7779d810df52"} Dec 09 09:07:27 crc kubenswrapper[4786]: I1209 09:07:27.177537 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 09 09:07:27 crc kubenswrapper[4786]: I1209 09:07:27.372850 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"8d930edc-ed97-418e-a47a-60f38b734a50","Type":"ContainerStarted","Data":"72bf17d8a5e951a264fb49cd4989b4c17935f80254bfac85b2b6f5cfd404a42c"} Dec 09 09:07:27 crc kubenswrapper[4786]: I1209 09:07:27.485002 4786 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf5d89b49-4txgz"] Dec 09 09:07:27 crc kubenswrapper[4786]: I1209 09:07:27.781241 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5785d674c4-7cbc6" Dec 09 09:07:27 crc kubenswrapper[4786]: I1209 09:07:27.832535 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b3b7375b-20bf-4c11-bbff-ae554685503a-config-data-custom\") pod \"b3b7375b-20bf-4c11-bbff-ae554685503a\" (UID: \"b3b7375b-20bf-4c11-bbff-ae554685503a\") " Dec 09 09:07:27 crc kubenswrapper[4786]: I1209 09:07:27.832917 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3b7375b-20bf-4c11-bbff-ae554685503a-config-data\") pod \"b3b7375b-20bf-4c11-bbff-ae554685503a\" (UID: \"b3b7375b-20bf-4c11-bbff-ae554685503a\") " Dec 09 09:07:27 crc kubenswrapper[4786]: I1209 09:07:27.833025 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3b7375b-20bf-4c11-bbff-ae554685503a-combined-ca-bundle\") pod \"b3b7375b-20bf-4c11-bbff-ae554685503a\" (UID: \"b3b7375b-20bf-4c11-bbff-ae554685503a\") " Dec 09 09:07:27 crc kubenswrapper[4786]: I1209 09:07:27.833144 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfhd5\" (UniqueName: \"kubernetes.io/projected/b3b7375b-20bf-4c11-bbff-ae554685503a-kube-api-access-zfhd5\") pod \"b3b7375b-20bf-4c11-bbff-ae554685503a\" (UID: \"b3b7375b-20bf-4c11-bbff-ae554685503a\") " Dec 09 09:07:27 crc kubenswrapper[4786]: I1209 09:07:27.833199 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3b7375b-20bf-4c11-bbff-ae554685503a-logs\") pod \"b3b7375b-20bf-4c11-bbff-ae554685503a\" (UID: 
\"b3b7375b-20bf-4c11-bbff-ae554685503a\") " Dec 09 09:07:27 crc kubenswrapper[4786]: E1209 09:07:27.839794 4786 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb3b7375b_20bf_4c11_bbff_ae554685503a.slice/crio-conmon-ea41e04cd26025258ce2a8b265b03454e9027bb6dc29ecefe0ac7779d810df52.scope\": RecentStats: unable to find data in memory cache]" Dec 09 09:07:27 crc kubenswrapper[4786]: I1209 09:07:27.840650 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3b7375b-20bf-4c11-bbff-ae554685503a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b3b7375b-20bf-4c11-bbff-ae554685503a" (UID: "b3b7375b-20bf-4c11-bbff-ae554685503a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:07:27 crc kubenswrapper[4786]: I1209 09:07:27.848374 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3b7375b-20bf-4c11-bbff-ae554685503a-kube-api-access-zfhd5" (OuterVolumeSpecName: "kube-api-access-zfhd5") pod "b3b7375b-20bf-4c11-bbff-ae554685503a" (UID: "b3b7375b-20bf-4c11-bbff-ae554685503a"). InnerVolumeSpecName "kube-api-access-zfhd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:07:27 crc kubenswrapper[4786]: I1209 09:07:27.848647 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3b7375b-20bf-4c11-bbff-ae554685503a-logs" (OuterVolumeSpecName: "logs") pod "b3b7375b-20bf-4c11-bbff-ae554685503a" (UID: "b3b7375b-20bf-4c11-bbff-ae554685503a"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:07:27 crc kubenswrapper[4786]: I1209 09:07:27.920917 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 09 09:07:27 crc kubenswrapper[4786]: I1209 09:07:27.938311 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3b7375b-20bf-4c11-bbff-ae554685503a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b3b7375b-20bf-4c11-bbff-ae554685503a" (UID: "b3b7375b-20bf-4c11-bbff-ae554685503a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:07:27 crc kubenswrapper[4786]: I1209 09:07:27.940485 4786 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b3b7375b-20bf-4c11-bbff-ae554685503a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:27 crc kubenswrapper[4786]: I1209 09:07:27.940519 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfhd5\" (UniqueName: \"kubernetes.io/projected/b3b7375b-20bf-4c11-bbff-ae554685503a-kube-api-access-zfhd5\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:27 crc kubenswrapper[4786]: I1209 09:07:27.940536 4786 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b3b7375b-20bf-4c11-bbff-ae554685503a-logs\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:27 crc kubenswrapper[4786]: I1209 09:07:27.940551 4786 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b3b7375b-20bf-4c11-bbff-ae554685503a-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:27 crc kubenswrapper[4786]: I1209 09:07:27.952299 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b3b7375b-20bf-4c11-bbff-ae554685503a-config-data" (OuterVolumeSpecName: "config-data") pod 
"b3b7375b-20bf-4c11-bbff-ae554685503a" (UID: "b3b7375b-20bf-4c11-bbff-ae554685503a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:07:27 crc kubenswrapper[4786]: W1209 09:07:27.974612 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3e17544_6307_4a46_b381_34744a99cbb5.slice/crio-553fa0b970bfcc8bc1487a81b12c1fd7b1d902687c8e072d92e8743002e629ba WatchSource:0}: Error finding container 553fa0b970bfcc8bc1487a81b12c1fd7b1d902687c8e072d92e8743002e629ba: Status 404 returned error can't find the container with id 553fa0b970bfcc8bc1487a81b12c1fd7b1d902687c8e072d92e8743002e629ba Dec 09 09:07:28 crc kubenswrapper[4786]: I1209 09:07:28.022199 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lkwnh" Dec 09 09:07:28 crc kubenswrapper[4786]: I1209 09:07:28.043263 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b3b7375b-20bf-4c11-bbff-ae554685503a-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:28 crc kubenswrapper[4786]: I1209 09:07:28.105514 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lkwnh" Dec 09 09:07:28 crc kubenswrapper[4786]: I1209 09:07:28.223332 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b9b641af-aaf9-493f-b738-89b62abb3e95","Type":"ContainerStarted","Data":"32d68de9a99a370e44035f4235764d1f574fde522d667f61bbf3b97138ceb2ac"} Dec 09 09:07:28 crc kubenswrapper[4786]: I1209 09:07:28.225105 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e3e17544-6307-4a46-b381-34744a99cbb5","Type":"ContainerStarted","Data":"553fa0b970bfcc8bc1487a81b12c1fd7b1d902687c8e072d92e8743002e629ba"} Dec 09 09:07:28 crc kubenswrapper[4786]: 
I1209 09:07:28.234761 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5785d674c4-7cbc6" event={"ID":"b3b7375b-20bf-4c11-bbff-ae554685503a","Type":"ContainerDied","Data":"9ee333f2ffce2ff50fc51681f31a60e2d89a893e0e7a3b60ca1c38f8ad361dde"} Dec 09 09:07:28 crc kubenswrapper[4786]: I1209 09:07:28.234835 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5785d674c4-7cbc6" Dec 09 09:07:28 crc kubenswrapper[4786]: I1209 09:07:28.234873 4786 scope.go:117] "RemoveContainer" containerID="ea41e04cd26025258ce2a8b265b03454e9027bb6dc29ecefe0ac7779d810df52" Dec 09 09:07:28 crc kubenswrapper[4786]: I1209 09:07:28.237593 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf5d89b49-4txgz" event={"ID":"4dead0f7-489a-4677-afed-bcb93a525277","Type":"ContainerStarted","Data":"2f3866d76857c12f46997f4f5f6f6ce111eac90cf494e2e8fb4181e6189f02a0"} Dec 09 09:07:28 crc kubenswrapper[4786]: I1209 09:07:28.472507 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5785d674c4-7cbc6"] Dec 09 09:07:28 crc kubenswrapper[4786]: I1209 09:07:28.480827 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5785d674c4-7cbc6"] Dec 09 09:07:28 crc kubenswrapper[4786]: I1209 09:07:28.501770 4786 scope.go:117] "RemoveContainer" containerID="f12b028c7e5ee2390737bbd9ea80cc40a6d7ca231a0a4244df9f2f91c209887b" Dec 09 09:07:28 crc kubenswrapper[4786]: I1209 09:07:28.851635 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lkwnh"] Dec 09 09:07:29 crc kubenswrapper[4786]: I1209 09:07:29.226310 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3b7375b-20bf-4c11-bbff-ae554685503a" path="/var/lib/kubelet/pods/b3b7375b-20bf-4c11-bbff-ae554685503a/volumes" Dec 09 09:07:29 crc kubenswrapper[4786]: I1209 09:07:29.312689 4786 generic.go:334] "Generic (PLEG): container 
finished" podID="4dead0f7-489a-4677-afed-bcb93a525277" containerID="4bf3094ca8d377846c752bc687421ae11e58dd722d8518f1f53cd0e922efdcc1" exitCode=0 Dec 09 09:07:29 crc kubenswrapper[4786]: I1209 09:07:29.312787 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf5d89b49-4txgz" event={"ID":"4dead0f7-489a-4677-afed-bcb93a525277","Type":"ContainerDied","Data":"4bf3094ca8d377846c752bc687421ae11e58dd722d8518f1f53cd0e922efdcc1"} Dec 09 09:07:29 crc kubenswrapper[4786]: I1209 09:07:29.325396 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lkwnh" podUID="656e7256-97a8-4036-b7b5-62c66bf06129" containerName="registry-server" containerID="cri-o://ee190bcaee5b63121750529bb4b7d7c113572b7fd7d83479c45ee5897fd544a7" gracePeriod=2 Dec 09 09:07:30 crc kubenswrapper[4786]: I1209 09:07:30.124196 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 09 09:07:30 crc kubenswrapper[4786]: I1209 09:07:30.497548 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bf5d89b49-4txgz" Dec 09 09:07:30 crc kubenswrapper[4786]: I1209 09:07:30.511857 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b9b641af-aaf9-493f-b738-89b62abb3e95","Type":"ContainerStarted","Data":"d14557c51a504282fd02d7c000ab7967a88368bd9a1f363820ea87544ec58507"} Dec 09 09:07:30 crc kubenswrapper[4786]: I1209 09:07:30.536787 4786 generic.go:334] "Generic (PLEG): container finished" podID="656e7256-97a8-4036-b7b5-62c66bf06129" containerID="ee190bcaee5b63121750529bb4b7d7c113572b7fd7d83479c45ee5897fd544a7" exitCode=0 Dec 09 09:07:30 crc kubenswrapper[4786]: I1209 09:07:30.536837 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lkwnh" 
event={"ID":"656e7256-97a8-4036-b7b5-62c66bf06129","Type":"ContainerDied","Data":"ee190bcaee5b63121750529bb4b7d7c113572b7fd7d83479c45ee5897fd544a7"} Dec 09 09:07:30 crc kubenswrapper[4786]: I1209 09:07:30.555684 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bf5d89b49-4txgz" podStartSLOduration=5.555661166 podStartE2EDuration="5.555661166s" podCreationTimestamp="2025-12-09 09:07:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:07:30.554309882 +0000 UTC m=+1416.437931118" watchObservedRunningTime="2025-12-09 09:07:30.555661166 +0000 UTC m=+1416.439282392" Dec 09 09:07:30 crc kubenswrapper[4786]: I1209 09:07:30.808172 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lkwnh" Dec 09 09:07:30 crc kubenswrapper[4786]: I1209 09:07:30.915668 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4vhm\" (UniqueName: \"kubernetes.io/projected/656e7256-97a8-4036-b7b5-62c66bf06129-kube-api-access-t4vhm\") pod \"656e7256-97a8-4036-b7b5-62c66bf06129\" (UID: \"656e7256-97a8-4036-b7b5-62c66bf06129\") " Dec 09 09:07:30 crc kubenswrapper[4786]: I1209 09:07:30.915861 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/656e7256-97a8-4036-b7b5-62c66bf06129-utilities\") pod \"656e7256-97a8-4036-b7b5-62c66bf06129\" (UID: \"656e7256-97a8-4036-b7b5-62c66bf06129\") " Dec 09 09:07:30 crc kubenswrapper[4786]: I1209 09:07:30.916014 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/656e7256-97a8-4036-b7b5-62c66bf06129-catalog-content\") pod \"656e7256-97a8-4036-b7b5-62c66bf06129\" (UID: \"656e7256-97a8-4036-b7b5-62c66bf06129\") " Dec 09 09:07:30 
crc kubenswrapper[4786]: I1209 09:07:30.917561 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/656e7256-97a8-4036-b7b5-62c66bf06129-utilities" (OuterVolumeSpecName: "utilities") pod "656e7256-97a8-4036-b7b5-62c66bf06129" (UID: "656e7256-97a8-4036-b7b5-62c66bf06129"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:07:30 crc kubenswrapper[4786]: I1209 09:07:30.920977 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/656e7256-97a8-4036-b7b5-62c66bf06129-kube-api-access-t4vhm" (OuterVolumeSpecName: "kube-api-access-t4vhm") pod "656e7256-97a8-4036-b7b5-62c66bf06129" (UID: "656e7256-97a8-4036-b7b5-62c66bf06129"). InnerVolumeSpecName "kube-api-access-t4vhm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:07:31 crc kubenswrapper[4786]: I1209 09:07:31.018754 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4vhm\" (UniqueName: \"kubernetes.io/projected/656e7256-97a8-4036-b7b5-62c66bf06129-kube-api-access-t4vhm\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:31 crc kubenswrapper[4786]: I1209 09:07:31.018798 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/656e7256-97a8-4036-b7b5-62c66bf06129-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:31 crc kubenswrapper[4786]: I1209 09:07:31.073659 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/656e7256-97a8-4036-b7b5-62c66bf06129-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "656e7256-97a8-4036-b7b5-62c66bf06129" (UID: "656e7256-97a8-4036-b7b5-62c66bf06129"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:07:31 crc kubenswrapper[4786]: I1209 09:07:31.121285 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/656e7256-97a8-4036-b7b5-62c66bf06129-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:31 crc kubenswrapper[4786]: I1209 09:07:31.602041 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e3e17544-6307-4a46-b381-34744a99cbb5","Type":"ContainerStarted","Data":"52bbfbd410bb33df616342360075e577b9c18a627484ef75577a23a4347a7ebd"} Dec 09 09:07:31 crc kubenswrapper[4786]: I1209 09:07:31.605280 4786 generic.go:334] "Generic (PLEG): container finished" podID="4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c" containerID="078da1a366c6f949783c1769fdafd18365b94f3cca28caf9b0b6b300905306da" exitCode=0 Dec 09 09:07:31 crc kubenswrapper[4786]: I1209 09:07:31.605407 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6m994" event={"ID":"4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c","Type":"ContainerDied","Data":"078da1a366c6f949783c1769fdafd18365b94f3cca28caf9b0b6b300905306da"} Dec 09 09:07:31 crc kubenswrapper[4786]: I1209 09:07:31.625906 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lkwnh" event={"ID":"656e7256-97a8-4036-b7b5-62c66bf06129","Type":"ContainerDied","Data":"f3df0a9dee6f29339aacd6ebe0c819f5305e7c1cd9afaf1b8f239ed634fb9f84"} Dec 09 09:07:31 crc kubenswrapper[4786]: I1209 09:07:31.625990 4786 scope.go:117] "RemoveContainer" containerID="ee190bcaee5b63121750529bb4b7d7c113572b7fd7d83479c45ee5897fd544a7" Dec 09 09:07:31 crc kubenswrapper[4786]: I1209 09:07:31.626262 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lkwnh" Dec 09 09:07:31 crc kubenswrapper[4786]: I1209 09:07:31.649658 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf5d89b49-4txgz" event={"ID":"4dead0f7-489a-4677-afed-bcb93a525277","Type":"ContainerStarted","Data":"dd289809c662109ed9bfb3ab9eb04fadc3f98668cc99d8a8a4ece7482318a892"} Dec 09 09:07:31 crc kubenswrapper[4786]: I1209 09:07:31.702732 4786 scope.go:117] "RemoveContainer" containerID="3a5e59e31eeea971b14a496ed30c481cb2ed4ad45a7db32403d50b6a81fa628d" Dec 09 09:07:31 crc kubenswrapper[4786]: I1209 09:07:31.717509 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lkwnh"] Dec 09 09:07:31 crc kubenswrapper[4786]: I1209 09:07:31.747985 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lkwnh"] Dec 09 09:07:32 crc kubenswrapper[4786]: I1209 09:07:32.033133 4786 scope.go:117] "RemoveContainer" containerID="0b4de3677cf0c7237a2e571c18a4a548e87acc9675703432ec92e12110e9b3eb" Dec 09 09:07:33 crc kubenswrapper[4786]: I1209 09:07:33.011133 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b9b641af-aaf9-493f-b738-89b62abb3e95","Type":"ContainerStarted","Data":"328475f5d527b288ab9ea75bdeae456b4281611e024661ac1157e83f377af00e"} Dec 09 09:07:33 crc kubenswrapper[4786]: I1209 09:07:33.034822 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="e3e17544-6307-4a46-b381-34744a99cbb5" containerName="cinder-api-log" containerID="cri-o://52bbfbd410bb33df616342360075e577b9c18a627484ef75577a23a4347a7ebd" gracePeriod=30 Dec 09 09:07:33 crc kubenswrapper[4786]: I1209 09:07:33.035361 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"e3e17544-6307-4a46-b381-34744a99cbb5","Type":"ContainerStarted","Data":"3e67b9c8da3783a6be6596bf5465e26fbef0a2c25e297dee79a95dade6bb6811"} Dec 09 09:07:33 crc kubenswrapper[4786]: I1209 09:07:33.035526 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="e3e17544-6307-4a46-b381-34744a99cbb5" containerName="cinder-api" containerID="cri-o://3e67b9c8da3783a6be6596bf5465e26fbef0a2c25e297dee79a95dade6bb6811" gracePeriod=30 Dec 09 09:07:33 crc kubenswrapper[4786]: I1209 09:07:33.037497 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 09 09:07:33 crc kubenswrapper[4786]: I1209 09:07:33.075798 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=7.396934577 podStartE2EDuration="8.075765748s" podCreationTimestamp="2025-12-09 09:07:25 +0000 UTC" firstStartedPulling="2025-12-09 09:07:27.320464527 +0000 UTC m=+1413.204085753" lastFinishedPulling="2025-12-09 09:07:27.999295698 +0000 UTC m=+1413.882916924" observedRunningTime="2025-12-09 09:07:33.060017255 +0000 UTC m=+1418.943638481" watchObservedRunningTime="2025-12-09 09:07:33.075765748 +0000 UTC m=+1418.959386974" Dec 09 09:07:33 crc kubenswrapper[4786]: I1209 09:07:33.091465 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=7.091416319 podStartE2EDuration="7.091416319s" podCreationTimestamp="2025-12-09 09:07:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:07:33.086089277 +0000 UTC m=+1418.969710503" watchObservedRunningTime="2025-12-09 09:07:33.091416319 +0000 UTC m=+1418.975037545" Dec 09 09:07:33 crc kubenswrapper[4786]: I1209 09:07:33.131965 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/placement-7ccfbc9bd6-76jl9" Dec 09 09:07:33 crc kubenswrapper[4786]: I1209 09:07:33.171488 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-7ccfbc9bd6-76jl9" Dec 09 09:07:33 crc kubenswrapper[4786]: I1209 09:07:33.230216 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="656e7256-97a8-4036-b7b5-62c66bf06129" path="/var/lib/kubelet/pods/656e7256-97a8-4036-b7b5-62c66bf06129/volumes" Dec 09 09:07:33 crc kubenswrapper[4786]: I1209 09:07:33.614751 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-55bfbb9895-lchg7"] Dec 09 09:07:33 crc kubenswrapper[4786]: E1209 09:07:33.616118 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="656e7256-97a8-4036-b7b5-62c66bf06129" containerName="extract-content" Dec 09 09:07:33 crc kubenswrapper[4786]: I1209 09:07:33.616139 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="656e7256-97a8-4036-b7b5-62c66bf06129" containerName="extract-content" Dec 09 09:07:33 crc kubenswrapper[4786]: E1209 09:07:33.616170 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="656e7256-97a8-4036-b7b5-62c66bf06129" containerName="registry-server" Dec 09 09:07:33 crc kubenswrapper[4786]: I1209 09:07:33.616177 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="656e7256-97a8-4036-b7b5-62c66bf06129" containerName="registry-server" Dec 09 09:07:33 crc kubenswrapper[4786]: E1209 09:07:33.616207 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="656e7256-97a8-4036-b7b5-62c66bf06129" containerName="extract-utilities" Dec 09 09:07:33 crc kubenswrapper[4786]: I1209 09:07:33.616214 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="656e7256-97a8-4036-b7b5-62c66bf06129" containerName="extract-utilities" Dec 09 09:07:33 crc kubenswrapper[4786]: E1209 09:07:33.616250 4786 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b3b7375b-20bf-4c11-bbff-ae554685503a" containerName="barbican-api" Dec 09 09:07:33 crc kubenswrapper[4786]: I1209 09:07:33.616257 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3b7375b-20bf-4c11-bbff-ae554685503a" containerName="barbican-api" Dec 09 09:07:33 crc kubenswrapper[4786]: E1209 09:07:33.616295 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3b7375b-20bf-4c11-bbff-ae554685503a" containerName="barbican-api-log" Dec 09 09:07:33 crc kubenswrapper[4786]: I1209 09:07:33.616301 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3b7375b-20bf-4c11-bbff-ae554685503a" containerName="barbican-api-log" Dec 09 09:07:33 crc kubenswrapper[4786]: I1209 09:07:33.617902 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3b7375b-20bf-4c11-bbff-ae554685503a" containerName="barbican-api-log" Dec 09 09:07:33 crc kubenswrapper[4786]: I1209 09:07:33.617933 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="656e7256-97a8-4036-b7b5-62c66bf06129" containerName="registry-server" Dec 09 09:07:33 crc kubenswrapper[4786]: I1209 09:07:33.617967 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3b7375b-20bf-4c11-bbff-ae554685503a" containerName="barbican-api" Dec 09 09:07:33 crc kubenswrapper[4786]: I1209 09:07:33.640797 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-55bfbb9895-lchg7" Dec 09 09:07:33 crc kubenswrapper[4786]: I1209 09:07:33.642053 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 09 09:07:33 crc kubenswrapper[4786]: I1209 09:07:33.643925 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 09 09:07:33 crc kubenswrapper[4786]: I1209 09:07:33.646182 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Dec 09 09:07:33 crc kubenswrapper[4786]: I1209 09:07:33.648013 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Dec 09 09:07:33 crc kubenswrapper[4786]: I1209 09:07:33.665935 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-55bfbb9895-lchg7"] Dec 09 09:07:33 crc kubenswrapper[4786]: I1209 09:07:33.751995 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/025e29a5-c1a7-46fe-a47d-4b3248fd6320-run-httpd\") pod \"swift-proxy-55bfbb9895-lchg7\" (UID: \"025e29a5-c1a7-46fe-a47d-4b3248fd6320\") " pod="openstack/swift-proxy-55bfbb9895-lchg7" Dec 09 09:07:33 crc kubenswrapper[4786]: I1209 09:07:33.752103 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/025e29a5-c1a7-46fe-a47d-4b3248fd6320-etc-swift\") pod \"swift-proxy-55bfbb9895-lchg7\" (UID: \"025e29a5-c1a7-46fe-a47d-4b3248fd6320\") " pod="openstack/swift-proxy-55bfbb9895-lchg7" Dec 09 09:07:33 crc kubenswrapper[4786]: I1209 09:07:33.752129 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btd8l\" (UniqueName: \"kubernetes.io/projected/025e29a5-c1a7-46fe-a47d-4b3248fd6320-kube-api-access-btd8l\") pod 
\"swift-proxy-55bfbb9895-lchg7\" (UID: \"025e29a5-c1a7-46fe-a47d-4b3248fd6320\") " pod="openstack/swift-proxy-55bfbb9895-lchg7" Dec 09 09:07:33 crc kubenswrapper[4786]: I1209 09:07:33.752187 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/025e29a5-c1a7-46fe-a47d-4b3248fd6320-log-httpd\") pod \"swift-proxy-55bfbb9895-lchg7\" (UID: \"025e29a5-c1a7-46fe-a47d-4b3248fd6320\") " pod="openstack/swift-proxy-55bfbb9895-lchg7" Dec 09 09:07:33 crc kubenswrapper[4786]: I1209 09:07:33.752230 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/025e29a5-c1a7-46fe-a47d-4b3248fd6320-public-tls-certs\") pod \"swift-proxy-55bfbb9895-lchg7\" (UID: \"025e29a5-c1a7-46fe-a47d-4b3248fd6320\") " pod="openstack/swift-proxy-55bfbb9895-lchg7" Dec 09 09:07:33 crc kubenswrapper[4786]: I1209 09:07:33.752275 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/025e29a5-c1a7-46fe-a47d-4b3248fd6320-internal-tls-certs\") pod \"swift-proxy-55bfbb9895-lchg7\" (UID: \"025e29a5-c1a7-46fe-a47d-4b3248fd6320\") " pod="openstack/swift-proxy-55bfbb9895-lchg7" Dec 09 09:07:33 crc kubenswrapper[4786]: I1209 09:07:33.752294 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/025e29a5-c1a7-46fe-a47d-4b3248fd6320-combined-ca-bundle\") pod \"swift-proxy-55bfbb9895-lchg7\" (UID: \"025e29a5-c1a7-46fe-a47d-4b3248fd6320\") " pod="openstack/swift-proxy-55bfbb9895-lchg7" Dec 09 09:07:33 crc kubenswrapper[4786]: I1209 09:07:33.752336 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/025e29a5-c1a7-46fe-a47d-4b3248fd6320-config-data\") pod \"swift-proxy-55bfbb9895-lchg7\" (UID: \"025e29a5-c1a7-46fe-a47d-4b3248fd6320\") " pod="openstack/swift-proxy-55bfbb9895-lchg7" Dec 09 09:07:33 crc kubenswrapper[4786]: I1209 09:07:33.854569 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/025e29a5-c1a7-46fe-a47d-4b3248fd6320-combined-ca-bundle\") pod \"swift-proxy-55bfbb9895-lchg7\" (UID: \"025e29a5-c1a7-46fe-a47d-4b3248fd6320\") " pod="openstack/swift-proxy-55bfbb9895-lchg7" Dec 09 09:07:33 crc kubenswrapper[4786]: I1209 09:07:33.854660 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/025e29a5-c1a7-46fe-a47d-4b3248fd6320-config-data\") pod \"swift-proxy-55bfbb9895-lchg7\" (UID: \"025e29a5-c1a7-46fe-a47d-4b3248fd6320\") " pod="openstack/swift-proxy-55bfbb9895-lchg7" Dec 09 09:07:33 crc kubenswrapper[4786]: I1209 09:07:33.854745 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/025e29a5-c1a7-46fe-a47d-4b3248fd6320-run-httpd\") pod \"swift-proxy-55bfbb9895-lchg7\" (UID: \"025e29a5-c1a7-46fe-a47d-4b3248fd6320\") " pod="openstack/swift-proxy-55bfbb9895-lchg7" Dec 09 09:07:33 crc kubenswrapper[4786]: I1209 09:07:33.854776 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/025e29a5-c1a7-46fe-a47d-4b3248fd6320-etc-swift\") pod \"swift-proxy-55bfbb9895-lchg7\" (UID: \"025e29a5-c1a7-46fe-a47d-4b3248fd6320\") " pod="openstack/swift-proxy-55bfbb9895-lchg7" Dec 09 09:07:33 crc kubenswrapper[4786]: I1209 09:07:33.854797 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btd8l\" (UniqueName: 
\"kubernetes.io/projected/025e29a5-c1a7-46fe-a47d-4b3248fd6320-kube-api-access-btd8l\") pod \"swift-proxy-55bfbb9895-lchg7\" (UID: \"025e29a5-c1a7-46fe-a47d-4b3248fd6320\") " pod="openstack/swift-proxy-55bfbb9895-lchg7" Dec 09 09:07:33 crc kubenswrapper[4786]: I1209 09:07:33.854830 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/025e29a5-c1a7-46fe-a47d-4b3248fd6320-log-httpd\") pod \"swift-proxy-55bfbb9895-lchg7\" (UID: \"025e29a5-c1a7-46fe-a47d-4b3248fd6320\") " pod="openstack/swift-proxy-55bfbb9895-lchg7" Dec 09 09:07:33 crc kubenswrapper[4786]: I1209 09:07:33.854877 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/025e29a5-c1a7-46fe-a47d-4b3248fd6320-public-tls-certs\") pod \"swift-proxy-55bfbb9895-lchg7\" (UID: \"025e29a5-c1a7-46fe-a47d-4b3248fd6320\") " pod="openstack/swift-proxy-55bfbb9895-lchg7" Dec 09 09:07:33 crc kubenswrapper[4786]: I1209 09:07:33.854942 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/025e29a5-c1a7-46fe-a47d-4b3248fd6320-internal-tls-certs\") pod \"swift-proxy-55bfbb9895-lchg7\" (UID: \"025e29a5-c1a7-46fe-a47d-4b3248fd6320\") " pod="openstack/swift-proxy-55bfbb9895-lchg7" Dec 09 09:07:33 crc kubenswrapper[4786]: I1209 09:07:33.856237 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/025e29a5-c1a7-46fe-a47d-4b3248fd6320-log-httpd\") pod \"swift-proxy-55bfbb9895-lchg7\" (UID: \"025e29a5-c1a7-46fe-a47d-4b3248fd6320\") " pod="openstack/swift-proxy-55bfbb9895-lchg7" Dec 09 09:07:33 crc kubenswrapper[4786]: I1209 09:07:33.860706 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/025e29a5-c1a7-46fe-a47d-4b3248fd6320-run-httpd\") pod 
\"swift-proxy-55bfbb9895-lchg7\" (UID: \"025e29a5-c1a7-46fe-a47d-4b3248fd6320\") " pod="openstack/swift-proxy-55bfbb9895-lchg7" Dec 09 09:07:33 crc kubenswrapper[4786]: I1209 09:07:33.871701 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/025e29a5-c1a7-46fe-a47d-4b3248fd6320-combined-ca-bundle\") pod \"swift-proxy-55bfbb9895-lchg7\" (UID: \"025e29a5-c1a7-46fe-a47d-4b3248fd6320\") " pod="openstack/swift-proxy-55bfbb9895-lchg7" Dec 09 09:07:33 crc kubenswrapper[4786]: I1209 09:07:33.872630 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/025e29a5-c1a7-46fe-a47d-4b3248fd6320-config-data\") pod \"swift-proxy-55bfbb9895-lchg7\" (UID: \"025e29a5-c1a7-46fe-a47d-4b3248fd6320\") " pod="openstack/swift-proxy-55bfbb9895-lchg7" Dec 09 09:07:33 crc kubenswrapper[4786]: I1209 09:07:33.874111 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/025e29a5-c1a7-46fe-a47d-4b3248fd6320-internal-tls-certs\") pod \"swift-proxy-55bfbb9895-lchg7\" (UID: \"025e29a5-c1a7-46fe-a47d-4b3248fd6320\") " pod="openstack/swift-proxy-55bfbb9895-lchg7" Dec 09 09:07:33 crc kubenswrapper[4786]: I1209 09:07:33.877818 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/025e29a5-c1a7-46fe-a47d-4b3248fd6320-public-tls-certs\") pod \"swift-proxy-55bfbb9895-lchg7\" (UID: \"025e29a5-c1a7-46fe-a47d-4b3248fd6320\") " pod="openstack/swift-proxy-55bfbb9895-lchg7" Dec 09 09:07:33 crc kubenswrapper[4786]: I1209 09:07:33.884117 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btd8l\" (UniqueName: \"kubernetes.io/projected/025e29a5-c1a7-46fe-a47d-4b3248fd6320-kube-api-access-btd8l\") pod \"swift-proxy-55bfbb9895-lchg7\" (UID: 
\"025e29a5-c1a7-46fe-a47d-4b3248fd6320\") " pod="openstack/swift-proxy-55bfbb9895-lchg7" Dec 09 09:07:33 crc kubenswrapper[4786]: I1209 09:07:33.889276 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/025e29a5-c1a7-46fe-a47d-4b3248fd6320-etc-swift\") pod \"swift-proxy-55bfbb9895-lchg7\" (UID: \"025e29a5-c1a7-46fe-a47d-4b3248fd6320\") " pod="openstack/swift-proxy-55bfbb9895-lchg7" Dec 09 09:07:34 crc kubenswrapper[4786]: I1209 09:07:34.002290 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-55bfbb9895-lchg7" Dec 09 09:07:34 crc kubenswrapper[4786]: I1209 09:07:34.048211 4786 generic.go:334] "Generic (PLEG): container finished" podID="e3e17544-6307-4a46-b381-34744a99cbb5" containerID="3e67b9c8da3783a6be6596bf5465e26fbef0a2c25e297dee79a95dade6bb6811" exitCode=0 Dec 09 09:07:34 crc kubenswrapper[4786]: I1209 09:07:34.049272 4786 generic.go:334] "Generic (PLEG): container finished" podID="e3e17544-6307-4a46-b381-34744a99cbb5" containerID="52bbfbd410bb33df616342360075e577b9c18a627484ef75577a23a4347a7ebd" exitCode=143 Dec 09 09:07:34 crc kubenswrapper[4786]: I1209 09:07:34.048334 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e3e17544-6307-4a46-b381-34744a99cbb5","Type":"ContainerDied","Data":"3e67b9c8da3783a6be6596bf5465e26fbef0a2c25e297dee79a95dade6bb6811"} Dec 09 09:07:34 crc kubenswrapper[4786]: I1209 09:07:34.049524 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e3e17544-6307-4a46-b381-34744a99cbb5","Type":"ContainerDied","Data":"52bbfbd410bb33df616342360075e577b9c18a627484ef75577a23a4347a7ebd"} Dec 09 09:07:34 crc kubenswrapper[4786]: I1209 09:07:34.601765 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 09:07:34 crc kubenswrapper[4786]: I1209 09:07:34.602465 4786 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ed942767-e79e-40d4-ab0e-b47aff280e56" containerName="ceilometer-central-agent" containerID="cri-o://6e9b40d63696eb904a5ceffef24023ccf24c79b7731741fcb00d6eab5d09724d" gracePeriod=30 Dec 09 09:07:34 crc kubenswrapper[4786]: I1209 09:07:34.602570 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ed942767-e79e-40d4-ab0e-b47aff280e56" containerName="proxy-httpd" containerID="cri-o://df8141895c8fa8895c7c7b8f42c713557a3595cc3050fd8795442d51ab285ed7" gracePeriod=30 Dec 09 09:07:34 crc kubenswrapper[4786]: I1209 09:07:34.602691 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ed942767-e79e-40d4-ab0e-b47aff280e56" containerName="sg-core" containerID="cri-o://b700ef67f4e393732db218ca53aeec390ac5e4b939658ca23c18f5e0c0f5477b" gracePeriod=30 Dec 09 09:07:34 crc kubenswrapper[4786]: I1209 09:07:34.602656 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ed942767-e79e-40d4-ab0e-b47aff280e56" containerName="ceilometer-notification-agent" containerID="cri-o://f9b3db817f9a6f0a23a17815b42d0b650b8fb72c713dfcd4b59888f0fb7c08c7" gracePeriod=30 Dec 09 09:07:35 crc kubenswrapper[4786]: I1209 09:07:35.067514 4786 generic.go:334] "Generic (PLEG): container finished" podID="ed942767-e79e-40d4-ab0e-b47aff280e56" containerID="df8141895c8fa8895c7c7b8f42c713557a3595cc3050fd8795442d51ab285ed7" exitCode=0 Dec 09 09:07:35 crc kubenswrapper[4786]: I1209 09:07:35.067954 4786 generic.go:334] "Generic (PLEG): container finished" podID="ed942767-e79e-40d4-ab0e-b47aff280e56" containerID="b700ef67f4e393732db218ca53aeec390ac5e4b939658ca23c18f5e0c0f5477b" exitCode=2 Dec 09 09:07:35 crc kubenswrapper[4786]: I1209 09:07:35.067692 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"ed942767-e79e-40d4-ab0e-b47aff280e56","Type":"ContainerDied","Data":"df8141895c8fa8895c7c7b8f42c713557a3595cc3050fd8795442d51ab285ed7"} Dec 09 09:07:35 crc kubenswrapper[4786]: I1209 09:07:35.068007 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed942767-e79e-40d4-ab0e-b47aff280e56","Type":"ContainerDied","Data":"b700ef67f4e393732db218ca53aeec390ac5e4b939658ca23c18f5e0c0f5477b"} Dec 09 09:07:36 crc kubenswrapper[4786]: I1209 09:07:36.029570 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 09 09:07:36 crc kubenswrapper[4786]: I1209 09:07:36.084163 4786 generic.go:334] "Generic (PLEG): container finished" podID="ed942767-e79e-40d4-ab0e-b47aff280e56" containerID="6e9b40d63696eb904a5ceffef24023ccf24c79b7731741fcb00d6eab5d09724d" exitCode=0 Dec 09 09:07:36 crc kubenswrapper[4786]: I1209 09:07:36.084236 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed942767-e79e-40d4-ab0e-b47aff280e56","Type":"ContainerDied","Data":"6e9b40d63696eb904a5ceffef24023ccf24c79b7731741fcb00d6eab5d09724d"} Dec 09 09:07:36 crc kubenswrapper[4786]: I1209 09:07:36.140692 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5bf5d89b49-4txgz" Dec 09 09:07:36 crc kubenswrapper[4786]: I1209 09:07:36.155373 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Dec 09 09:07:36 crc kubenswrapper[4786]: I1209 09:07:36.216767 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Dec 09 09:07:36 crc kubenswrapper[4786]: I1209 09:07:36.218416 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58674975df-v9bj8"] Dec 09 09:07:36 crc kubenswrapper[4786]: I1209 09:07:36.219194 4786 kuberuntime_container.go:808] "Killing container with 
a grace period" pod="openstack/dnsmasq-dns-58674975df-v9bj8" podUID="e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3" containerName="dnsmasq-dns" containerID="cri-o://7b6e5a18d6b802ca8e30f1648c5887a6844c79e83461fcce814415a98636eb66" gracePeriod=10 Dec 09 09:07:36 crc kubenswrapper[4786]: I1209 09:07:36.229760 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 09 09:07:36 crc kubenswrapper[4786]: I1209 09:07:36.328483 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 09 09:07:37 crc kubenswrapper[4786]: I1209 09:07:37.121259 4786 generic.go:334] "Generic (PLEG): container finished" podID="e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3" containerID="7b6e5a18d6b802ca8e30f1648c5887a6844c79e83461fcce814415a98636eb66" exitCode=0 Dec 09 09:07:37 crc kubenswrapper[4786]: I1209 09:07:37.121340 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58674975df-v9bj8" event={"ID":"e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3","Type":"ContainerDied","Data":"7b6e5a18d6b802ca8e30f1648c5887a6844c79e83461fcce814415a98636eb66"} Dec 09 09:07:37 crc kubenswrapper[4786]: I1209 09:07:37.123537 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="b9b641af-aaf9-493f-b738-89b62abb3e95" containerName="probe" containerID="cri-o://328475f5d527b288ab9ea75bdeae456b4281611e024661ac1157e83f377af00e" gracePeriod=30 Dec 09 09:07:37 crc kubenswrapper[4786]: I1209 09:07:37.122418 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="b9b641af-aaf9-493f-b738-89b62abb3e95" containerName="cinder-scheduler" containerID="cri-o://d14557c51a504282fd02d7c000ab7967a88368bd9a1f363820ea87544ec58507" gracePeriod=30 Dec 09 09:07:37 crc kubenswrapper[4786]: I1209 09:07:37.124174 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/watcher-decision-engine-0" Dec 09 09:07:37 crc kubenswrapper[4786]: I1209 09:07:37.168999 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Dec 09 09:07:39 crc kubenswrapper[4786]: I1209 09:07:39.164662 4786 generic.go:334] "Generic (PLEG): container finished" podID="ed942767-e79e-40d4-ab0e-b47aff280e56" containerID="f9b3db817f9a6f0a23a17815b42d0b650b8fb72c713dfcd4b59888f0fb7c08c7" exitCode=0 Dec 09 09:07:39 crc kubenswrapper[4786]: I1209 09:07:39.164725 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed942767-e79e-40d4-ab0e-b47aff280e56","Type":"ContainerDied","Data":"f9b3db817f9a6f0a23a17815b42d0b650b8fb72c713dfcd4b59888f0fb7c08c7"} Dec 09 09:07:39 crc kubenswrapper[4786]: I1209 09:07:39.169610 4786 generic.go:334] "Generic (PLEG): container finished" podID="b9b641af-aaf9-493f-b738-89b62abb3e95" containerID="328475f5d527b288ab9ea75bdeae456b4281611e024661ac1157e83f377af00e" exitCode=0 Dec 09 09:07:39 crc kubenswrapper[4786]: I1209 09:07:39.169646 4786 generic.go:334] "Generic (PLEG): container finished" podID="b9b641af-aaf9-493f-b738-89b62abb3e95" containerID="d14557c51a504282fd02d7c000ab7967a88368bd9a1f363820ea87544ec58507" exitCode=0 Dec 09 09:07:39 crc kubenswrapper[4786]: I1209 09:07:39.169932 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b9b641af-aaf9-493f-b738-89b62abb3e95","Type":"ContainerDied","Data":"328475f5d527b288ab9ea75bdeae456b4281611e024661ac1157e83f377af00e"} Dec 09 09:07:39 crc kubenswrapper[4786]: I1209 09:07:39.170032 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b9b641af-aaf9-493f-b738-89b62abb3e95","Type":"ContainerDied","Data":"d14557c51a504282fd02d7c000ab7967a88368bd9a1f363820ea87544ec58507"} Dec 09 09:07:39 crc kubenswrapper[4786]: I1209 09:07:39.582735 4786 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/nova-api-db-create-bvc2g"] Dec 09 09:07:39 crc kubenswrapper[4786]: I1209 09:07:39.584138 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-bvc2g" Dec 09 09:07:39 crc kubenswrapper[4786]: I1209 09:07:39.621579 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-bvc2g"] Dec 09 09:07:39 crc kubenswrapper[4786]: I1209 09:07:39.724990 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-mxxjn"] Dec 09 09:07:39 crc kubenswrapper[4786]: I1209 09:07:39.730338 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-mxxjn" Dec 09 09:07:39 crc kubenswrapper[4786]: I1209 09:07:39.778061 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stszk\" (UniqueName: \"kubernetes.io/projected/6046a22f-fd23-407b-a9ae-0d02d4b41170-kube-api-access-stszk\") pod \"nova-api-db-create-bvc2g\" (UID: \"6046a22f-fd23-407b-a9ae-0d02d4b41170\") " pod="openstack/nova-api-db-create-bvc2g" Dec 09 09:07:39 crc kubenswrapper[4786]: I1209 09:07:39.782878 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-mxxjn"] Dec 09 09:07:39 crc kubenswrapper[4786]: I1209 09:07:39.820358 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-mzjbq"] Dec 09 09:07:39 crc kubenswrapper[4786]: I1209 09:07:39.826187 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-mzjbq" Dec 09 09:07:39 crc kubenswrapper[4786]: I1209 09:07:39.837179 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-mzjbq"] Dec 09 09:07:39 crc kubenswrapper[4786]: I1209 09:07:39.880395 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stszk\" (UniqueName: \"kubernetes.io/projected/6046a22f-fd23-407b-a9ae-0d02d4b41170-kube-api-access-stszk\") pod \"nova-api-db-create-bvc2g\" (UID: \"6046a22f-fd23-407b-a9ae-0d02d4b41170\") " pod="openstack/nova-api-db-create-bvc2g" Dec 09 09:07:39 crc kubenswrapper[4786]: I1209 09:07:39.880501 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcxkf\" (UniqueName: \"kubernetes.io/projected/87bf6602-f74b-49e8-874d-1ea9bbf6ec7b-kube-api-access-xcxkf\") pod \"nova-cell0-db-create-mxxjn\" (UID: \"87bf6602-f74b-49e8-874d-1ea9bbf6ec7b\") " pod="openstack/nova-cell0-db-create-mxxjn" Dec 09 09:07:39 crc kubenswrapper[4786]: I1209 09:07:39.902367 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stszk\" (UniqueName: \"kubernetes.io/projected/6046a22f-fd23-407b-a9ae-0d02d4b41170-kube-api-access-stszk\") pod \"nova-api-db-create-bvc2g\" (UID: \"6046a22f-fd23-407b-a9ae-0d02d4b41170\") " pod="openstack/nova-api-db-create-bvc2g" Dec 09 09:07:39 crc kubenswrapper[4786]: I1209 09:07:39.910374 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-bvc2g" Dec 09 09:07:39 crc kubenswrapper[4786]: I1209 09:07:39.949255 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-58674975df-v9bj8" podUID="e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.168:5353: connect: connection refused" Dec 09 09:07:39 crc kubenswrapper[4786]: I1209 09:07:39.982500 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcxkf\" (UniqueName: \"kubernetes.io/projected/87bf6602-f74b-49e8-874d-1ea9bbf6ec7b-kube-api-access-xcxkf\") pod \"nova-cell0-db-create-mxxjn\" (UID: \"87bf6602-f74b-49e8-874d-1ea9bbf6ec7b\") " pod="openstack/nova-cell0-db-create-mxxjn" Dec 09 09:07:39 crc kubenswrapper[4786]: I1209 09:07:39.982615 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wswgn\" (UniqueName: \"kubernetes.io/projected/f678c744-2a47-4f18-8e01-6f438a6e46e5-kube-api-access-wswgn\") pod \"nova-cell1-db-create-mzjbq\" (UID: \"f678c744-2a47-4f18-8e01-6f438a6e46e5\") " pod="openstack/nova-cell1-db-create-mzjbq" Dec 09 09:07:40 crc kubenswrapper[4786]: I1209 09:07:40.002939 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcxkf\" (UniqueName: \"kubernetes.io/projected/87bf6602-f74b-49e8-874d-1ea9bbf6ec7b-kube-api-access-xcxkf\") pod \"nova-cell0-db-create-mxxjn\" (UID: \"87bf6602-f74b-49e8-874d-1ea9bbf6ec7b\") " pod="openstack/nova-cell0-db-create-mxxjn" Dec 09 09:07:40 crc kubenswrapper[4786]: I1209 09:07:40.066620 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-mxxjn" Dec 09 09:07:40 crc kubenswrapper[4786]: I1209 09:07:40.087723 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wswgn\" (UniqueName: \"kubernetes.io/projected/f678c744-2a47-4f18-8e01-6f438a6e46e5-kube-api-access-wswgn\") pod \"nova-cell1-db-create-mzjbq\" (UID: \"f678c744-2a47-4f18-8e01-6f438a6e46e5\") " pod="openstack/nova-cell1-db-create-mzjbq" Dec 09 09:07:40 crc kubenswrapper[4786]: I1209 09:07:40.120466 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wswgn\" (UniqueName: \"kubernetes.io/projected/f678c744-2a47-4f18-8e01-6f438a6e46e5-kube-api-access-wswgn\") pod \"nova-cell1-db-create-mzjbq\" (UID: \"f678c744-2a47-4f18-8e01-6f438a6e46e5\") " pod="openstack/nova-cell1-db-create-mzjbq" Dec 09 09:07:40 crc kubenswrapper[4786]: I1209 09:07:40.153026 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-mzjbq" Dec 09 09:07:44 crc kubenswrapper[4786]: I1209 09:07:44.678399 4786 generic.go:334] "Generic (PLEG): container finished" podID="1fe0b12b-06a5-45ae-8a51-073fb093cd54" containerID="f9276b12c55a2e685ffdba9d25335ebeecbd4fe0e9092bd2d5e27acc02fe6a70" exitCode=0 Dec 09 09:07:44 crc kubenswrapper[4786]: I1209 09:07:44.678969 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-hzbbl" event={"ID":"1fe0b12b-06a5-45ae-8a51-073fb093cd54","Type":"ContainerDied","Data":"f9276b12c55a2e685ffdba9d25335ebeecbd4fe0e9092bd2d5e27acc02fe6a70"} Dec 09 09:07:44 crc kubenswrapper[4786]: I1209 09:07:44.949216 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-58674975df-v9bj8" podUID="e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.168:5353: connect: connection refused" Dec 09 09:07:45 crc kubenswrapper[4786]: E1209 09:07:45.389957 4786 
log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.200:5001/podified-master-centos10/openstack-openstackclient:watcher_latest" Dec 09 09:07:45 crc kubenswrapper[4786]: E1209 09:07:45.390027 4786 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.200:5001/podified-master-centos10/openstack-openstackclient:watcher_latest" Dec 09 09:07:45 crc kubenswrapper[4786]: E1209 09:07:45.390176 4786 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstackclient,Image:38.102.83.200:5001/podified-master-centos10/openstack-openstackclient:watcher_latest,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n677h5b5hcbh87h674h57ch5f5h8fh7dhf8h588h89h5b7hbfh75h65fh54fh686h686h5bch5c8h5f7hfch567h95h58ch646h655h54dh645hf6h5d7q,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_CA_CERT,Value:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combine
d-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qdrdj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(2423b332-8b9b-4a26-996b-582194eca3b7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 09 09:07:45 crc kubenswrapper[4786]: E1209 09:07:45.391445 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="2423b332-8b9b-4a26-996b-582194eca3b7" Dec 09 09:07:45 crc kubenswrapper[4786]: I1209 09:07:45.857006 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"e3e17544-6307-4a46-b381-34744a99cbb5","Type":"ContainerDied","Data":"553fa0b970bfcc8bc1487a81b12c1fd7b1d902687c8e072d92e8743002e629ba"} Dec 09 09:07:45 crc kubenswrapper[4786]: I1209 09:07:45.857632 4786 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="553fa0b970bfcc8bc1487a81b12c1fd7b1d902687c8e072d92e8743002e629ba" Dec 09 09:07:45 crc kubenswrapper[4786]: I1209 09:07:45.865615 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-6m994" event={"ID":"4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c","Type":"ContainerDied","Data":"11c44fbbf3908d028a4f5bd709ccf2bea2fd2b4b735d272d4316873dcce49074"} Dec 09 09:07:45 crc kubenswrapper[4786]: I1209 09:07:45.865711 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11c44fbbf3908d028a4f5bd709ccf2bea2fd2b4b735d272d4316873dcce49074" Dec 09 09:07:45 crc kubenswrapper[4786]: E1209 09:07:45.879291 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.200:5001/podified-master-centos10/openstack-openstackclient:watcher_latest\\\"\"" pod="openstack/openstackclient" podUID="2423b332-8b9b-4a26-996b-582194eca3b7" Dec 09 09:07:45 crc kubenswrapper[4786]: I1209 09:07:45.942320 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-6m994" Dec 09 09:07:45 crc kubenswrapper[4786]: I1209 09:07:45.981215 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.120216 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3e17544-6307-4a46-b381-34744a99cbb5-scripts\") pod \"e3e17544-6307-4a46-b381-34744a99cbb5\" (UID: \"e3e17544-6307-4a46-b381-34744a99cbb5\") " Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.120332 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wg82p\" (UniqueName: \"kubernetes.io/projected/e3e17544-6307-4a46-b381-34744a99cbb5-kube-api-access-wg82p\") pod \"e3e17544-6307-4a46-b381-34744a99cbb5\" (UID: \"e3e17544-6307-4a46-b381-34744a99cbb5\") " Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.120385 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e3e17544-6307-4a46-b381-34744a99cbb5-etc-machine-id\") pod \"e3e17544-6307-4a46-b381-34744a99cbb5\" (UID: \"e3e17544-6307-4a46-b381-34744a99cbb5\") " Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.120960 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c-config-data\") pod \"4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c\" (UID: \"4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c\") " Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.121034 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3e17544-6307-4a46-b381-34744a99cbb5-config-data-custom\") pod \"e3e17544-6307-4a46-b381-34744a99cbb5\" (UID: \"e3e17544-6307-4a46-b381-34744a99cbb5\") " Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.121131 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e3e17544-6307-4a46-b381-34744a99cbb5-combined-ca-bundle\") pod \"e3e17544-6307-4a46-b381-34744a99cbb5\" (UID: \"e3e17544-6307-4a46-b381-34744a99cbb5\") " Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.121189 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c-combined-ca-bundle\") pod \"4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c\" (UID: \"4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c\") " Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.121261 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3e17544-6307-4a46-b381-34744a99cbb5-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e3e17544-6307-4a46-b381-34744a99cbb5" (UID: "e3e17544-6307-4a46-b381-34744a99cbb5"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.121315 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3e17544-6307-4a46-b381-34744a99cbb5-logs\") pod \"e3e17544-6307-4a46-b381-34744a99cbb5\" (UID: \"e3e17544-6307-4a46-b381-34744a99cbb5\") " Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.121528 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3e17544-6307-4a46-b381-34744a99cbb5-config-data\") pod \"e3e17544-6307-4a46-b381-34744a99cbb5\" (UID: \"e3e17544-6307-4a46-b381-34744a99cbb5\") " Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.121590 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c-db-sync-config-data\") pod \"4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c\" (UID: 
\"4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c\") " Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.121633 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrsfh\" (UniqueName: \"kubernetes.io/projected/4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c-kube-api-access-nrsfh\") pod \"4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c\" (UID: \"4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c\") " Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.122481 4786 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e3e17544-6307-4a46-b381-34744a99cbb5-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.124874 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3e17544-6307-4a46-b381-34744a99cbb5-logs" (OuterVolumeSpecName: "logs") pod "e3e17544-6307-4a46-b381-34744a99cbb5" (UID: "e3e17544-6307-4a46-b381-34744a99cbb5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.129594 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3e17544-6307-4a46-b381-34744a99cbb5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e3e17544-6307-4a46-b381-34744a99cbb5" (UID: "e3e17544-6307-4a46-b381-34744a99cbb5"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.129637 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c-kube-api-access-nrsfh" (OuterVolumeSpecName: "kube-api-access-nrsfh") pod "4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c" (UID: "4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c"). InnerVolumeSpecName "kube-api-access-nrsfh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.129743 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3e17544-6307-4a46-b381-34744a99cbb5-scripts" (OuterVolumeSpecName: "scripts") pod "e3e17544-6307-4a46-b381-34744a99cbb5" (UID: "e3e17544-6307-4a46-b381-34744a99cbb5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.132696 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c" (UID: "4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.141954 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3e17544-6307-4a46-b381-34744a99cbb5-kube-api-access-wg82p" (OuterVolumeSpecName: "kube-api-access-wg82p") pod "e3e17544-6307-4a46-b381-34744a99cbb5" (UID: "e3e17544-6307-4a46-b381-34744a99cbb5"). InnerVolumeSpecName "kube-api-access-wg82p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.172990 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c" (UID: "4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.175965 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3e17544-6307-4a46-b381-34744a99cbb5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e3e17544-6307-4a46-b381-34744a99cbb5" (UID: "e3e17544-6307-4a46-b381-34744a99cbb5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.210485 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.222775 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3e17544-6307-4a46-b381-34744a99cbb5-config-data" (OuterVolumeSpecName: "config-data") pod "e3e17544-6307-4a46-b381-34744a99cbb5" (UID: "e3e17544-6307-4a46-b381-34744a99cbb5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.223550 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9b641af-aaf9-493f-b738-89b62abb3e95-config-data\") pod \"b9b641af-aaf9-493f-b738-89b62abb3e95\" (UID: \"b9b641af-aaf9-493f-b738-89b62abb3e95\") " Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.223647 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9b641af-aaf9-493f-b738-89b62abb3e95-combined-ca-bundle\") pod \"b9b641af-aaf9-493f-b738-89b62abb3e95\" (UID: \"b9b641af-aaf9-493f-b738-89b62abb3e95\") " Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.223681 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9b641af-aaf9-493f-b738-89b62abb3e95-scripts\") pod \"b9b641af-aaf9-493f-b738-89b62abb3e95\" (UID: \"b9b641af-aaf9-493f-b738-89b62abb3e95\") " Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.223706 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b9b641af-aaf9-493f-b738-89b62abb3e95-etc-machine-id\") pod \"b9b641af-aaf9-493f-b738-89b62abb3e95\" (UID: \"b9b641af-aaf9-493f-b738-89b62abb3e95\") " Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.223730 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b9b641af-aaf9-493f-b738-89b62abb3e95-config-data-custom\") pod \"b9b641af-aaf9-493f-b738-89b62abb3e95\" (UID: \"b9b641af-aaf9-493f-b738-89b62abb3e95\") " Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.223777 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwbcv\" (UniqueName: 
\"kubernetes.io/projected/b9b641af-aaf9-493f-b738-89b62abb3e95-kube-api-access-zwbcv\") pod \"b9b641af-aaf9-493f-b738-89b62abb3e95\" (UID: \"b9b641af-aaf9-493f-b738-89b62abb3e95\") " Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.223817 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3e17544-6307-4a46-b381-34744a99cbb5-config-data\") pod \"e3e17544-6307-4a46-b381-34744a99cbb5\" (UID: \"e3e17544-6307-4a46-b381-34744a99cbb5\") " Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.223925 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b9b641af-aaf9-493f-b738-89b62abb3e95-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "b9b641af-aaf9-493f-b738-89b62abb3e95" (UID: "b9b641af-aaf9-493f-b738-89b62abb3e95"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 09:07:46 crc kubenswrapper[4786]: W1209 09:07:46.224394 4786 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/e3e17544-6307-4a46-b381-34744a99cbb5/volumes/kubernetes.io~secret/config-data Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.224638 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3e17544-6307-4a46-b381-34744a99cbb5-config-data" (OuterVolumeSpecName: "config-data") pod "e3e17544-6307-4a46-b381-34744a99cbb5" (UID: "e3e17544-6307-4a46-b381-34744a99cbb5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.224307 4786 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.225079 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrsfh\" (UniqueName: \"kubernetes.io/projected/4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c-kube-api-access-nrsfh\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.225183 4786 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e3e17544-6307-4a46-b381-34744a99cbb5-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.225299 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wg82p\" (UniqueName: \"kubernetes.io/projected/e3e17544-6307-4a46-b381-34744a99cbb5-kube-api-access-wg82p\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.225391 4786 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e3e17544-6307-4a46-b381-34744a99cbb5-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.225471 4786 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3e17544-6307-4a46-b381-34744a99cbb5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.225543 4786 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:46 crc 
kubenswrapper[4786]: I1209 09:07:46.225606 4786 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3e17544-6307-4a46-b381-34744a99cbb5-logs\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.229916 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9b641af-aaf9-493f-b738-89b62abb3e95-scripts" (OuterVolumeSpecName: "scripts") pod "b9b641af-aaf9-493f-b738-89b62abb3e95" (UID: "b9b641af-aaf9-493f-b738-89b62abb3e95"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.231920 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9b641af-aaf9-493f-b738-89b62abb3e95-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b9b641af-aaf9-493f-b738-89b62abb3e95" (UID: "b9b641af-aaf9-493f-b738-89b62abb3e95"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.249811 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9b641af-aaf9-493f-b738-89b62abb3e95-kube-api-access-zwbcv" (OuterVolumeSpecName: "kube-api-access-zwbcv") pod "b9b641af-aaf9-493f-b738-89b62abb3e95" (UID: "b9b641af-aaf9-493f-b738-89b62abb3e95"). InnerVolumeSpecName "kube-api-access-zwbcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.253355 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c-config-data" (OuterVolumeSpecName: "config-data") pod "4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c" (UID: "4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.288487 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58674975df-v9bj8" Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.310436 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9b641af-aaf9-493f-b738-89b62abb3e95-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b9b641af-aaf9-493f-b738-89b62abb3e95" (UID: "b9b641af-aaf9-493f-b738-89b62abb3e95"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.328520 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3-ovsdbserver-sb\") pod \"e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3\" (UID: \"e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3\") " Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.328757 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3-config\") pod \"e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3\" (UID: \"e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3\") " Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.328835 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3-dns-svc\") pod \"e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3\" (UID: \"e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3\") " Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.328876 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3-dns-swift-storage-0\") pod 
\"e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3\" (UID: \"e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3\") " Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.328945 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3-ovsdbserver-nb\") pod \"e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3\" (UID: \"e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3\") " Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.329081 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gngrz\" (UniqueName: \"kubernetes.io/projected/e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3-kube-api-access-gngrz\") pod \"e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3\" (UID: \"e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3\") " Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.329759 4786 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9b641af-aaf9-493f-b738-89b62abb3e95-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.329775 4786 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9b641af-aaf9-493f-b738-89b62abb3e95-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.329785 4786 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/b9b641af-aaf9-493f-b738-89b62abb3e95-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.329798 4786 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b9b641af-aaf9-493f-b738-89b62abb3e95-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.329809 4786 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-zwbcv\" (UniqueName: \"kubernetes.io/projected/b9b641af-aaf9-493f-b738-89b62abb3e95-kube-api-access-zwbcv\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.329820 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e3e17544-6307-4a46-b381-34744a99cbb5-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.329832 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.353971 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3-kube-api-access-gngrz" (OuterVolumeSpecName: "kube-api-access-gngrz") pod "e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3" (UID: "e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3"). InnerVolumeSpecName "kube-api-access-gngrz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.431507 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gngrz\" (UniqueName: \"kubernetes.io/projected/e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3-kube-api-access-gngrz\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.451642 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3" (UID: "e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.462646 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9b641af-aaf9-493f-b738-89b62abb3e95-config-data" (OuterVolumeSpecName: "config-data") pod "b9b641af-aaf9-493f-b738-89b62abb3e95" (UID: "b9b641af-aaf9-493f-b738-89b62abb3e95"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.479582 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3" (UID: "e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.483633 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3" (UID: "e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.518964 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3" (UID: "e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.536655 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3-config" (OuterVolumeSpecName: "config") pod "e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3" (UID: "e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.537523 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3-config\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.537560 4786 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.537571 4786 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.537585 4786 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.537599 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9b641af-aaf9-493f-b738-89b62abb3e95-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.537609 4786 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.694318 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.709753 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-hzbbl" Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.851526 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqwgb\" (UniqueName: \"kubernetes.io/projected/ed942767-e79e-40d4-ab0e-b47aff280e56-kube-api-access-nqwgb\") pod \"ed942767-e79e-40d4-ab0e-b47aff280e56\" (UID: \"ed942767-e79e-40d4-ab0e-b47aff280e56\") " Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.851596 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed942767-e79e-40d4-ab0e-b47aff280e56-config-data\") pod \"ed942767-e79e-40d4-ab0e-b47aff280e56\" (UID: \"ed942767-e79e-40d4-ab0e-b47aff280e56\") " Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.851676 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed942767-e79e-40d4-ab0e-b47aff280e56-scripts\") pod \"ed942767-e79e-40d4-ab0e-b47aff280e56\" (UID: \"ed942767-e79e-40d4-ab0e-b47aff280e56\") " Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.851778 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1fe0b12b-06a5-45ae-8a51-073fb093cd54-config\") pod \"1fe0b12b-06a5-45ae-8a51-073fb093cd54\" (UID: \"1fe0b12b-06a5-45ae-8a51-073fb093cd54\") " Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.851806 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-ph7lk\" (UniqueName: \"kubernetes.io/projected/1fe0b12b-06a5-45ae-8a51-073fb093cd54-kube-api-access-ph7lk\") pod \"1fe0b12b-06a5-45ae-8a51-073fb093cd54\" (UID: \"1fe0b12b-06a5-45ae-8a51-073fb093cd54\") " Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.851824 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1fe0b12b-06a5-45ae-8a51-073fb093cd54-combined-ca-bundle\") pod \"1fe0b12b-06a5-45ae-8a51-073fb093cd54\" (UID: \"1fe0b12b-06a5-45ae-8a51-073fb093cd54\") " Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.851861 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ed942767-e79e-40d4-ab0e-b47aff280e56-sg-core-conf-yaml\") pod \"ed942767-e79e-40d4-ab0e-b47aff280e56\" (UID: \"ed942767-e79e-40d4-ab0e-b47aff280e56\") " Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.851984 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed942767-e79e-40d4-ab0e-b47aff280e56-run-httpd\") pod \"ed942767-e79e-40d4-ab0e-b47aff280e56\" (UID: \"ed942767-e79e-40d4-ab0e-b47aff280e56\") " Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.852045 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed942767-e79e-40d4-ab0e-b47aff280e56-combined-ca-bundle\") pod \"ed942767-e79e-40d4-ab0e-b47aff280e56\" (UID: \"ed942767-e79e-40d4-ab0e-b47aff280e56\") " Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.852081 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed942767-e79e-40d4-ab0e-b47aff280e56-log-httpd\") pod \"ed942767-e79e-40d4-ab0e-b47aff280e56\" (UID: \"ed942767-e79e-40d4-ab0e-b47aff280e56\") " Dec 09 
09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.853492 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed942767-e79e-40d4-ab0e-b47aff280e56-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ed942767-e79e-40d4-ab0e-b47aff280e56" (UID: "ed942767-e79e-40d4-ab0e-b47aff280e56"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.854104 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed942767-e79e-40d4-ab0e-b47aff280e56-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ed942767-e79e-40d4-ab0e-b47aff280e56" (UID: "ed942767-e79e-40d4-ab0e-b47aff280e56"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.870045 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed942767-e79e-40d4-ab0e-b47aff280e56-scripts" (OuterVolumeSpecName: "scripts") pod "ed942767-e79e-40d4-ab0e-b47aff280e56" (UID: "ed942767-e79e-40d4-ab0e-b47aff280e56"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.874887 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fe0b12b-06a5-45ae-8a51-073fb093cd54-kube-api-access-ph7lk" (OuterVolumeSpecName: "kube-api-access-ph7lk") pod "1fe0b12b-06a5-45ae-8a51-073fb093cd54" (UID: "1fe0b12b-06a5-45ae-8a51-073fb093cd54"). InnerVolumeSpecName "kube-api-access-ph7lk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.882233 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed942767-e79e-40d4-ab0e-b47aff280e56-kube-api-access-nqwgb" (OuterVolumeSpecName: "kube-api-access-nqwgb") pod "ed942767-e79e-40d4-ab0e-b47aff280e56" (UID: "ed942767-e79e-40d4-ab0e-b47aff280e56"). InnerVolumeSpecName "kube-api-access-nqwgb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.886474 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"b9b641af-aaf9-493f-b738-89b62abb3e95","Type":"ContainerDied","Data":"32d68de9a99a370e44035f4235764d1f574fde522d667f61bbf3b97138ceb2ac"} Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.886558 4786 scope.go:117] "RemoveContainer" containerID="328475f5d527b288ab9ea75bdeae456b4281611e024661ac1157e83f377af00e" Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.886792 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 09 09:07:46 crc kubenswrapper[4786]: I1209 09:07:46.966469 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="e3e17544-6307-4a46-b381-34744a99cbb5" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.176:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.063998 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58674975df-v9bj8" event={"ID":"e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3","Type":"ContainerDied","Data":"41e993ea5cfe69ac4a19e13a624b7bdadbd6b55190c27fe16d1701ea3e81c978"} Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.064277 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58674975df-v9bj8" Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.070155 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fe0b12b-06a5-45ae-8a51-073fb093cd54-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1fe0b12b-06a5-45ae-8a51-073fb093cd54" (UID: "1fe0b12b-06a5-45ae-8a51-073fb093cd54"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.080413 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1fe0b12b-06a5-45ae-8a51-073fb093cd54-config" (OuterVolumeSpecName: "config") pod "1fe0b12b-06a5-45ae-8a51-073fb093cd54" (UID: "1fe0b12b-06a5-45ae-8a51-073fb093cd54"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.096053 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.097184 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ed942767-e79e-40d4-ab0e-b47aff280e56","Type":"ContainerDied","Data":"56cb03be99c6153d2e0d95d226077b73796cc7cdc5d07472ed69123e94a10c13"} Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.114005 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.114674 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-hzbbl" Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.114784 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-hzbbl" event={"ID":"1fe0b12b-06a5-45ae-8a51-073fb093cd54","Type":"ContainerDied","Data":"e89c6020a450c4a6b67bf0b0224de9ad7c70944131d665069740cf1c065c2701"} Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.114811 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e89c6020a450c4a6b67bf0b0224de9ad7c70944131d665069740cf1c065c2701" Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.114882 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-6m994" Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.160531 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed942767-e79e-40d4-ab0e-b47aff280e56-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ed942767-e79e-40d4-ab0e-b47aff280e56" (UID: "ed942767-e79e-40d4-ab0e-b47aff280e56"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.182682 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-mxxjn"] Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.184649 4786 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed942767-e79e-40d4-ab0e-b47aff280e56-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.184707 4786 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ed942767-e79e-40d4-ab0e-b47aff280e56-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.184719 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqwgb\" (UniqueName: \"kubernetes.io/projected/ed942767-e79e-40d4-ab0e-b47aff280e56-kube-api-access-nqwgb\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.184763 4786 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ed942767-e79e-40d4-ab0e-b47aff280e56-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.184776 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/1fe0b12b-06a5-45ae-8a51-073fb093cd54-config\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.184786 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ph7lk\" (UniqueName: \"kubernetes.io/projected/1fe0b12b-06a5-45ae-8a51-073fb093cd54-kube-api-access-ph7lk\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.184801 4786 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1fe0b12b-06a5-45ae-8a51-073fb093cd54-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.184817 4786 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ed942767-e79e-40d4-ab0e-b47aff280e56-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.227885 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed942767-e79e-40d4-ab0e-b47aff280e56-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ed942767-e79e-40d4-ab0e-b47aff280e56" (UID: "ed942767-e79e-40d4-ab0e-b47aff280e56"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.287476 4786 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ed942767-e79e-40d4-ab0e-b47aff280e56-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.486612 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed942767-e79e-40d4-ab0e-b47aff280e56-config-data" (OuterVolumeSpecName: "config-data") pod "ed942767-e79e-40d4-ab0e-b47aff280e56" (UID: "ed942767-e79e-40d4-ab0e-b47aff280e56"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.534193 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ed942767-e79e-40d4-ab0e-b47aff280e56-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.953316 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-bvc2g"] Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.953369 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-mzjbq"] Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.953381 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-55bfbb9895-lchg7"] Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.953394 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6999f7dcbd-6rspb"] Dec 09 09:07:47 crc kubenswrapper[4786]: E1209 09:07:47.960121 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1fe0b12b-06a5-45ae-8a51-073fb093cd54" containerName="neutron-db-sync" Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.962156 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="1fe0b12b-06a5-45ae-8a51-073fb093cd54" containerName="neutron-db-sync" Dec 09 09:07:47 crc kubenswrapper[4786]: E1209 09:07:47.962176 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed942767-e79e-40d4-ab0e-b47aff280e56" containerName="ceilometer-central-agent" Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.962183 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed942767-e79e-40d4-ab0e-b47aff280e56" containerName="ceilometer-central-agent" Dec 09 09:07:47 crc kubenswrapper[4786]: E1209 09:07:47.962211 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed942767-e79e-40d4-ab0e-b47aff280e56" containerName="sg-core" Dec 09 09:07:47 crc 
kubenswrapper[4786]: I1209 09:07:47.962218 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed942767-e79e-40d4-ab0e-b47aff280e56" containerName="sg-core" Dec 09 09:07:47 crc kubenswrapper[4786]: E1209 09:07:47.962231 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed942767-e79e-40d4-ab0e-b47aff280e56" containerName="ceilometer-notification-agent" Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.962238 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed942767-e79e-40d4-ab0e-b47aff280e56" containerName="ceilometer-notification-agent" Dec 09 09:07:47 crc kubenswrapper[4786]: E1209 09:07:47.962258 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed942767-e79e-40d4-ab0e-b47aff280e56" containerName="proxy-httpd" Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.962264 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed942767-e79e-40d4-ab0e-b47aff280e56" containerName="proxy-httpd" Dec 09 09:07:47 crc kubenswrapper[4786]: E1209 09:07:47.962288 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3" containerName="init" Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.962294 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3" containerName="init" Dec 09 09:07:47 crc kubenswrapper[4786]: E1209 09:07:47.962325 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c" containerName="glance-db-sync" Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.962333 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c" containerName="glance-db-sync" Dec 09 09:07:47 crc kubenswrapper[4786]: E1209 09:07:47.962360 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9b641af-aaf9-493f-b738-89b62abb3e95" containerName="cinder-scheduler" Dec 09 09:07:47 crc kubenswrapper[4786]: 
I1209 09:07:47.962367 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9b641af-aaf9-493f-b738-89b62abb3e95" containerName="cinder-scheduler" Dec 09 09:07:47 crc kubenswrapper[4786]: E1209 09:07:47.962378 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9b641af-aaf9-493f-b738-89b62abb3e95" containerName="probe" Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.962384 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9b641af-aaf9-493f-b738-89b62abb3e95" containerName="probe" Dec 09 09:07:47 crc kubenswrapper[4786]: E1209 09:07:47.962402 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3e17544-6307-4a46-b381-34744a99cbb5" containerName="cinder-api" Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.962409 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3e17544-6307-4a46-b381-34744a99cbb5" containerName="cinder-api" Dec 09 09:07:47 crc kubenswrapper[4786]: E1209 09:07:47.962465 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3" containerName="dnsmasq-dns" Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.962473 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3" containerName="dnsmasq-dns" Dec 09 09:07:47 crc kubenswrapper[4786]: E1209 09:07:47.962490 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3e17544-6307-4a46-b381-34744a99cbb5" containerName="cinder-api-log" Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.962496 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3e17544-6307-4a46-b381-34744a99cbb5" containerName="cinder-api-log" Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.963516 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3e17544-6307-4a46-b381-34744a99cbb5" containerName="cinder-api-log" Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.963533 4786 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="ed942767-e79e-40d4-ab0e-b47aff280e56" containerName="proxy-httpd" Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.963547 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3" containerName="dnsmasq-dns" Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.963576 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed942767-e79e-40d4-ab0e-b47aff280e56" containerName="ceilometer-notification-agent" Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.963587 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9b641af-aaf9-493f-b738-89b62abb3e95" containerName="cinder-scheduler" Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.963596 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="1fe0b12b-06a5-45ae-8a51-073fb093cd54" containerName="neutron-db-sync" Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.963608 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3e17544-6307-4a46-b381-34744a99cbb5" containerName="cinder-api" Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.963618 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed942767-e79e-40d4-ab0e-b47aff280e56" containerName="ceilometer-central-agent" Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.966154 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed942767-e79e-40d4-ab0e-b47aff280e56" containerName="sg-core" Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.966174 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9b641af-aaf9-493f-b738-89b62abb3e95" containerName="probe" Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.966181 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c" containerName="glance-db-sync" Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.970205 4786 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6999f7dcbd-6rspb"] Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.971542 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5fcc566bf9-5h2pv"] Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.974019 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6999f7dcbd-6rspb" Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.981728 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.982018 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.982278 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-j25lk" Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.982548 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.982706 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fcc566bf9-5h2pv"] Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.986705 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fcc566bf9-5h2pv"] Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.986755 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f47965bdf-2nd22"] Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.988287 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fcc566bf9-5h2pv" Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.991097 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f47965bdf-2nd22"] Dec 09 09:07:47 crc kubenswrapper[4786]: I1209 09:07:47.991253 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f47965bdf-2nd22" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.045472 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a1bf94f1-1c69-4999-bd4f-202175e2df7c-httpd-config\") pod \"neutron-6999f7dcbd-6rspb\" (UID: \"a1bf94f1-1c69-4999-bd4f-202175e2df7c\") " pod="openstack/neutron-6999f7dcbd-6rspb" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.045750 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a1bf94f1-1c69-4999-bd4f-202175e2df7c-config\") pod \"neutron-6999f7dcbd-6rspb\" (UID: \"a1bf94f1-1c69-4999-bd4f-202175e2df7c\") " pod="openstack/neutron-6999f7dcbd-6rspb" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.045841 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1bf94f1-1c69-4999-bd4f-202175e2df7c-ovndb-tls-certs\") pod \"neutron-6999f7dcbd-6rspb\" (UID: \"a1bf94f1-1c69-4999-bd4f-202175e2df7c\") " pod="openstack/neutron-6999f7dcbd-6rspb" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.045971 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1bf94f1-1c69-4999-bd4f-202175e2df7c-combined-ca-bundle\") pod \"neutron-6999f7dcbd-6rspb\" (UID: \"a1bf94f1-1c69-4999-bd4f-202175e2df7c\") " pod="openstack/neutron-6999f7dcbd-6rspb" Dec 
09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.046117 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nzww\" (UniqueName: \"kubernetes.io/projected/a1bf94f1-1c69-4999-bd4f-202175e2df7c-kube-api-access-9nzww\") pod \"neutron-6999f7dcbd-6rspb\" (UID: \"a1bf94f1-1c69-4999-bd4f-202175e2df7c\") " pod="openstack/neutron-6999f7dcbd-6rspb" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.142752 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mzjbq" event={"ID":"f678c744-2a47-4f18-8e01-6f438a6e46e5","Type":"ContainerStarted","Data":"542f882362c9bb6fc5a23b2a35c6949d67f6ee570a4475e624a8a8c8a771e29b"} Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.147607 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/954dc35f-7a55-408f-8d7c-2ad4420dc25e-dns-swift-storage-0\") pod \"dnsmasq-dns-5f47965bdf-2nd22\" (UID: \"954dc35f-7a55-408f-8d7c-2ad4420dc25e\") " pod="openstack/dnsmasq-dns-5f47965bdf-2nd22" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.147698 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hp62p\" (UniqueName: \"kubernetes.io/projected/954dc35f-7a55-408f-8d7c-2ad4420dc25e-kube-api-access-hp62p\") pod \"dnsmasq-dns-5f47965bdf-2nd22\" (UID: \"954dc35f-7a55-408f-8d7c-2ad4420dc25e\") " pod="openstack/dnsmasq-dns-5f47965bdf-2nd22" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.147732 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a1bf94f1-1c69-4999-bd4f-202175e2df7c-httpd-config\") pod \"neutron-6999f7dcbd-6rspb\" (UID: \"a1bf94f1-1c69-4999-bd4f-202175e2df7c\") " pod="openstack/neutron-6999f7dcbd-6rspb" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.147759 
4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a1bf94f1-1c69-4999-bd4f-202175e2df7c-config\") pod \"neutron-6999f7dcbd-6rspb\" (UID: \"a1bf94f1-1c69-4999-bd4f-202175e2df7c\") " pod="openstack/neutron-6999f7dcbd-6rspb" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.147782 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1bf94f1-1c69-4999-bd4f-202175e2df7c-ovndb-tls-certs\") pod \"neutron-6999f7dcbd-6rspb\" (UID: \"a1bf94f1-1c69-4999-bd4f-202175e2df7c\") " pod="openstack/neutron-6999f7dcbd-6rspb" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.147799 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/954dc35f-7a55-408f-8d7c-2ad4420dc25e-ovsdbserver-nb\") pod \"dnsmasq-dns-5f47965bdf-2nd22\" (UID: \"954dc35f-7a55-408f-8d7c-2ad4420dc25e\") " pod="openstack/dnsmasq-dns-5f47965bdf-2nd22" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.147813 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/954dc35f-7a55-408f-8d7c-2ad4420dc25e-ovsdbserver-sb\") pod \"dnsmasq-dns-5f47965bdf-2nd22\" (UID: \"954dc35f-7a55-408f-8d7c-2ad4420dc25e\") " pod="openstack/dnsmasq-dns-5f47965bdf-2nd22" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.147851 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1bf94f1-1c69-4999-bd4f-202175e2df7c-combined-ca-bundle\") pod \"neutron-6999f7dcbd-6rspb\" (UID: \"a1bf94f1-1c69-4999-bd4f-202175e2df7c\") " pod="openstack/neutron-6999f7dcbd-6rspb" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.147888 4786 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/954dc35f-7a55-408f-8d7c-2ad4420dc25e-config\") pod \"dnsmasq-dns-5f47965bdf-2nd22\" (UID: \"954dc35f-7a55-408f-8d7c-2ad4420dc25e\") " pod="openstack/dnsmasq-dns-5f47965bdf-2nd22" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.147937 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/954dc35f-7a55-408f-8d7c-2ad4420dc25e-dns-svc\") pod \"dnsmasq-dns-5f47965bdf-2nd22\" (UID: \"954dc35f-7a55-408f-8d7c-2ad4420dc25e\") " pod="openstack/dnsmasq-dns-5f47965bdf-2nd22" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.147973 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nzww\" (UniqueName: \"kubernetes.io/projected/a1bf94f1-1c69-4999-bd4f-202175e2df7c-kube-api-access-9nzww\") pod \"neutron-6999f7dcbd-6rspb\" (UID: \"a1bf94f1-1c69-4999-bd4f-202175e2df7c\") " pod="openstack/neutron-6999f7dcbd-6rspb" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.149292 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-55bfbb9895-lchg7" event={"ID":"025e29a5-c1a7-46fe-a47d-4b3248fd6320","Type":"ContainerStarted","Data":"62a9793432d6675c1059ddc5d901e7766b1449d67fb532ba72ee5b2f80662871"} Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.154835 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1bf94f1-1c69-4999-bd4f-202175e2df7c-ovndb-tls-certs\") pod \"neutron-6999f7dcbd-6rspb\" (UID: \"a1bf94f1-1c69-4999-bd4f-202175e2df7c\") " pod="openstack/neutron-6999f7dcbd-6rspb" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.155376 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: 
\"kubernetes.io/secret/a1bf94f1-1c69-4999-bd4f-202175e2df7c-httpd-config\") pod \"neutron-6999f7dcbd-6rspb\" (UID: \"a1bf94f1-1c69-4999-bd4f-202175e2df7c\") " pod="openstack/neutron-6999f7dcbd-6rspb" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.157376 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1bf94f1-1c69-4999-bd4f-202175e2df7c-combined-ca-bundle\") pod \"neutron-6999f7dcbd-6rspb\" (UID: \"a1bf94f1-1c69-4999-bd4f-202175e2df7c\") " pod="openstack/neutron-6999f7dcbd-6rspb" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.157479 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-mxxjn" event={"ID":"87bf6602-f74b-49e8-874d-1ea9bbf6ec7b","Type":"ContainerStarted","Data":"5392d99607be9caa3dcca56e79355f5d3f4b52912a875e57dd3e8b9183e39e94"} Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.161155 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/a1bf94f1-1c69-4999-bd4f-202175e2df7c-config\") pod \"neutron-6999f7dcbd-6rspb\" (UID: \"a1bf94f1-1c69-4999-bd4f-202175e2df7c\") " pod="openstack/neutron-6999f7dcbd-6rspb" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.168594 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-bvc2g" event={"ID":"6046a22f-fd23-407b-a9ae-0d02d4b41170","Type":"ContainerStarted","Data":"27d3cb048b6c4b797dda9591d01867d72aed8f71a34280d11ef365ad96675254"} Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.174294 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nzww\" (UniqueName: \"kubernetes.io/projected/a1bf94f1-1c69-4999-bd4f-202175e2df7c-kube-api-access-9nzww\") pod \"neutron-6999f7dcbd-6rspb\" (UID: \"a1bf94f1-1c69-4999-bd4f-202175e2df7c\") " pod="openstack/neutron-6999f7dcbd-6rspb" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 
09:07:48.223870 4786 scope.go:117] "RemoveContainer" containerID="d14557c51a504282fd02d7c000ab7967a88368bd9a1f363820ea87544ec58507" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.228516 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.239875 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fcc566bf9-5h2pv" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.249527 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hp62p\" (UniqueName: \"kubernetes.io/projected/954dc35f-7a55-408f-8d7c-2ad4420dc25e-kube-api-access-hp62p\") pod \"dnsmasq-dns-5f47965bdf-2nd22\" (UID: \"954dc35f-7a55-408f-8d7c-2ad4420dc25e\") " pod="openstack/dnsmasq-dns-5f47965bdf-2nd22" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.249589 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/954dc35f-7a55-408f-8d7c-2ad4420dc25e-ovsdbserver-nb\") pod \"dnsmasq-dns-5f47965bdf-2nd22\" (UID: \"954dc35f-7a55-408f-8d7c-2ad4420dc25e\") " pod="openstack/dnsmasq-dns-5f47965bdf-2nd22" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.249610 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/954dc35f-7a55-408f-8d7c-2ad4420dc25e-ovsdbserver-sb\") pod \"dnsmasq-dns-5f47965bdf-2nd22\" (UID: \"954dc35f-7a55-408f-8d7c-2ad4420dc25e\") " pod="openstack/dnsmasq-dns-5f47965bdf-2nd22" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.249671 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/954dc35f-7a55-408f-8d7c-2ad4420dc25e-config\") pod \"dnsmasq-dns-5f47965bdf-2nd22\" (UID: \"954dc35f-7a55-408f-8d7c-2ad4420dc25e\") " 
pod="openstack/dnsmasq-dns-5f47965bdf-2nd22" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.249717 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/954dc35f-7a55-408f-8d7c-2ad4420dc25e-dns-svc\") pod \"dnsmasq-dns-5f47965bdf-2nd22\" (UID: \"954dc35f-7a55-408f-8d7c-2ad4420dc25e\") " pod="openstack/dnsmasq-dns-5f47965bdf-2nd22" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.249763 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/954dc35f-7a55-408f-8d7c-2ad4420dc25e-dns-swift-storage-0\") pod \"dnsmasq-dns-5f47965bdf-2nd22\" (UID: \"954dc35f-7a55-408f-8d7c-2ad4420dc25e\") " pod="openstack/dnsmasq-dns-5f47965bdf-2nd22" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.250689 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/954dc35f-7a55-408f-8d7c-2ad4420dc25e-dns-swift-storage-0\") pod \"dnsmasq-dns-5f47965bdf-2nd22\" (UID: \"954dc35f-7a55-408f-8d7c-2ad4420dc25e\") " pod="openstack/dnsmasq-dns-5f47965bdf-2nd22" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.250689 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/954dc35f-7a55-408f-8d7c-2ad4420dc25e-ovsdbserver-nb\") pod \"dnsmasq-dns-5f47965bdf-2nd22\" (UID: \"954dc35f-7a55-408f-8d7c-2ad4420dc25e\") " pod="openstack/dnsmasq-dns-5f47965bdf-2nd22" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.251505 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/954dc35f-7a55-408f-8d7c-2ad4420dc25e-config\") pod \"dnsmasq-dns-5f47965bdf-2nd22\" (UID: \"954dc35f-7a55-408f-8d7c-2ad4420dc25e\") " pod="openstack/dnsmasq-dns-5f47965bdf-2nd22" Dec 09 09:07:48 crc kubenswrapper[4786]: 
I1209 09:07:48.251999 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/954dc35f-7a55-408f-8d7c-2ad4420dc25e-dns-svc\") pod \"dnsmasq-dns-5f47965bdf-2nd22\" (UID: \"954dc35f-7a55-408f-8d7c-2ad4420dc25e\") " pod="openstack/dnsmasq-dns-5f47965bdf-2nd22" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.252937 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/954dc35f-7a55-408f-8d7c-2ad4420dc25e-ovsdbserver-sb\") pod \"dnsmasq-dns-5f47965bdf-2nd22\" (UID: \"954dc35f-7a55-408f-8d7c-2ad4420dc25e\") " pod="openstack/dnsmasq-dns-5f47965bdf-2nd22" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.267154 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.288329 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hp62p\" (UniqueName: \"kubernetes.io/projected/954dc35f-7a55-408f-8d7c-2ad4420dc25e-kube-api-access-hp62p\") pod \"dnsmasq-dns-5f47965bdf-2nd22\" (UID: \"954dc35f-7a55-408f-8d7c-2ad4420dc25e\") " pod="openstack/dnsmasq-dns-5f47965bdf-2nd22" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.288476 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.300706 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.305278 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.305840 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-hjqlg" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.307892 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.330755 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.331305 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.373280 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8grp9\" (UniqueName: \"kubernetes.io/projected/0afc7d13-3b0f-4919-ab29-4d328c815a8a-kube-api-access-8grp9\") pod \"cinder-scheduler-0\" (UID: \"0afc7d13-3b0f-4919-ab29-4d328c815a8a\") " pod="openstack/cinder-scheduler-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.373442 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0afc7d13-3b0f-4919-ab29-4d328c815a8a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0afc7d13-3b0f-4919-ab29-4d328c815a8a\") " pod="openstack/cinder-scheduler-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.373496 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0afc7d13-3b0f-4919-ab29-4d328c815a8a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: 
\"0afc7d13-3b0f-4919-ab29-4d328c815a8a\") " pod="openstack/cinder-scheduler-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.374632 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6999f7dcbd-6rspb" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.376130 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0afc7d13-3b0f-4919-ab29-4d328c815a8a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0afc7d13-3b0f-4919-ab29-4d328c815a8a\") " pod="openstack/cinder-scheduler-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.376350 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0afc7d13-3b0f-4919-ab29-4d328c815a8a-config-data\") pod \"cinder-scheduler-0\" (UID: \"0afc7d13-3b0f-4919-ab29-4d328c815a8a\") " pod="openstack/cinder-scheduler-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.376514 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0afc7d13-3b0f-4919-ab29-4d328c815a8a-scripts\") pod \"cinder-scheduler-0\" (UID: \"0afc7d13-3b0f-4919-ab29-4d328c815a8a\") " pod="openstack/cinder-scheduler-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.380101 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f47965bdf-2nd22" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.408295 4786 scope.go:117] "RemoveContainer" containerID="7b6e5a18d6b802ca8e30f1648c5887a6844c79e83461fcce814415a98636eb66" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.460478 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.482281 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0afc7d13-3b0f-4919-ab29-4d328c815a8a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0afc7d13-3b0f-4919-ab29-4d328c815a8a\") " pod="openstack/cinder-scheduler-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.482450 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0afc7d13-3b0f-4919-ab29-4d328c815a8a-config-data\") pod \"cinder-scheduler-0\" (UID: \"0afc7d13-3b0f-4919-ab29-4d328c815a8a\") " pod="openstack/cinder-scheduler-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.482512 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0afc7d13-3b0f-4919-ab29-4d328c815a8a-scripts\") pod \"cinder-scheduler-0\" (UID: \"0afc7d13-3b0f-4919-ab29-4d328c815a8a\") " pod="openstack/cinder-scheduler-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.482696 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8grp9\" (UniqueName: \"kubernetes.io/projected/0afc7d13-3b0f-4919-ab29-4d328c815a8a-kube-api-access-8grp9\") pod \"cinder-scheduler-0\" (UID: \"0afc7d13-3b0f-4919-ab29-4d328c815a8a\") " pod="openstack/cinder-scheduler-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.482735 4786 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0afc7d13-3b0f-4919-ab29-4d328c815a8a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0afc7d13-3b0f-4919-ab29-4d328c815a8a\") " pod="openstack/cinder-scheduler-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.482754 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0afc7d13-3b0f-4919-ab29-4d328c815a8a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0afc7d13-3b0f-4919-ab29-4d328c815a8a\") " pod="openstack/cinder-scheduler-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.482874 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/0afc7d13-3b0f-4919-ab29-4d328c815a8a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"0afc7d13-3b0f-4919-ab29-4d328c815a8a\") " pod="openstack/cinder-scheduler-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.484309 4786 scope.go:117] "RemoveContainer" containerID="c9fecc3d360b8ad5eb49e52a978cb66c6b9c79037db24235b504d3d913096330" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.519090 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.522247 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0afc7d13-3b0f-4919-ab29-4d328c815a8a-scripts\") pod \"cinder-scheduler-0\" (UID: \"0afc7d13-3b0f-4919-ab29-4d328c815a8a\") " pod="openstack/cinder-scheduler-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.523972 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0afc7d13-3b0f-4919-ab29-4d328c815a8a-config-data\") pod \"cinder-scheduler-0\" (UID: \"0afc7d13-3b0f-4919-ab29-4d328c815a8a\") " pod="openstack/cinder-scheduler-0" Dec 
09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.541508 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.543737 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0afc7d13-3b0f-4919-ab29-4d328c815a8a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"0afc7d13-3b0f-4919-ab29-4d328c815a8a\") " pod="openstack/cinder-scheduler-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.543758 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0afc7d13-3b0f-4919-ab29-4d328c815a8a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"0afc7d13-3b0f-4919-ab29-4d328c815a8a\") " pod="openstack/cinder-scheduler-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.553855 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.559952 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.562471 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8grp9\" (UniqueName: \"kubernetes.io/projected/0afc7d13-3b0f-4919-ab29-4d328c815a8a-kube-api-access-8grp9\") pod \"cinder-scheduler-0\" (UID: \"0afc7d13-3b0f-4919-ab29-4d328c815a8a\") " pod="openstack/cinder-scheduler-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.563352 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.563468 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.563572 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.567916 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.569308 4786 scope.go:117] "RemoveContainer" containerID="df8141895c8fa8895c7c7b8f42c713557a3595cc3050fd8795442d51ab285ed7" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.584038 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.619897 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58674975df-v9bj8"] Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.675019 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58674975df-v9bj8"] Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.688242 4786 scope.go:117] "RemoveContainer" containerID="b700ef67f4e393732db218ca53aeec390ac5e4b939658ca23c18f5e0c0f5477b" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.688594 4786 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb8b460d-7b22-4853-b592-ea61d203e5c1-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"bb8b460d-7b22-4853-b592-ea61d203e5c1\") " pod="openstack/cinder-api-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.688728 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb8b460d-7b22-4853-b592-ea61d203e5c1-scripts\") pod \"cinder-api-0\" (UID: \"bb8b460d-7b22-4853-b592-ea61d203e5c1\") " pod="openstack/cinder-api-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.688786 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb8b460d-7b22-4853-b592-ea61d203e5c1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"bb8b460d-7b22-4853-b592-ea61d203e5c1\") " pod="openstack/cinder-api-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.688814 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb8b460d-7b22-4853-b592-ea61d203e5c1-config-data-custom\") pod \"cinder-api-0\" (UID: \"bb8b460d-7b22-4853-b592-ea61d203e5c1\") " pod="openstack/cinder-api-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.688879 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb8b460d-7b22-4853-b592-ea61d203e5c1-logs\") pod \"cinder-api-0\" (UID: \"bb8b460d-7b22-4853-b592-ea61d203e5c1\") " pod="openstack/cinder-api-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.688895 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/bb8b460d-7b22-4853-b592-ea61d203e5c1-public-tls-certs\") pod \"cinder-api-0\" (UID: \"bb8b460d-7b22-4853-b592-ea61d203e5c1\") " pod="openstack/cinder-api-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.688916 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bb8b460d-7b22-4853-b592-ea61d203e5c1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"bb8b460d-7b22-4853-b592-ea61d203e5c1\") " pod="openstack/cinder-api-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.688970 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cz94\" (UniqueName: \"kubernetes.io/projected/bb8b460d-7b22-4853-b592-ea61d203e5c1-kube-api-access-7cz94\") pod \"cinder-api-0\" (UID: \"bb8b460d-7b22-4853-b592-ea61d203e5c1\") " pod="openstack/cinder-api-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.688998 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb8b460d-7b22-4853-b592-ea61d203e5c1-config-data\") pod \"cinder-api-0\" (UID: \"bb8b460d-7b22-4853-b592-ea61d203e5c1\") " pod="openstack/cinder-api-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.696051 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.698823 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.701994 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.702340 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.703221 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-h4v94" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.703344 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.755559 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.758735 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.762337 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.770121 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.790673 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dbe789f-f7cf-43e1-8320-51a882827757-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3dbe789f-f7cf-43e1-8320-51a882827757\") " pod="openstack/glance-default-external-api-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.790739 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3dbe789f-f7cf-43e1-8320-51a882827757-config-data\") pod \"glance-default-external-api-0\" (UID: \"3dbe789f-f7cf-43e1-8320-51a882827757\") " pod="openstack/glance-default-external-api-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.790763 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcrkt\" (UniqueName: \"kubernetes.io/projected/3dbe789f-f7cf-43e1-8320-51a882827757-kube-api-access-mcrkt\") pod \"glance-default-external-api-0\" (UID: \"3dbe789f-f7cf-43e1-8320-51a882827757\") " pod="openstack/glance-default-external-api-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.790828 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb8b460d-7b22-4853-b592-ea61d203e5c1-scripts\") pod \"cinder-api-0\" (UID: \"bb8b460d-7b22-4853-b592-ea61d203e5c1\") " pod="openstack/cinder-api-0" Dec 09 09:07:48 crc 
kubenswrapper[4786]: I1209 09:07:48.790865 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb8b460d-7b22-4853-b592-ea61d203e5c1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"bb8b460d-7b22-4853-b592-ea61d203e5c1\") " pod="openstack/cinder-api-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.790890 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3dbe789f-f7cf-43e1-8320-51a882827757-logs\") pod \"glance-default-external-api-0\" (UID: \"3dbe789f-f7cf-43e1-8320-51a882827757\") " pod="openstack/glance-default-external-api-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.790908 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb8b460d-7b22-4853-b592-ea61d203e5c1-config-data-custom\") pod \"cinder-api-0\" (UID: \"bb8b460d-7b22-4853-b592-ea61d203e5c1\") " pod="openstack/cinder-api-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.790950 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb8b460d-7b22-4853-b592-ea61d203e5c1-logs\") pod \"cinder-api-0\" (UID: \"bb8b460d-7b22-4853-b592-ea61d203e5c1\") " pod="openstack/cinder-api-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.790966 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb8b460d-7b22-4853-b592-ea61d203e5c1-public-tls-certs\") pod \"cinder-api-0\" (UID: \"bb8b460d-7b22-4853-b592-ea61d203e5c1\") " pod="openstack/cinder-api-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.790987 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/bb8b460d-7b22-4853-b592-ea61d203e5c1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"bb8b460d-7b22-4853-b592-ea61d203e5c1\") " pod="openstack/cinder-api-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.791022 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cz94\" (UniqueName: \"kubernetes.io/projected/bb8b460d-7b22-4853-b592-ea61d203e5c1-kube-api-access-7cz94\") pod \"cinder-api-0\" (UID: \"bb8b460d-7b22-4853-b592-ea61d203e5c1\") " pod="openstack/cinder-api-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.791043 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3dbe789f-f7cf-43e1-8320-51a882827757-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3dbe789f-f7cf-43e1-8320-51a882827757\") " pod="openstack/glance-default-external-api-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.791063 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb8b460d-7b22-4853-b592-ea61d203e5c1-config-data\") pod \"cinder-api-0\" (UID: \"bb8b460d-7b22-4853-b592-ea61d203e5c1\") " pod="openstack/cinder-api-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.791081 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3dbe789f-f7cf-43e1-8320-51a882827757-scripts\") pod \"glance-default-external-api-0\" (UID: \"3dbe789f-f7cf-43e1-8320-51a882827757\") " pod="openstack/glance-default-external-api-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.791116 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: 
\"3dbe789f-f7cf-43e1-8320-51a882827757\") " pod="openstack/glance-default-external-api-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.791144 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb8b460d-7b22-4853-b592-ea61d203e5c1-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"bb8b460d-7b22-4853-b592-ea61d203e5c1\") " pod="openstack/cinder-api-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.797254 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb8b460d-7b22-4853-b592-ea61d203e5c1-logs\") pod \"cinder-api-0\" (UID: \"bb8b460d-7b22-4853-b592-ea61d203e5c1\") " pod="openstack/cinder-api-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.797782 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bb8b460d-7b22-4853-b592-ea61d203e5c1-etc-machine-id\") pod \"cinder-api-0\" (UID: \"bb8b460d-7b22-4853-b592-ea61d203e5c1\") " pod="openstack/cinder-api-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.811263 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bb8b460d-7b22-4853-b592-ea61d203e5c1-scripts\") pod \"cinder-api-0\" (UID: \"bb8b460d-7b22-4853-b592-ea61d203e5c1\") " pod="openstack/cinder-api-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.812030 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb8b460d-7b22-4853-b592-ea61d203e5c1-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"bb8b460d-7b22-4853-b592-ea61d203e5c1\") " pod="openstack/cinder-api-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.818587 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/bb8b460d-7b22-4853-b592-ea61d203e5c1-config-data-custom\") pod \"cinder-api-0\" (UID: \"bb8b460d-7b22-4853-b592-ea61d203e5c1\") " pod="openstack/cinder-api-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.827599 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cz94\" (UniqueName: \"kubernetes.io/projected/bb8b460d-7b22-4853-b592-ea61d203e5c1-kube-api-access-7cz94\") pod \"cinder-api-0\" (UID: \"bb8b460d-7b22-4853-b592-ea61d203e5c1\") " pod="openstack/cinder-api-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.837748 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb8b460d-7b22-4853-b592-ea61d203e5c1-public-tls-certs\") pod \"cinder-api-0\" (UID: \"bb8b460d-7b22-4853-b592-ea61d203e5c1\") " pod="openstack/cinder-api-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.838473 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb8b460d-7b22-4853-b592-ea61d203e5c1-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"bb8b460d-7b22-4853-b592-ea61d203e5c1\") " pod="openstack/cinder-api-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.851054 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.855578 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb8b460d-7b22-4853-b592-ea61d203e5c1-config-data\") pod \"cinder-api-0\" (UID: \"bb8b460d-7b22-4853-b592-ea61d203e5c1\") " pod="openstack/cinder-api-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.864707 4786 scope.go:117] "RemoveContainer" containerID="f9b3db817f9a6f0a23a17815b42d0b650b8fb72c713dfcd4b59888f0fb7c08c7" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.894782 
4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3-run-httpd\") pod \"ceilometer-0\" (UID: \"2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3\") " pod="openstack/ceilometer-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.894909 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk4wc\" (UniqueName: \"kubernetes.io/projected/2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3-kube-api-access-xk4wc\") pod \"ceilometer-0\" (UID: \"2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3\") " pod="openstack/ceilometer-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.894997 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3dbe789f-f7cf-43e1-8320-51a882827757-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3dbe789f-f7cf-43e1-8320-51a882827757\") " pod="openstack/glance-default-external-api-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.895032 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3dbe789f-f7cf-43e1-8320-51a882827757-scripts\") pod \"glance-default-external-api-0\" (UID: \"3dbe789f-f7cf-43e1-8320-51a882827757\") " pod="openstack/glance-default-external-api-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.895075 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3-scripts\") pod \"ceilometer-0\" (UID: \"2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3\") " pod="openstack/ceilometer-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.895098 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"3dbe789f-f7cf-43e1-8320-51a882827757\") " pod="openstack/glance-default-external-api-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.895131 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3\") " pod="openstack/ceilometer-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.895179 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3\") " pod="openstack/ceilometer-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.895234 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dbe789f-f7cf-43e1-8320-51a882827757-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3dbe789f-f7cf-43e1-8320-51a882827757\") " pod="openstack/glance-default-external-api-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.895300 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3dbe789f-f7cf-43e1-8320-51a882827757-config-data\") pod \"glance-default-external-api-0\" (UID: \"3dbe789f-f7cf-43e1-8320-51a882827757\") " pod="openstack/glance-default-external-api-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.895332 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcrkt\" (UniqueName: \"kubernetes.io/projected/3dbe789f-f7cf-43e1-8320-51a882827757-kube-api-access-mcrkt\") pod 
\"glance-default-external-api-0\" (UID: \"3dbe789f-f7cf-43e1-8320-51a882827757\") " pod="openstack/glance-default-external-api-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.895419 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3-config-data\") pod \"ceilometer-0\" (UID: \"2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3\") " pod="openstack/ceilometer-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.895471 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3-log-httpd\") pod \"ceilometer-0\" (UID: \"2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3\") " pod="openstack/ceilometer-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.895501 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3dbe789f-f7cf-43e1-8320-51a882827757-logs\") pod \"glance-default-external-api-0\" (UID: \"3dbe789f-f7cf-43e1-8320-51a882827757\") " pod="openstack/glance-default-external-api-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.896093 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3dbe789f-f7cf-43e1-8320-51a882827757-logs\") pod \"glance-default-external-api-0\" (UID: \"3dbe789f-f7cf-43e1-8320-51a882827757\") " pod="openstack/glance-default-external-api-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.896526 4786 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"3dbe789f-f7cf-43e1-8320-51a882827757\") device mount path \"/mnt/openstack/pv11\"" 
pod="openstack/glance-default-external-api-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.898947 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3dbe789f-f7cf-43e1-8320-51a882827757-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"3dbe789f-f7cf-43e1-8320-51a882827757\") " pod="openstack/glance-default-external-api-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.901161 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3dbe789f-f7cf-43e1-8320-51a882827757-scripts\") pod \"glance-default-external-api-0\" (UID: \"3dbe789f-f7cf-43e1-8320-51a882827757\") " pod="openstack/glance-default-external-api-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.903520 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3dbe789f-f7cf-43e1-8320-51a882827757-config-data\") pod \"glance-default-external-api-0\" (UID: \"3dbe789f-f7cf-43e1-8320-51a882827757\") " pod="openstack/glance-default-external-api-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.904538 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.922352 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dbe789f-f7cf-43e1-8320-51a882827757-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"3dbe789f-f7cf-43e1-8320-51a882827757\") " pod="openstack/glance-default-external-api-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.929331 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.936306 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcrkt\" (UniqueName: \"kubernetes.io/projected/3dbe789f-f7cf-43e1-8320-51a882827757-kube-api-access-mcrkt\") pod \"glance-default-external-api-0\" (UID: \"3dbe789f-f7cf-43e1-8320-51a882827757\") " pod="openstack/glance-default-external-api-0" Dec 09 09:07:48 crc kubenswrapper[4786]: I1209 09:07:48.943556 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"3dbe789f-f7cf-43e1-8320-51a882827757\") " pod="openstack/glance-default-external-api-0" Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.000146 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3-scripts\") pod \"ceilometer-0\" (UID: \"2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3\") " pod="openstack/ceilometer-0" Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.000717 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3\") " pod="openstack/ceilometer-0" Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.000788 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3\") " pod="openstack/ceilometer-0" Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.000985 4786 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3-config-data\") pod \"ceilometer-0\" (UID: \"2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3\") " pod="openstack/ceilometer-0" Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.001037 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3-log-httpd\") pod \"ceilometer-0\" (UID: \"2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3\") " pod="openstack/ceilometer-0" Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.001106 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3-run-httpd\") pod \"ceilometer-0\" (UID: \"2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3\") " pod="openstack/ceilometer-0" Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.001165 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xk4wc\" (UniqueName: \"kubernetes.io/projected/2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3-kube-api-access-xk4wc\") pod \"ceilometer-0\" (UID: \"2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3\") " pod="openstack/ceilometer-0" Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.002802 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3-log-httpd\") pod \"ceilometer-0\" (UID: \"2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3\") " pod="openstack/ceilometer-0" Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.003271 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3-run-httpd\") pod \"ceilometer-0\" (UID: \"2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3\") " pod="openstack/ceilometer-0" Dec 09 09:07:49 crc 
kubenswrapper[4786]: I1209 09:07:49.010315 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3\") " pod="openstack/ceilometer-0" Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.013599 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.015956 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.021557 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.023202 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3\") " pod="openstack/ceilometer-0" Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.023369 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3-config-data\") pod \"ceilometer-0\" (UID: \"2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3\") " pod="openstack/ceilometer-0" Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.023773 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3-scripts\") pod \"ceilometer-0\" (UID: \"2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3\") " pod="openstack/ceilometer-0" Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.044463 4786 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.069103 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xk4wc\" (UniqueName: \"kubernetes.io/projected/2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3-kube-api-access-xk4wc\") pod \"ceilometer-0\" (UID: \"2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3\") " pod="openstack/ceilometer-0" Dec 09 09:07:49 crc kubenswrapper[4786]: E1209 09:07:49.072766 4786 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf678c744_2a47_4f18_8e01_6f438a6e46e5.slice/crio-1376f5bcc4ba1485a13800089dca49bd3790af9966b24e34250fdedf59967039.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6046a22f_fd23_407b_a9ae_0d02d4b41170.slice/crio-conmon-3fca5a3c4b48fce183f2478c014be97407825de1a0b45566a66a5e88a41e032f.scope\": RecentStats: unable to find data in memory cache]" Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.182834 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.207636 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/832b0e68-172e-4166-9826-3d67c426c7e8-logs\") pod \"glance-default-internal-api-0\" (UID: \"832b0e68-172e-4166-9826-3d67c426c7e8\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.207669 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/832b0e68-172e-4166-9826-3d67c426c7e8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"832b0e68-172e-4166-9826-3d67c426c7e8\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.207704 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/832b0e68-172e-4166-9826-3d67c426c7e8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"832b0e68-172e-4166-9826-3d67c426c7e8\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.207741 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"832b0e68-172e-4166-9826-3d67c426c7e8\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.207762 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/832b0e68-172e-4166-9826-3d67c426c7e8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: 
\"832b0e68-172e-4166-9826-3d67c426c7e8\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.207822 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9mth\" (UniqueName: \"kubernetes.io/projected/832b0e68-172e-4166-9826-3d67c426c7e8-kube-api-access-h9mth\") pod \"glance-default-internal-api-0\" (UID: \"832b0e68-172e-4166-9826-3d67c426c7e8\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.207850 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/832b0e68-172e-4166-9826-3d67c426c7e8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"832b0e68-172e-4166-9826-3d67c426c7e8\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.213315 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.216581 4786 generic.go:334] "Generic (PLEG): container finished" podID="87bf6602-f74b-49e8-874d-1ea9bbf6ec7b" containerID="b688abd27ed49f3e9e0ed18ab7b5301653b7cb77b1cf9523c49e0e295b883d6d" exitCode=0 Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.223439 4786 scope.go:117] "RemoveContainer" containerID="6e9b40d63696eb904a5ceffef24023ccf24c79b7731741fcb00d6eab5d09724d" Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.228544 4786 generic.go:334] "Generic (PLEG): container finished" podID="6046a22f-fd23-407b-a9ae-0d02d4b41170" containerID="3fca5a3c4b48fce183f2478c014be97407825de1a0b45566a66a5e88a41e032f" exitCode=0 Dec 09 09:07:49 crc kubenswrapper[4786]: W1209 09:07:49.239532 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod954dc35f_7a55_408f_8d7c_2ad4420dc25e.slice/crio-9ce70ae7e098e2ad97669dd5ad4c52ef4b02a8fee9772680ea6c61836f2f60c9 WatchSource:0}: Error finding container 9ce70ae7e098e2ad97669dd5ad4c52ef4b02a8fee9772680ea6c61836f2f60c9: Status 404 returned error can't find the container with id 9ce70ae7e098e2ad97669dd5ad4c52ef4b02a8fee9772680ea6c61836f2f60c9 Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.245617 4786 generic.go:334] "Generic (PLEG): container finished" podID="f678c744-2a47-4f18-8e01-6f438a6e46e5" containerID="1376f5bcc4ba1485a13800089dca49bd3790af9966b24e34250fdedf59967039" exitCode=0 Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.246107 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9b641af-aaf9-493f-b738-89b62abb3e95" path="/var/lib/kubelet/pods/b9b641af-aaf9-493f-b738-89b62abb3e95/volumes" Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.330061 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/832b0e68-172e-4166-9826-3d67c426c7e8-logs\") pod \"glance-default-internal-api-0\" (UID: \"832b0e68-172e-4166-9826-3d67c426c7e8\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.331354 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/832b0e68-172e-4166-9826-3d67c426c7e8-logs\") pod \"glance-default-internal-api-0\" (UID: \"832b0e68-172e-4166-9826-3d67c426c7e8\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.330572 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/832b0e68-172e-4166-9826-3d67c426c7e8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"832b0e68-172e-4166-9826-3d67c426c7e8\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.344344 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fcc566bf9-5h2pv" Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.373758 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3e17544-6307-4a46-b381-34744a99cbb5" path="/var/lib/kubelet/pods/e3e17544-6307-4a46-b381-34744a99cbb5/volumes" Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.377467 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/832b0e68-172e-4166-9826-3d67c426c7e8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"832b0e68-172e-4166-9826-3d67c426c7e8\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.377637 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"832b0e68-172e-4166-9826-3d67c426c7e8\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.377694 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/832b0e68-172e-4166-9826-3d67c426c7e8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"832b0e68-172e-4166-9826-3d67c426c7e8\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.378005 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9mth\" (UniqueName: \"kubernetes.io/projected/832b0e68-172e-4166-9826-3d67c426c7e8-kube-api-access-h9mth\") pod \"glance-default-internal-api-0\" (UID: \"832b0e68-172e-4166-9826-3d67c426c7e8\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.378110 4786 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/832b0e68-172e-4166-9826-3d67c426c7e8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"832b0e68-172e-4166-9826-3d67c426c7e8\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.384690 4786 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"832b0e68-172e-4166-9826-3d67c426c7e8\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-internal-api-0" Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.386449 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/832b0e68-172e-4166-9826-3d67c426c7e8-config-data\") pod \"glance-default-internal-api-0\" (UID: \"832b0e68-172e-4166-9826-3d67c426c7e8\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.387343 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/832b0e68-172e-4166-9826-3d67c426c7e8-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"832b0e68-172e-4166-9826-3d67c426c7e8\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.395638 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/832b0e68-172e-4166-9826-3d67c426c7e8-scripts\") pod \"glance-default-internal-api-0\" (UID: \"832b0e68-172e-4166-9826-3d67c426c7e8\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.399940 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3" 
path="/var/lib/kubelet/pods/e43cd47d-9fbb-4d30-8c38-20b2d42f1dd3/volumes" Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.400730 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/832b0e68-172e-4166-9826-3d67c426c7e8-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"832b0e68-172e-4166-9826-3d67c426c7e8\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.408138 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ed942767-e79e-40d4-ab0e-b47aff280e56" path="/var/lib/kubelet/pods/ed942767-e79e-40d4-ab0e-b47aff280e56/volumes" Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.409249 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-mxxjn" event={"ID":"87bf6602-f74b-49e8-874d-1ea9bbf6ec7b","Type":"ContainerDied","Data":"b688abd27ed49f3e9e0ed18ab7b5301653b7cb77b1cf9523c49e0e295b883d6d"} Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.409300 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f47965bdf-2nd22"] Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.409322 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-bvc2g" event={"ID":"6046a22f-fd23-407b-a9ae-0d02d4b41170","Type":"ContainerDied","Data":"3fca5a3c4b48fce183f2478c014be97407825de1a0b45566a66a5e88a41e032f"} Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.409349 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mzjbq" event={"ID":"f678c744-2a47-4f18-8e01-6f438a6e46e5","Type":"ContainerDied","Data":"1376f5bcc4ba1485a13800089dca49bd3790af9966b24e34250fdedf59967039"} Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.409377 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-55bfbb9895-lchg7" 
event={"ID":"025e29a5-c1a7-46fe-a47d-4b3248fd6320","Type":"ContainerStarted","Data":"b51ca9a934be79f61278cfb5902ffbec3430e0473d7dbd84c7abc847ea4cddc1"} Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.459734 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9mth\" (UniqueName: \"kubernetes.io/projected/832b0e68-172e-4166-9826-3d67c426c7e8-kube-api-access-h9mth\") pod \"glance-default-internal-api-0\" (UID: \"832b0e68-172e-4166-9826-3d67c426c7e8\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.518664 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fcc566bf9-5h2pv"] Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.533729 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"832b0e68-172e-4166-9826-3d67c426c7e8\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.560801 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5fcc566bf9-5h2pv"] Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.580449 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6999f7dcbd-6rspb"] Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.767051 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.814488 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 09 09:07:49 crc kubenswrapper[4786]: I1209 09:07:49.983078 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 09 09:07:50 crc kubenswrapper[4786]: I1209 09:07:50.429575 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0afc7d13-3b0f-4919-ab29-4d328c815a8a","Type":"ContainerStarted","Data":"885d32de09775b45d17d809a0a21bdf762d8c0b1458596fc1574bed23a71a1fe"} Dec 09 09:07:50 crc kubenswrapper[4786]: I1209 09:07:50.462307 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 09:07:50 crc kubenswrapper[4786]: I1209 09:07:50.465469 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f47965bdf-2nd22" event={"ID":"954dc35f-7a55-408f-8d7c-2ad4420dc25e","Type":"ContainerStarted","Data":"00a82cb8a36ac9c2e30f1d9061976704a3f694fde3862de83c5d46abc9f3d3b2"} Dec 09 09:07:50 crc kubenswrapper[4786]: I1209 09:07:50.465548 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f47965bdf-2nd22" event={"ID":"954dc35f-7a55-408f-8d7c-2ad4420dc25e","Type":"ContainerStarted","Data":"9ce70ae7e098e2ad97669dd5ad4c52ef4b02a8fee9772680ea6c61836f2f60c9"} Dec 09 09:07:50 crc kubenswrapper[4786]: I1209 09:07:50.524396 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-55bfbb9895-lchg7" event={"ID":"025e29a5-c1a7-46fe-a47d-4b3248fd6320","Type":"ContainerStarted","Data":"1ef8a47e4af1465e3135711a3c8a7cd4afccc06e56a36d49327b56cd2e38d40b"} Dec 09 09:07:50 crc kubenswrapper[4786]: I1209 09:07:50.525560 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-55bfbb9895-lchg7" Dec 09 09:07:50 crc kubenswrapper[4786]: I1209 09:07:50.525629 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-55bfbb9895-lchg7" Dec 09 09:07:50 crc 
kubenswrapper[4786]: I1209 09:07:50.546881 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6999f7dcbd-6rspb" event={"ID":"a1bf94f1-1c69-4999-bd4f-202175e2df7c","Type":"ContainerStarted","Data":"26706c9750c31b72d5e33ecbc4c034eefab9794c34ae915f8a1144b7bf3783eb"} Dec 09 09:07:50 crc kubenswrapper[4786]: I1209 09:07:50.574894 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bb8b460d-7b22-4853-b592-ea61d203e5c1","Type":"ContainerStarted","Data":"df415c2905854e0b9e866d7341c6277728c1650a1182ecb8232d6ae02c2a4d8f"} Dec 09 09:07:50 crc kubenswrapper[4786]: I1209 09:07:50.620234 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-55bfbb9895-lchg7" podStartSLOduration=17.620194107 podStartE2EDuration="17.620194107s" podCreationTimestamp="2025-12-09 09:07:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:07:50.568474782 +0000 UTC m=+1436.452096018" watchObservedRunningTime="2025-12-09 09:07:50.620194107 +0000 UTC m=+1436.503815333" Dec 09 09:07:50 crc kubenswrapper[4786]: I1209 09:07:50.766669 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 09:07:51 crc kubenswrapper[4786]: I1209 09:07:51.263721 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 09:07:51 crc kubenswrapper[4786]: I1209 09:07:51.607534 4786 generic.go:334] "Generic (PLEG): container finished" podID="954dc35f-7a55-408f-8d7c-2ad4420dc25e" containerID="00a82cb8a36ac9c2e30f1d9061976704a3f694fde3862de83c5d46abc9f3d3b2" exitCode=0 Dec 09 09:07:51 crc kubenswrapper[4786]: I1209 09:07:51.607622 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f47965bdf-2nd22" 
event={"ID":"954dc35f-7a55-408f-8d7c-2ad4420dc25e","Type":"ContainerDied","Data":"00a82cb8a36ac9c2e30f1d9061976704a3f694fde3862de83c5d46abc9f3d3b2"} Dec 09 09:07:51 crc kubenswrapper[4786]: I1209 09:07:51.609099 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3","Type":"ContainerStarted","Data":"a7c45656cb1234dd14f2e3bf31cb8dce7f4e11ff7292be081489fd34d642eb33"} Dec 09 09:07:51 crc kubenswrapper[4786]: I1209 09:07:51.611410 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6999f7dcbd-6rspb" event={"ID":"a1bf94f1-1c69-4999-bd4f-202175e2df7c","Type":"ContainerStarted","Data":"731f594f63c89d6272835af9db187e38dbfe6acb7626292e604c93812eed4fcf"} Dec 09 09:07:51 crc kubenswrapper[4786]: I1209 09:07:51.623525 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"832b0e68-172e-4166-9826-3d67c426c7e8","Type":"ContainerStarted","Data":"5eefe8d116ff5efe35e8c489d1eae0d0b51741ab12add32daa040c3dbfb15fd8"} Dec 09 09:07:51 crc kubenswrapper[4786]: I1209 09:07:51.652857 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3dbe789f-f7cf-43e1-8320-51a882827757","Type":"ContainerStarted","Data":"04bca1ea63b45d8507a092078a9fb150f841c59fa327575b5e39e916c670d8de"} Dec 09 09:07:51 crc kubenswrapper[4786]: I1209 09:07:51.776120 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-mzjbq" Dec 09 09:07:51 crc kubenswrapper[4786]: I1209 09:07:51.894161 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wswgn\" (UniqueName: \"kubernetes.io/projected/f678c744-2a47-4f18-8e01-6f438a6e46e5-kube-api-access-wswgn\") pod \"f678c744-2a47-4f18-8e01-6f438a6e46e5\" (UID: \"f678c744-2a47-4f18-8e01-6f438a6e46e5\") " Dec 09 09:07:51 crc kubenswrapper[4786]: I1209 09:07:51.925893 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f678c744-2a47-4f18-8e01-6f438a6e46e5-kube-api-access-wswgn" (OuterVolumeSpecName: "kube-api-access-wswgn") pod "f678c744-2a47-4f18-8e01-6f438a6e46e5" (UID: "f678c744-2a47-4f18-8e01-6f438a6e46e5"). InnerVolumeSpecName "kube-api-access-wswgn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:07:52 crc kubenswrapper[4786]: I1209 09:07:52.023131 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wswgn\" (UniqueName: \"kubernetes.io/projected/f678c744-2a47-4f18-8e01-6f438a6e46e5-kube-api-access-wswgn\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:52 crc kubenswrapper[4786]: I1209 09:07:52.189174 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-bvc2g" Dec 09 09:07:52 crc kubenswrapper[4786]: I1209 09:07:52.228730 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-mxxjn" Dec 09 09:07:52 crc kubenswrapper[4786]: I1209 09:07:52.331741 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcxkf\" (UniqueName: \"kubernetes.io/projected/87bf6602-f74b-49e8-874d-1ea9bbf6ec7b-kube-api-access-xcxkf\") pod \"87bf6602-f74b-49e8-874d-1ea9bbf6ec7b\" (UID: \"87bf6602-f74b-49e8-874d-1ea9bbf6ec7b\") " Dec 09 09:07:52 crc kubenswrapper[4786]: I1209 09:07:52.332038 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stszk\" (UniqueName: \"kubernetes.io/projected/6046a22f-fd23-407b-a9ae-0d02d4b41170-kube-api-access-stszk\") pod \"6046a22f-fd23-407b-a9ae-0d02d4b41170\" (UID: \"6046a22f-fd23-407b-a9ae-0d02d4b41170\") " Dec 09 09:07:52 crc kubenswrapper[4786]: I1209 09:07:52.367691 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6046a22f-fd23-407b-a9ae-0d02d4b41170-kube-api-access-stszk" (OuterVolumeSpecName: "kube-api-access-stszk") pod "6046a22f-fd23-407b-a9ae-0d02d4b41170" (UID: "6046a22f-fd23-407b-a9ae-0d02d4b41170"). InnerVolumeSpecName "kube-api-access-stszk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:07:52 crc kubenswrapper[4786]: I1209 09:07:52.368030 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87bf6602-f74b-49e8-874d-1ea9bbf6ec7b-kube-api-access-xcxkf" (OuterVolumeSpecName: "kube-api-access-xcxkf") pod "87bf6602-f74b-49e8-874d-1ea9bbf6ec7b" (UID: "87bf6602-f74b-49e8-874d-1ea9bbf6ec7b"). InnerVolumeSpecName "kube-api-access-xcxkf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:07:52 crc kubenswrapper[4786]: I1209 09:07:52.435174 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcxkf\" (UniqueName: \"kubernetes.io/projected/87bf6602-f74b-49e8-874d-1ea9bbf6ec7b-kube-api-access-xcxkf\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:52 crc kubenswrapper[4786]: I1209 09:07:52.435227 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stszk\" (UniqueName: \"kubernetes.io/projected/6046a22f-fd23-407b-a9ae-0d02d4b41170-kube-api-access-stszk\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:52 crc kubenswrapper[4786]: I1209 09:07:52.692070 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-bvc2g" event={"ID":"6046a22f-fd23-407b-a9ae-0d02d4b41170","Type":"ContainerDied","Data":"27d3cb048b6c4b797dda9591d01867d72aed8f71a34280d11ef365ad96675254"} Dec 09 09:07:52 crc kubenswrapper[4786]: I1209 09:07:52.692535 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27d3cb048b6c4b797dda9591d01867d72aed8f71a34280d11ef365ad96675254" Dec 09 09:07:52 crc kubenswrapper[4786]: I1209 09:07:52.692629 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-bvc2g" Dec 09 09:07:52 crc kubenswrapper[4786]: I1209 09:07:52.710931 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0afc7d13-3b0f-4919-ab29-4d328c815a8a","Type":"ContainerStarted","Data":"6bbb6bcbaf729ab6d90b998797ce76eb408f7fdf7b98864475fe1529a742cb80"} Dec 09 09:07:52 crc kubenswrapper[4786]: I1209 09:07:52.723585 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mzjbq" event={"ID":"f678c744-2a47-4f18-8e01-6f438a6e46e5","Type":"ContainerDied","Data":"542f882362c9bb6fc5a23b2a35c6949d67f6ee570a4475e624a8a8c8a771e29b"} Dec 09 09:07:52 crc kubenswrapper[4786]: I1209 09:07:52.723626 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="542f882362c9bb6fc5a23b2a35c6949d67f6ee570a4475e624a8a8c8a771e29b" Dec 09 09:07:52 crc kubenswrapper[4786]: I1209 09:07:52.723712 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-mzjbq" Dec 09 09:07:52 crc kubenswrapper[4786]: I1209 09:07:52.740491 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 09:07:52 crc kubenswrapper[4786]: I1209 09:07:52.775800 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f47965bdf-2nd22" event={"ID":"954dc35f-7a55-408f-8d7c-2ad4420dc25e","Type":"ContainerStarted","Data":"d83a175f4722f25f24ffb6a111c27d1a69c5788ba89b2c1fc8e9d4037ef46177"} Dec 09 09:07:52 crc kubenswrapper[4786]: I1209 09:07:52.777383 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f47965bdf-2nd22" Dec 09 09:07:52 crc kubenswrapper[4786]: I1209 09:07:52.789362 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3","Type":"ContainerStarted","Data":"2f570d5e84493a0401b0d7c3f3ce99a42a45a16fc366e187e7632a5018149cdc"} Dec 09 09:07:52 crc kubenswrapper[4786]: I1209 09:07:52.814058 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6999f7dcbd-6rspb" event={"ID":"a1bf94f1-1c69-4999-bd4f-202175e2df7c","Type":"ContainerStarted","Data":"13c73ec9d7d33cf8aa6bb779ab661219e2eb8f59dd20c1917642a0c98ae7058b"} Dec 09 09:07:52 crc kubenswrapper[4786]: I1209 09:07:52.814601 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6999f7dcbd-6rspb" Dec 09 09:07:52 crc kubenswrapper[4786]: I1209 09:07:52.815294 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bb8b460d-7b22-4853-b592-ea61d203e5c1","Type":"ContainerStarted","Data":"9eedd3aeda5eefbb2469e701f0fa377126242f39c16f5593b86ee3b53a94eb8e"} Dec 09 09:07:52 crc kubenswrapper[4786]: I1209 09:07:52.816440 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-mxxjn" 
event={"ID":"87bf6602-f74b-49e8-874d-1ea9bbf6ec7b","Type":"ContainerDied","Data":"5392d99607be9caa3dcca56e79355f5d3f4b52912a875e57dd3e8b9183e39e94"} Dec 09 09:07:52 crc kubenswrapper[4786]: I1209 09:07:52.816459 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5392d99607be9caa3dcca56e79355f5d3f4b52912a875e57dd3e8b9183e39e94" Dec 09 09:07:52 crc kubenswrapper[4786]: I1209 09:07:52.816514 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-mxxjn" Dec 09 09:07:52 crc kubenswrapper[4786]: I1209 09:07:52.873786 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f47965bdf-2nd22" podStartSLOduration=5.873756241 podStartE2EDuration="5.873756241s" podCreationTimestamp="2025-12-09 09:07:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:07:52.811694819 +0000 UTC m=+1438.695316045" watchObservedRunningTime="2025-12-09 09:07:52.873756241 +0000 UTC m=+1438.757377467" Dec 09 09:07:52 crc kubenswrapper[4786]: I1209 09:07:52.919585 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 09:07:52 crc kubenswrapper[4786]: I1209 09:07:52.957939 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6999f7dcbd-6rspb" podStartSLOduration=5.957911372 podStartE2EDuration="5.957911372s" podCreationTimestamp="2025-12-09 09:07:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:07:52.8781573 +0000 UTC m=+1438.761778536" watchObservedRunningTime="2025-12-09 09:07:52.957911372 +0000 UTC m=+1438.841532608" Dec 09 09:07:53 crc kubenswrapper[4786]: I1209 09:07:53.801674 4786 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/neutron-5cb696f46f-55kzl"] Dec 09 09:07:53 crc kubenswrapper[4786]: E1209 09:07:53.803221 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6046a22f-fd23-407b-a9ae-0d02d4b41170" containerName="mariadb-database-create" Dec 09 09:07:53 crc kubenswrapper[4786]: I1209 09:07:53.803243 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="6046a22f-fd23-407b-a9ae-0d02d4b41170" containerName="mariadb-database-create" Dec 09 09:07:53 crc kubenswrapper[4786]: E1209 09:07:53.803264 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87bf6602-f74b-49e8-874d-1ea9bbf6ec7b" containerName="mariadb-database-create" Dec 09 09:07:53 crc kubenswrapper[4786]: I1209 09:07:53.803273 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="87bf6602-f74b-49e8-874d-1ea9bbf6ec7b" containerName="mariadb-database-create" Dec 09 09:07:53 crc kubenswrapper[4786]: E1209 09:07:53.803284 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f678c744-2a47-4f18-8e01-6f438a6e46e5" containerName="mariadb-database-create" Dec 09 09:07:53 crc kubenswrapper[4786]: I1209 09:07:53.803291 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="f678c744-2a47-4f18-8e01-6f438a6e46e5" containerName="mariadb-database-create" Dec 09 09:07:53 crc kubenswrapper[4786]: I1209 09:07:53.805199 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="87bf6602-f74b-49e8-874d-1ea9bbf6ec7b" containerName="mariadb-database-create" Dec 09 09:07:53 crc kubenswrapper[4786]: I1209 09:07:53.805240 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="f678c744-2a47-4f18-8e01-6f438a6e46e5" containerName="mariadb-database-create" Dec 09 09:07:53 crc kubenswrapper[4786]: I1209 09:07:53.805254 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="6046a22f-fd23-407b-a9ae-0d02d4b41170" containerName="mariadb-database-create" Dec 09 09:07:53 crc kubenswrapper[4786]: I1209 09:07:53.809015 4786 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5cb696f46f-55kzl" Dec 09 09:07:53 crc kubenswrapper[4786]: I1209 09:07:53.814692 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Dec 09 09:07:53 crc kubenswrapper[4786]: I1209 09:07:53.814974 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Dec 09 09:07:53 crc kubenswrapper[4786]: I1209 09:07:53.841817 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5cb696f46f-55kzl"] Dec 09 09:07:53 crc kubenswrapper[4786]: I1209 09:07:53.886706 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3dbe789f-f7cf-43e1-8320-51a882827757","Type":"ContainerStarted","Data":"afed836e9d25156d79b3c732d16912dd92339208b7cbcda98e3fb81ad374887e"} Dec 09 09:07:53 crc kubenswrapper[4786]: I1209 09:07:53.893848 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3","Type":"ContainerStarted","Data":"c4e3a148eafa8575594134c9c8dfb7978cb0409fa60a045da301bbe88bb9f7fe"} Dec 09 09:07:53 crc kubenswrapper[4786]: I1209 09:07:53.901798 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"832b0e68-172e-4166-9826-3d67c426c7e8","Type":"ContainerStarted","Data":"e8a204eacd1658e477dfa9def7e5665f3b6ad0e5ecbcb01f32eb9c06535ace3b"} Dec 09 09:07:54 crc kubenswrapper[4786]: I1209 09:07:54.005270 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d41935c7-99f8-4d52-b0f4-691563bea9ee-config\") pod \"neutron-5cb696f46f-55kzl\" (UID: \"d41935c7-99f8-4d52-b0f4-691563bea9ee\") " pod="openstack/neutron-5cb696f46f-55kzl" Dec 09 09:07:54 crc kubenswrapper[4786]: I1209 09:07:54.005414 4786 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d41935c7-99f8-4d52-b0f4-691563bea9ee-internal-tls-certs\") pod \"neutron-5cb696f46f-55kzl\" (UID: \"d41935c7-99f8-4d52-b0f4-691563bea9ee\") " pod="openstack/neutron-5cb696f46f-55kzl" Dec 09 09:07:54 crc kubenswrapper[4786]: I1209 09:07:54.005532 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fjqb\" (UniqueName: \"kubernetes.io/projected/d41935c7-99f8-4d52-b0f4-691563bea9ee-kube-api-access-7fjqb\") pod \"neutron-5cb696f46f-55kzl\" (UID: \"d41935c7-99f8-4d52-b0f4-691563bea9ee\") " pod="openstack/neutron-5cb696f46f-55kzl" Dec 09 09:07:54 crc kubenswrapper[4786]: I1209 09:07:54.005583 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d41935c7-99f8-4d52-b0f4-691563bea9ee-combined-ca-bundle\") pod \"neutron-5cb696f46f-55kzl\" (UID: \"d41935c7-99f8-4d52-b0f4-691563bea9ee\") " pod="openstack/neutron-5cb696f46f-55kzl" Dec 09 09:07:54 crc kubenswrapper[4786]: I1209 09:07:54.005634 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d41935c7-99f8-4d52-b0f4-691563bea9ee-ovndb-tls-certs\") pod \"neutron-5cb696f46f-55kzl\" (UID: \"d41935c7-99f8-4d52-b0f4-691563bea9ee\") " pod="openstack/neutron-5cb696f46f-55kzl" Dec 09 09:07:54 crc kubenswrapper[4786]: I1209 09:07:54.005668 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d41935c7-99f8-4d52-b0f4-691563bea9ee-public-tls-certs\") pod \"neutron-5cb696f46f-55kzl\" (UID: \"d41935c7-99f8-4d52-b0f4-691563bea9ee\") " pod="openstack/neutron-5cb696f46f-55kzl" Dec 09 09:07:54 crc kubenswrapper[4786]: I1209 09:07:54.005695 4786 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d41935c7-99f8-4d52-b0f4-691563bea9ee-httpd-config\") pod \"neutron-5cb696f46f-55kzl\" (UID: \"d41935c7-99f8-4d52-b0f4-691563bea9ee\") " pod="openstack/neutron-5cb696f46f-55kzl" Dec 09 09:07:54 crc kubenswrapper[4786]: I1209 09:07:54.046748 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-55bfbb9895-lchg7" Dec 09 09:07:54 crc kubenswrapper[4786]: I1209 09:07:54.051059 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-55bfbb9895-lchg7" Dec 09 09:07:54 crc kubenswrapper[4786]: I1209 09:07:54.109507 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d41935c7-99f8-4d52-b0f4-691563bea9ee-ovndb-tls-certs\") pod \"neutron-5cb696f46f-55kzl\" (UID: \"d41935c7-99f8-4d52-b0f4-691563bea9ee\") " pod="openstack/neutron-5cb696f46f-55kzl" Dec 09 09:07:54 crc kubenswrapper[4786]: I1209 09:07:54.109558 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d41935c7-99f8-4d52-b0f4-691563bea9ee-public-tls-certs\") pod \"neutron-5cb696f46f-55kzl\" (UID: \"d41935c7-99f8-4d52-b0f4-691563bea9ee\") " pod="openstack/neutron-5cb696f46f-55kzl" Dec 09 09:07:54 crc kubenswrapper[4786]: I1209 09:07:54.109586 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d41935c7-99f8-4d52-b0f4-691563bea9ee-httpd-config\") pod \"neutron-5cb696f46f-55kzl\" (UID: \"d41935c7-99f8-4d52-b0f4-691563bea9ee\") " pod="openstack/neutron-5cb696f46f-55kzl" Dec 09 09:07:54 crc kubenswrapper[4786]: I1209 09:07:54.109682 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/d41935c7-99f8-4d52-b0f4-691563bea9ee-config\") pod \"neutron-5cb696f46f-55kzl\" (UID: \"d41935c7-99f8-4d52-b0f4-691563bea9ee\") " pod="openstack/neutron-5cb696f46f-55kzl" Dec 09 09:07:54 crc kubenswrapper[4786]: I1209 09:07:54.109725 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d41935c7-99f8-4d52-b0f4-691563bea9ee-internal-tls-certs\") pod \"neutron-5cb696f46f-55kzl\" (UID: \"d41935c7-99f8-4d52-b0f4-691563bea9ee\") " pod="openstack/neutron-5cb696f46f-55kzl" Dec 09 09:07:54 crc kubenswrapper[4786]: I1209 09:07:54.109759 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fjqb\" (UniqueName: \"kubernetes.io/projected/d41935c7-99f8-4d52-b0f4-691563bea9ee-kube-api-access-7fjqb\") pod \"neutron-5cb696f46f-55kzl\" (UID: \"d41935c7-99f8-4d52-b0f4-691563bea9ee\") " pod="openstack/neutron-5cb696f46f-55kzl" Dec 09 09:07:54 crc kubenswrapper[4786]: I1209 09:07:54.109794 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d41935c7-99f8-4d52-b0f4-691563bea9ee-combined-ca-bundle\") pod \"neutron-5cb696f46f-55kzl\" (UID: \"d41935c7-99f8-4d52-b0f4-691563bea9ee\") " pod="openstack/neutron-5cb696f46f-55kzl" Dec 09 09:07:54 crc kubenswrapper[4786]: I1209 09:07:54.123599 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d41935c7-99f8-4d52-b0f4-691563bea9ee-combined-ca-bundle\") pod \"neutron-5cb696f46f-55kzl\" (UID: \"d41935c7-99f8-4d52-b0f4-691563bea9ee\") " pod="openstack/neutron-5cb696f46f-55kzl" Dec 09 09:07:54 crc kubenswrapper[4786]: I1209 09:07:54.126284 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d41935c7-99f8-4d52-b0f4-691563bea9ee-httpd-config\") pod 
\"neutron-5cb696f46f-55kzl\" (UID: \"d41935c7-99f8-4d52-b0f4-691563bea9ee\") " pod="openstack/neutron-5cb696f46f-55kzl" Dec 09 09:07:54 crc kubenswrapper[4786]: I1209 09:07:54.134147 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d41935c7-99f8-4d52-b0f4-691563bea9ee-internal-tls-certs\") pod \"neutron-5cb696f46f-55kzl\" (UID: \"d41935c7-99f8-4d52-b0f4-691563bea9ee\") " pod="openstack/neutron-5cb696f46f-55kzl" Dec 09 09:07:54 crc kubenswrapper[4786]: I1209 09:07:54.141175 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d41935c7-99f8-4d52-b0f4-691563bea9ee-ovndb-tls-certs\") pod \"neutron-5cb696f46f-55kzl\" (UID: \"d41935c7-99f8-4d52-b0f4-691563bea9ee\") " pod="openstack/neutron-5cb696f46f-55kzl" Dec 09 09:07:54 crc kubenswrapper[4786]: I1209 09:07:54.141638 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d41935c7-99f8-4d52-b0f4-691563bea9ee-public-tls-certs\") pod \"neutron-5cb696f46f-55kzl\" (UID: \"d41935c7-99f8-4d52-b0f4-691563bea9ee\") " pod="openstack/neutron-5cb696f46f-55kzl" Dec 09 09:07:54 crc kubenswrapper[4786]: I1209 09:07:54.142365 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d41935c7-99f8-4d52-b0f4-691563bea9ee-config\") pod \"neutron-5cb696f46f-55kzl\" (UID: \"d41935c7-99f8-4d52-b0f4-691563bea9ee\") " pod="openstack/neutron-5cb696f46f-55kzl" Dec 09 09:07:54 crc kubenswrapper[4786]: I1209 09:07:54.151692 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fjqb\" (UniqueName: \"kubernetes.io/projected/d41935c7-99f8-4d52-b0f4-691563bea9ee-kube-api-access-7fjqb\") pod \"neutron-5cb696f46f-55kzl\" (UID: \"d41935c7-99f8-4d52-b0f4-691563bea9ee\") " pod="openstack/neutron-5cb696f46f-55kzl" Dec 09 
09:07:54 crc kubenswrapper[4786]: I1209 09:07:54.168898 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5cb696f46f-55kzl" Dec 09 09:07:54 crc kubenswrapper[4786]: I1209 09:07:54.978752 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"bb8b460d-7b22-4853-b592-ea61d203e5c1","Type":"ContainerStarted","Data":"430dd2a87f9054835b7ecf20b27df662c0a512116e59e2858b337d4e66edaaad"} Dec 09 09:07:54 crc kubenswrapper[4786]: I1209 09:07:54.981344 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 09 09:07:55 crc kubenswrapper[4786]: I1209 09:07:55.003856 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 09:07:55 crc kubenswrapper[4786]: I1209 09:07:55.003997 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 09:07:55 crc kubenswrapper[4786]: I1209 09:07:55.044410 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=7.044386645 podStartE2EDuration="7.044386645s" podCreationTimestamp="2025-12-09 09:07:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:07:55.024915262 +0000 UTC m=+1440.908536488" watchObservedRunningTime="2025-12-09 09:07:55.044386645 +0000 UTC m=+1440.928007871" Dec 09 09:07:55 crc kubenswrapper[4786]: I1209 09:07:55.044589 
4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"0afc7d13-3b0f-4919-ab29-4d328c815a8a","Type":"ContainerStarted","Data":"9cce1aca8aa4129c63f5df309d8679fb877e2e9d6da207eb646fe4df7b2a0c5c"} Dec 09 09:07:55 crc kubenswrapper[4786]: I1209 09:07:55.077446 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3","Type":"ContainerStarted","Data":"ef70436824bee911813610a6151b6dbcf0868630d687e5f993368f20b3a0f52e"} Dec 09 09:07:55 crc kubenswrapper[4786]: I1209 09:07:55.121398 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=7.121374599 podStartE2EDuration="7.121374599s" podCreationTimestamp="2025-12-09 09:07:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:07:55.112563089 +0000 UTC m=+1440.996184315" watchObservedRunningTime="2025-12-09 09:07:55.121374599 +0000 UTC m=+1441.004995825" Dec 09 09:07:55 crc kubenswrapper[4786]: W1209 09:07:55.319873 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd41935c7_99f8_4d52_b0f4_691563bea9ee.slice/crio-54f1ebc24d0132fea8a8a94aed66c2a5414180d4d3f4768f09b52bd9736933fc WatchSource:0}: Error finding container 54f1ebc24d0132fea8a8a94aed66c2a5414180d4d3f4768f09b52bd9736933fc: Status 404 returned error can't find the container with id 54f1ebc24d0132fea8a8a94aed66c2a5414180d4d3f4768f09b52bd9736933fc Dec 09 09:07:55 crc kubenswrapper[4786]: I1209 09:07:55.492076 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5cb696f46f-55kzl"] Dec 09 09:07:55 crc kubenswrapper[4786]: I1209 09:07:55.951382 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 09:07:56 crc kubenswrapper[4786]: I1209 
09:07:56.095299 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3dbe789f-f7cf-43e1-8320-51a882827757","Type":"ContainerStarted","Data":"fb17e551d4d23179103e65d434bd31a7c132578786a563644dbc83347f13211a"} Dec 09 09:07:56 crc kubenswrapper[4786]: I1209 09:07:56.095581 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="3dbe789f-f7cf-43e1-8320-51a882827757" containerName="glance-log" containerID="cri-o://afed836e9d25156d79b3c732d16912dd92339208b7cbcda98e3fb81ad374887e" gracePeriod=30 Dec 09 09:07:56 crc kubenswrapper[4786]: I1209 09:07:56.095676 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="3dbe789f-f7cf-43e1-8320-51a882827757" containerName="glance-httpd" containerID="cri-o://fb17e551d4d23179103e65d434bd31a7c132578786a563644dbc83347f13211a" gracePeriod=30 Dec 09 09:07:56 crc kubenswrapper[4786]: I1209 09:07:56.102201 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cb696f46f-55kzl" event={"ID":"d41935c7-99f8-4d52-b0f4-691563bea9ee","Type":"ContainerStarted","Data":"42149675c81b7770c8135b39206da79cadcdf21c2cefca4f9540cd26f8158ba0"} Dec 09 09:07:56 crc kubenswrapper[4786]: I1209 09:07:56.102260 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cb696f46f-55kzl" event={"ID":"d41935c7-99f8-4d52-b0f4-691563bea9ee","Type":"ContainerStarted","Data":"54f1ebc24d0132fea8a8a94aed66c2a5414180d4d3f4768f09b52bd9736933fc"} Dec 09 09:07:56 crc kubenswrapper[4786]: I1209 09:07:56.108851 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="832b0e68-172e-4166-9826-3d67c426c7e8" containerName="glance-log" containerID="cri-o://e8a204eacd1658e477dfa9def7e5665f3b6ad0e5ecbcb01f32eb9c06535ace3b" gracePeriod=30 Dec 09 09:07:56 crc 
kubenswrapper[4786]: I1209 09:07:56.109129 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"832b0e68-172e-4166-9826-3d67c426c7e8","Type":"ContainerStarted","Data":"f54a6793eca712ab4491c8ee4273b9e5aa2794e40c7d7331fea7649ae0c85895"} Dec 09 09:07:56 crc kubenswrapper[4786]: I1209 09:07:56.109739 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="832b0e68-172e-4166-9826-3d67c426c7e8" containerName="glance-httpd" containerID="cri-o://f54a6793eca712ab4491c8ee4273b9e5aa2794e40c7d7331fea7649ae0c85895" gracePeriod=30 Dec 09 09:07:56 crc kubenswrapper[4786]: I1209 09:07:56.132702 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=9.132680971 podStartE2EDuration="9.132680971s" podCreationTimestamp="2025-12-09 09:07:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:07:56.128981259 +0000 UTC m=+1442.012602485" watchObservedRunningTime="2025-12-09 09:07:56.132680971 +0000 UTC m=+1442.016302197" Dec 09 09:07:56 crc kubenswrapper[4786]: I1209 09:07:56.158635 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=9.158613855 podStartE2EDuration="9.158613855s" podCreationTimestamp="2025-12-09 09:07:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:07:56.152389081 +0000 UTC m=+1442.036010307" watchObservedRunningTime="2025-12-09 09:07:56.158613855 +0000 UTC m=+1442.042235081" Dec 09 09:07:57 crc kubenswrapper[4786]: I1209 09:07:57.126091 4786 generic.go:334] "Generic (PLEG): container finished" podID="832b0e68-172e-4166-9826-3d67c426c7e8" 
containerID="f54a6793eca712ab4491c8ee4273b9e5aa2794e40c7d7331fea7649ae0c85895" exitCode=0 Dec 09 09:07:57 crc kubenswrapper[4786]: I1209 09:07:57.126482 4786 generic.go:334] "Generic (PLEG): container finished" podID="832b0e68-172e-4166-9826-3d67c426c7e8" containerID="e8a204eacd1658e477dfa9def7e5665f3b6ad0e5ecbcb01f32eb9c06535ace3b" exitCode=143 Dec 09 09:07:57 crc kubenswrapper[4786]: I1209 09:07:57.126157 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"832b0e68-172e-4166-9826-3d67c426c7e8","Type":"ContainerDied","Data":"f54a6793eca712ab4491c8ee4273b9e5aa2794e40c7d7331fea7649ae0c85895"} Dec 09 09:07:57 crc kubenswrapper[4786]: I1209 09:07:57.126578 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"832b0e68-172e-4166-9826-3d67c426c7e8","Type":"ContainerDied","Data":"e8a204eacd1658e477dfa9def7e5665f3b6ad0e5ecbcb01f32eb9c06535ace3b"} Dec 09 09:07:57 crc kubenswrapper[4786]: I1209 09:07:57.133963 4786 generic.go:334] "Generic (PLEG): container finished" podID="3dbe789f-f7cf-43e1-8320-51a882827757" containerID="fb17e551d4d23179103e65d434bd31a7c132578786a563644dbc83347f13211a" exitCode=0 Dec 09 09:07:57 crc kubenswrapper[4786]: I1209 09:07:57.134004 4786 generic.go:334] "Generic (PLEG): container finished" podID="3dbe789f-f7cf-43e1-8320-51a882827757" containerID="afed836e9d25156d79b3c732d16912dd92339208b7cbcda98e3fb81ad374887e" exitCode=143 Dec 09 09:07:57 crc kubenswrapper[4786]: I1209 09:07:57.134028 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3dbe789f-f7cf-43e1-8320-51a882827757","Type":"ContainerDied","Data":"fb17e551d4d23179103e65d434bd31a7c132578786a563644dbc83347f13211a"} Dec 09 09:07:57 crc kubenswrapper[4786]: I1209 09:07:57.134087 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"3dbe789f-f7cf-43e1-8320-51a882827757","Type":"ContainerDied","Data":"afed836e9d25156d79b3c732d16912dd92339208b7cbcda98e3fb81ad374887e"} Dec 09 09:07:58 crc kubenswrapper[4786]: I1209 09:07:58.161903 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5cb696f46f-55kzl" event={"ID":"d41935c7-99f8-4d52-b0f4-691563bea9ee","Type":"ContainerStarted","Data":"6196dfb3a4eda49eb3eb6e77bfcdd4fc9b30dfd1e7e6378ff56a910f48b6917a"} Dec 09 09:07:58 crc kubenswrapper[4786]: I1209 09:07:58.162902 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5cb696f46f-55kzl" Dec 09 09:07:58 crc kubenswrapper[4786]: I1209 09:07:58.305885 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5cb696f46f-55kzl" podStartSLOduration=5.305854048 podStartE2EDuration="5.305854048s" podCreationTimestamp="2025-12-09 09:07:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:07:58.182056602 +0000 UTC m=+1444.065677828" watchObservedRunningTime="2025-12-09 09:07:58.305854048 +0000 UTC m=+1444.189475284" Dec 09 09:07:58 crc kubenswrapper[4786]: I1209 09:07:58.403251 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5f47965bdf-2nd22" Dec 09 09:07:58 crc kubenswrapper[4786]: I1209 09:07:58.585000 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf5d89b49-4txgz"] Dec 09 09:07:58 crc kubenswrapper[4786]: I1209 09:07:58.585727 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bf5d89b49-4txgz" podUID="4dead0f7-489a-4677-afed-bcb93a525277" containerName="dnsmasq-dns" containerID="cri-o://dd289809c662109ed9bfb3ab9eb04fadc3f98668cc99d8a8a4ece7482318a892" gracePeriod=10 Dec 09 09:07:58 crc kubenswrapper[4786]: I1209 09:07:58.655476 4786 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 09 09:07:58 crc kubenswrapper[4786]: I1209 09:07:58.696380 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 09 09:07:58 crc kubenswrapper[4786]: I1209 09:07:58.704084 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 09 09:07:58 crc kubenswrapper[4786]: I1209 09:07:58.780296 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/832b0e68-172e-4166-9826-3d67c426c7e8-httpd-run\") pod \"832b0e68-172e-4166-9826-3d67c426c7e8\" (UID: \"832b0e68-172e-4166-9826-3d67c426c7e8\") " Dec 09 09:07:58 crc kubenswrapper[4786]: I1209 09:07:58.780380 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9mth\" (UniqueName: \"kubernetes.io/projected/832b0e68-172e-4166-9826-3d67c426c7e8-kube-api-access-h9mth\") pod \"832b0e68-172e-4166-9826-3d67c426c7e8\" (UID: \"832b0e68-172e-4166-9826-3d67c426c7e8\") " Dec 09 09:07:58 crc kubenswrapper[4786]: I1209 09:07:58.780439 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/832b0e68-172e-4166-9826-3d67c426c7e8-scripts\") pod \"832b0e68-172e-4166-9826-3d67c426c7e8\" (UID: \"832b0e68-172e-4166-9826-3d67c426c7e8\") " Dec 09 09:07:58 crc kubenswrapper[4786]: I1209 09:07:58.780559 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"3dbe789f-f7cf-43e1-8320-51a882827757\" (UID: \"3dbe789f-f7cf-43e1-8320-51a882827757\") " Dec 09 09:07:58 crc kubenswrapper[4786]: I1209 09:07:58.780595 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/3dbe789f-f7cf-43e1-8320-51a882827757-config-data\") pod \"3dbe789f-f7cf-43e1-8320-51a882827757\" (UID: \"3dbe789f-f7cf-43e1-8320-51a882827757\") " Dec 09 09:07:58 crc kubenswrapper[4786]: I1209 09:07:58.780642 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"832b0e68-172e-4166-9826-3d67c426c7e8\" (UID: \"832b0e68-172e-4166-9826-3d67c426c7e8\") " Dec 09 09:07:58 crc kubenswrapper[4786]: I1209 09:07:58.780676 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3dbe789f-f7cf-43e1-8320-51a882827757-scripts\") pod \"3dbe789f-f7cf-43e1-8320-51a882827757\" (UID: \"3dbe789f-f7cf-43e1-8320-51a882827757\") " Dec 09 09:07:58 crc kubenswrapper[4786]: I1209 09:07:58.780769 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3dbe789f-f7cf-43e1-8320-51a882827757-logs\") pod \"3dbe789f-f7cf-43e1-8320-51a882827757\" (UID: \"3dbe789f-f7cf-43e1-8320-51a882827757\") " Dec 09 09:07:58 crc kubenswrapper[4786]: I1209 09:07:58.780800 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3dbe789f-f7cf-43e1-8320-51a882827757-httpd-run\") pod \"3dbe789f-f7cf-43e1-8320-51a882827757\" (UID: \"3dbe789f-f7cf-43e1-8320-51a882827757\") " Dec 09 09:07:58 crc kubenswrapper[4786]: I1209 09:07:58.780909 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/832b0e68-172e-4166-9826-3d67c426c7e8-combined-ca-bundle\") pod \"832b0e68-172e-4166-9826-3d67c426c7e8\" (UID: \"832b0e68-172e-4166-9826-3d67c426c7e8\") " Dec 09 09:07:58 crc kubenswrapper[4786]: I1209 09:07:58.780943 4786 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/832b0e68-172e-4166-9826-3d67c426c7e8-logs\") pod \"832b0e68-172e-4166-9826-3d67c426c7e8\" (UID: \"832b0e68-172e-4166-9826-3d67c426c7e8\") " Dec 09 09:07:58 crc kubenswrapper[4786]: I1209 09:07:58.780944 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/832b0e68-172e-4166-9826-3d67c426c7e8-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "832b0e68-172e-4166-9826-3d67c426c7e8" (UID: "832b0e68-172e-4166-9826-3d67c426c7e8"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:07:58 crc kubenswrapper[4786]: I1209 09:07:58.780983 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/832b0e68-172e-4166-9826-3d67c426c7e8-config-data\") pod \"832b0e68-172e-4166-9826-3d67c426c7e8\" (UID: \"832b0e68-172e-4166-9826-3d67c426c7e8\") " Dec 09 09:07:58 crc kubenswrapper[4786]: I1209 09:07:58.781014 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcrkt\" (UniqueName: \"kubernetes.io/projected/3dbe789f-f7cf-43e1-8320-51a882827757-kube-api-access-mcrkt\") pod \"3dbe789f-f7cf-43e1-8320-51a882827757\" (UID: \"3dbe789f-f7cf-43e1-8320-51a882827757\") " Dec 09 09:07:58 crc kubenswrapper[4786]: I1209 09:07:58.781076 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dbe789f-f7cf-43e1-8320-51a882827757-combined-ca-bundle\") pod \"3dbe789f-f7cf-43e1-8320-51a882827757\" (UID: \"3dbe789f-f7cf-43e1-8320-51a882827757\") " Dec 09 09:07:58 crc kubenswrapper[4786]: I1209 09:07:58.781777 4786 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/832b0e68-172e-4166-9826-3d67c426c7e8-httpd-run\") on node \"crc\" DevicePath \"\"" 
Dec 09 09:07:58 crc kubenswrapper[4786]: I1209 09:07:58.782336 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/832b0e68-172e-4166-9826-3d67c426c7e8-logs" (OuterVolumeSpecName: "logs") pod "832b0e68-172e-4166-9826-3d67c426c7e8" (UID: "832b0e68-172e-4166-9826-3d67c426c7e8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:07:58 crc kubenswrapper[4786]: I1209 09:07:58.782982 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3dbe789f-f7cf-43e1-8320-51a882827757-logs" (OuterVolumeSpecName: "logs") pod "3dbe789f-f7cf-43e1-8320-51a882827757" (UID: "3dbe789f-f7cf-43e1-8320-51a882827757"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:07:58 crc kubenswrapper[4786]: I1209 09:07:58.784457 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3dbe789f-f7cf-43e1-8320-51a882827757-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "3dbe789f-f7cf-43e1-8320-51a882827757" (UID: "3dbe789f-f7cf-43e1-8320-51a882827757"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:07:58 crc kubenswrapper[4786]: I1209 09:07:58.792267 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "3dbe789f-f7cf-43e1-8320-51a882827757" (UID: "3dbe789f-f7cf-43e1-8320-51a882827757"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 09 09:07:58 crc kubenswrapper[4786]: I1209 09:07:58.792610 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3dbe789f-f7cf-43e1-8320-51a882827757-scripts" (OuterVolumeSpecName: "scripts") pod "3dbe789f-f7cf-43e1-8320-51a882827757" (UID: "3dbe789f-f7cf-43e1-8320-51a882827757"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:07:58 crc kubenswrapper[4786]: I1209 09:07:58.795985 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/832b0e68-172e-4166-9826-3d67c426c7e8-kube-api-access-h9mth" (OuterVolumeSpecName: "kube-api-access-h9mth") pod "832b0e68-172e-4166-9826-3d67c426c7e8" (UID: "832b0e68-172e-4166-9826-3d67c426c7e8"). InnerVolumeSpecName "kube-api-access-h9mth". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:07:58 crc kubenswrapper[4786]: I1209 09:07:58.797561 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "832b0e68-172e-4166-9826-3d67c426c7e8" (UID: "832b0e68-172e-4166-9826-3d67c426c7e8"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 09 09:07:58 crc kubenswrapper[4786]: I1209 09:07:58.798643 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/832b0e68-172e-4166-9826-3d67c426c7e8-scripts" (OuterVolumeSpecName: "scripts") pod "832b0e68-172e-4166-9826-3d67c426c7e8" (UID: "832b0e68-172e-4166-9826-3d67c426c7e8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:07:58 crc kubenswrapper[4786]: I1209 09:07:58.813761 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3dbe789f-f7cf-43e1-8320-51a882827757-kube-api-access-mcrkt" (OuterVolumeSpecName: "kube-api-access-mcrkt") pod "3dbe789f-f7cf-43e1-8320-51a882827757" (UID: "3dbe789f-f7cf-43e1-8320-51a882827757"). InnerVolumeSpecName "kube-api-access-mcrkt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:07:58 crc kubenswrapper[4786]: I1209 09:07:58.823250 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3dbe789f-f7cf-43e1-8320-51a882827757-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3dbe789f-f7cf-43e1-8320-51a882827757" (UID: "3dbe789f-f7cf-43e1-8320-51a882827757"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:07:58 crc kubenswrapper[4786]: I1209 09:07:58.875488 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/832b0e68-172e-4166-9826-3d67c426c7e8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "832b0e68-172e-4166-9826-3d67c426c7e8" (UID: "832b0e68-172e-4166-9826-3d67c426c7e8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:07:58 crc kubenswrapper[4786]: I1209 09:07:58.889050 4786 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Dec 09 09:07:58 crc kubenswrapper[4786]: I1209 09:07:58.889094 4786 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Dec 09 09:07:58 crc kubenswrapper[4786]: I1209 09:07:58.889111 4786 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3dbe789f-f7cf-43e1-8320-51a882827757-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:58 crc kubenswrapper[4786]: I1209 09:07:58.889127 4786 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3dbe789f-f7cf-43e1-8320-51a882827757-logs\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:58 crc kubenswrapper[4786]: 
I1209 09:07:58.889138 4786 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/3dbe789f-f7cf-43e1-8320-51a882827757-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:58 crc kubenswrapper[4786]: I1209 09:07:58.889152 4786 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/832b0e68-172e-4166-9826-3d67c426c7e8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:58 crc kubenswrapper[4786]: I1209 09:07:58.889167 4786 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/832b0e68-172e-4166-9826-3d67c426c7e8-logs\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:58 crc kubenswrapper[4786]: I1209 09:07:58.889178 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcrkt\" (UniqueName: \"kubernetes.io/projected/3dbe789f-f7cf-43e1-8320-51a882827757-kube-api-access-mcrkt\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:58 crc kubenswrapper[4786]: I1209 09:07:58.889192 4786 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dbe789f-f7cf-43e1-8320-51a882827757-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:58 crc kubenswrapper[4786]: I1209 09:07:58.889206 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9mth\" (UniqueName: \"kubernetes.io/projected/832b0e68-172e-4166-9826-3d67c426c7e8-kube-api-access-h9mth\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:58 crc kubenswrapper[4786]: I1209 09:07:58.889217 4786 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/832b0e68-172e-4166-9826-3d67c426c7e8-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:58 crc kubenswrapper[4786]: I1209 09:07:58.912339 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/3dbe789f-f7cf-43e1-8320-51a882827757-config-data" (OuterVolumeSpecName: "config-data") pod "3dbe789f-f7cf-43e1-8320-51a882827757" (UID: "3dbe789f-f7cf-43e1-8320-51a882827757"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:07:58 crc kubenswrapper[4786]: I1209 09:07:58.944565 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/832b0e68-172e-4166-9826-3d67c426c7e8-config-data" (OuterVolumeSpecName: "config-data") pod "832b0e68-172e-4166-9826-3d67c426c7e8" (UID: "832b0e68-172e-4166-9826-3d67c426c7e8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:07:58 crc kubenswrapper[4786]: I1209 09:07:58.951771 4786 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Dec 09 09:07:58 crc kubenswrapper[4786]: I1209 09:07:58.991657 4786 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:58 crc kubenswrapper[4786]: I1209 09:07:58.991697 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3dbe789f-f7cf-43e1-8320-51a882827757-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:58 crc kubenswrapper[4786]: I1209 09:07:58.991708 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/832b0e68-172e-4166-9826-3d67c426c7e8-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:58 crc kubenswrapper[4786]: I1209 09:07:58.993405 4786 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.093847 
4786 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.117912 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.183808 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"832b0e68-172e-4166-9826-3d67c426c7e8","Type":"ContainerDied","Data":"5eefe8d116ff5efe35e8c489d1eae0d0b51741ab12add32daa040c3dbfb15fd8"} Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.183911 4786 scope.go:117] "RemoveContainer" containerID="f54a6793eca712ab4491c8ee4273b9e5aa2794e40c7d7331fea7649ae0c85895" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.184161 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.226272 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.232893 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"3dbe789f-f7cf-43e1-8320-51a882827757","Type":"ContainerDied","Data":"04bca1ea63b45d8507a092078a9fb150f841c59fa327575b5e39e916c670d8de"} Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.239657 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"2423b332-8b9b-4a26-996b-582194eca3b7","Type":"ContainerStarted","Data":"8f8e08f38771332176e997eda01e676dc4d33960f54c453ad66da549d9f6fa02"} Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.288088 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3","Type":"ContainerStarted","Data":"87426314a8706119b3419ea6f32b45845b5e7b1034bad1b1d5cedce37a4ddfc8"} Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.288716 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3" containerName="ceilometer-central-agent" containerID="cri-o://2f570d5e84493a0401b0d7c3f3ce99a42a45a16fc366e187e7632a5018149cdc" gracePeriod=30 Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.288895 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.288958 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3" containerName="proxy-httpd" containerID="cri-o://87426314a8706119b3419ea6f32b45845b5e7b1034bad1b1d5cedce37a4ddfc8" gracePeriod=30 Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.289005 4786 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3" containerName="sg-core" containerID="cri-o://ef70436824bee911813610a6151b6dbcf0868630d687e5f993368f20b3a0f52e" gracePeriod=30 Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.289050 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3" containerName="ceilometer-notification-agent" containerID="cri-o://c4e3a148eafa8575594134c9c8dfb7978cb0409fa60a045da301bbe88bb9f7fe" gracePeriod=30 Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.312013 4786 scope.go:117] "RemoveContainer" containerID="e8a204eacd1658e477dfa9def7e5665f3b6ad0e5ecbcb01f32eb9c06535ace3b" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.321059 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.321688 4786 generic.go:334] "Generic (PLEG): container finished" podID="4dead0f7-489a-4677-afed-bcb93a525277" containerID="dd289809c662109ed9bfb3ab9eb04fadc3f98668cc99d8a8a4ece7482318a892" exitCode=0 Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.322756 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf5d89b49-4txgz" event={"ID":"4dead0f7-489a-4677-afed-bcb93a525277","Type":"ContainerDied","Data":"dd289809c662109ed9bfb3ab9eb04fadc3f98668cc99d8a8a4ece7482318a892"} Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.347893 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.349862 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.691355201 podStartE2EDuration="42.349834893s" podCreationTimestamp="2025-12-09 09:07:17 +0000 UTC" firstStartedPulling="2025-12-09 09:07:18.745678329 +0000 UTC 
m=+1404.629299555" lastFinishedPulling="2025-12-09 09:07:58.404158011 +0000 UTC m=+1444.287779247" observedRunningTime="2025-12-09 09:07:59.270527892 +0000 UTC m=+1445.154149118" watchObservedRunningTime="2025-12-09 09:07:59.349834893 +0000 UTC m=+1445.233456119" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.417997 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 09:07:59 crc kubenswrapper[4786]: E1209 09:07:59.418729 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dbe789f-f7cf-43e1-8320-51a882827757" containerName="glance-log" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.418753 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dbe789f-f7cf-43e1-8320-51a882827757" containerName="glance-log" Dec 09 09:07:59 crc kubenswrapper[4786]: E1209 09:07:59.418766 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="832b0e68-172e-4166-9826-3d67c426c7e8" containerName="glance-httpd" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.418774 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="832b0e68-172e-4166-9826-3d67c426c7e8" containerName="glance-httpd" Dec 09 09:07:59 crc kubenswrapper[4786]: E1209 09:07:59.418807 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="832b0e68-172e-4166-9826-3d67c426c7e8" containerName="glance-log" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.418813 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="832b0e68-172e-4166-9826-3d67c426c7e8" containerName="glance-log" Dec 09 09:07:59 crc kubenswrapper[4786]: E1209 09:07:59.418841 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dbe789f-f7cf-43e1-8320-51a882827757" containerName="glance-httpd" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.418847 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dbe789f-f7cf-43e1-8320-51a882827757" containerName="glance-httpd" Dec 09 09:07:59 
crc kubenswrapper[4786]: I1209 09:07:59.419106 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="832b0e68-172e-4166-9826-3d67c426c7e8" containerName="glance-httpd" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.419131 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="3dbe789f-f7cf-43e1-8320-51a882827757" containerName="glance-log" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.419145 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="832b0e68-172e-4166-9826-3d67c426c7e8" containerName="glance-log" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.419160 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="3dbe789f-f7cf-43e1-8320-51a882827757" containerName="glance-httpd" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.420979 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.424461 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.424717 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.425019 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-h4v94" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.425088 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.464234 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.476731 4786 scope.go:117] "RemoveContainer" containerID="fb17e551d4d23179103e65d434bd31a7c132578786a563644dbc83347f13211a" 
Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.485575 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.494167 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.508297 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"b9361250-e7b7-4b9e-aad9-1889992aca31\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.508376 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjcsh\" (UniqueName: \"kubernetes.io/projected/b9361250-e7b7-4b9e-aad9-1889992aca31-kube-api-access-mjcsh\") pod \"glance-default-internal-api-0\" (UID: \"b9361250-e7b7-4b9e-aad9-1889992aca31\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.508710 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9361250-e7b7-4b9e-aad9-1889992aca31-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b9361250-e7b7-4b9e-aad9-1889992aca31\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.508879 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9361250-e7b7-4b9e-aad9-1889992aca31-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b9361250-e7b7-4b9e-aad9-1889992aca31\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:07:59 crc kubenswrapper[4786]: 
I1209 09:07:59.509114 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9361250-e7b7-4b9e-aad9-1889992aca31-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b9361250-e7b7-4b9e-aad9-1889992aca31\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.509187 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9361250-e7b7-4b9e-aad9-1889992aca31-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b9361250-e7b7-4b9e-aad9-1889992aca31\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.509381 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9361250-e7b7-4b9e-aad9-1889992aca31-logs\") pod \"glance-default-internal-api-0\" (UID: \"b9361250-e7b7-4b9e-aad9-1889992aca31\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.509661 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b9361250-e7b7-4b9e-aad9-1889992aca31-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b9361250-e7b7-4b9e-aad9-1889992aca31\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.537974 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.541921 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.545173 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.545483 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.546938 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.260491471 podStartE2EDuration="11.54690974s" podCreationTimestamp="2025-12-09 09:07:48 +0000 UTC" firstStartedPulling="2025-12-09 09:07:50.467652176 +0000 UTC m=+1436.351273402" lastFinishedPulling="2025-12-09 09:07:57.754070445 +0000 UTC m=+1443.637691671" observedRunningTime="2025-12-09 09:07:59.353862842 +0000 UTC m=+1445.237484068" watchObservedRunningTime="2025-12-09 09:07:59.54690974 +0000 UTC m=+1445.430530966" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.574912 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.611901 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9361250-e7b7-4b9e-aad9-1889992aca31-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b9361250-e7b7-4b9e-aad9-1889992aca31\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.611964 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9361250-e7b7-4b9e-aad9-1889992aca31-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b9361250-e7b7-4b9e-aad9-1889992aca31\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:07:59 crc 
kubenswrapper[4786]: I1209 09:07:59.612022 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"eb32aa4a-8085-424e-be11-ed2ee3186cb6\") " pod="openstack/glance-default-external-api-0" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.612066 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9361250-e7b7-4b9e-aad9-1889992aca31-logs\") pod \"glance-default-internal-api-0\" (UID: \"b9361250-e7b7-4b9e-aad9-1889992aca31\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.612093 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b9361250-e7b7-4b9e-aad9-1889992aca31-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b9361250-e7b7-4b9e-aad9-1889992aca31\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.612128 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eb32aa4a-8085-424e-be11-ed2ee3186cb6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"eb32aa4a-8085-424e-be11-ed2ee3186cb6\") " pod="openstack/glance-default-external-api-0" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.612175 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfskf\" (UniqueName: \"kubernetes.io/projected/eb32aa4a-8085-424e-be11-ed2ee3186cb6-kube-api-access-dfskf\") pod \"glance-default-external-api-0\" (UID: \"eb32aa4a-8085-424e-be11-ed2ee3186cb6\") " pod="openstack/glance-default-external-api-0" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 
09:07:59.612225 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb32aa4a-8085-424e-be11-ed2ee3186cb6-logs\") pod \"glance-default-external-api-0\" (UID: \"eb32aa4a-8085-424e-be11-ed2ee3186cb6\") " pod="openstack/glance-default-external-api-0" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.612272 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"b9361250-e7b7-4b9e-aad9-1889992aca31\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.612296 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb32aa4a-8085-424e-be11-ed2ee3186cb6-config-data\") pod \"glance-default-external-api-0\" (UID: \"eb32aa4a-8085-424e-be11-ed2ee3186cb6\") " pod="openstack/glance-default-external-api-0" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.612316 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjcsh\" (UniqueName: \"kubernetes.io/projected/b9361250-e7b7-4b9e-aad9-1889992aca31-kube-api-access-mjcsh\") pod \"glance-default-internal-api-0\" (UID: \"b9361250-e7b7-4b9e-aad9-1889992aca31\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.612343 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9361250-e7b7-4b9e-aad9-1889992aca31-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b9361250-e7b7-4b9e-aad9-1889992aca31\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.612366 4786 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb32aa4a-8085-424e-be11-ed2ee3186cb6-scripts\") pod \"glance-default-external-api-0\" (UID: \"eb32aa4a-8085-424e-be11-ed2ee3186cb6\") " pod="openstack/glance-default-external-api-0" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.612391 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb32aa4a-8085-424e-be11-ed2ee3186cb6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"eb32aa4a-8085-424e-be11-ed2ee3186cb6\") " pod="openstack/glance-default-external-api-0" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.612409 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9361250-e7b7-4b9e-aad9-1889992aca31-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b9361250-e7b7-4b9e-aad9-1889992aca31\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.612445 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb32aa4a-8085-424e-be11-ed2ee3186cb6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"eb32aa4a-8085-424e-be11-ed2ee3186cb6\") " pod="openstack/glance-default-external-api-0" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.616614 4786 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"b9361250-e7b7-4b9e-aad9-1889992aca31\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-internal-api-0" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 
09:07:59.621447 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9361250-e7b7-4b9e-aad9-1889992aca31-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"b9361250-e7b7-4b9e-aad9-1889992aca31\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.621575 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9361250-e7b7-4b9e-aad9-1889992aca31-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"b9361250-e7b7-4b9e-aad9-1889992aca31\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.622181 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b9361250-e7b7-4b9e-aad9-1889992aca31-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"b9361250-e7b7-4b9e-aad9-1889992aca31\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.622460 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9361250-e7b7-4b9e-aad9-1889992aca31-logs\") pod \"glance-default-internal-api-0\" (UID: \"b9361250-e7b7-4b9e-aad9-1889992aca31\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.625982 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9361250-e7b7-4b9e-aad9-1889992aca31-config-data\") pod \"glance-default-internal-api-0\" (UID: \"b9361250-e7b7-4b9e-aad9-1889992aca31\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.635507 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/b9361250-e7b7-4b9e-aad9-1889992aca31-scripts\") pod \"glance-default-internal-api-0\" (UID: \"b9361250-e7b7-4b9e-aad9-1889992aca31\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:07:59 crc kubenswrapper[4786]: E1209 09:07:59.659407 4786 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod832b0e68_172e_4166_9826_3d67c426c7e8.slice/crio-5eefe8d116ff5efe35e8c489d1eae0d0b51741ab12add32daa040c3dbfb15fd8\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2873c41b_4ee4_4e9d_b2c1_fac1a9cddcf3.slice/crio-conmon-ef70436824bee911813610a6151b6dbcf0868630d687e5f993368f20b3a0f52e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3dbe789f_f7cf_43e1_8320_51a882827757.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod832b0e68_172e_4166_9826_3d67c426c7e8.slice\": RecentStats: unable to find data in memory cache]" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.675540 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjcsh\" (UniqueName: \"kubernetes.io/projected/b9361250-e7b7-4b9e-aad9-1889992aca31-kube-api-access-mjcsh\") pod \"glance-default-internal-api-0\" (UID: \"b9361250-e7b7-4b9e-aad9-1889992aca31\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.704675 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"b9361250-e7b7-4b9e-aad9-1889992aca31\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 
09:07:59.717618 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb32aa4a-8085-424e-be11-ed2ee3186cb6-scripts\") pod \"glance-default-external-api-0\" (UID: \"eb32aa4a-8085-424e-be11-ed2ee3186cb6\") " pod="openstack/glance-default-external-api-0" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.718009 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb32aa4a-8085-424e-be11-ed2ee3186cb6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"eb32aa4a-8085-424e-be11-ed2ee3186cb6\") " pod="openstack/glance-default-external-api-0" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.718046 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb32aa4a-8085-424e-be11-ed2ee3186cb6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"eb32aa4a-8085-424e-be11-ed2ee3186cb6\") " pod="openstack/glance-default-external-api-0" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.718113 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"eb32aa4a-8085-424e-be11-ed2ee3186cb6\") " pod="openstack/glance-default-external-api-0" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.718163 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eb32aa4a-8085-424e-be11-ed2ee3186cb6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"eb32aa4a-8085-424e-be11-ed2ee3186cb6\") " pod="openstack/glance-default-external-api-0" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.718208 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-dfskf\" (UniqueName: \"kubernetes.io/projected/eb32aa4a-8085-424e-be11-ed2ee3186cb6-kube-api-access-dfskf\") pod \"glance-default-external-api-0\" (UID: \"eb32aa4a-8085-424e-be11-ed2ee3186cb6\") " pod="openstack/glance-default-external-api-0" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.718262 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb32aa4a-8085-424e-be11-ed2ee3186cb6-logs\") pod \"glance-default-external-api-0\" (UID: \"eb32aa4a-8085-424e-be11-ed2ee3186cb6\") " pod="openstack/glance-default-external-api-0" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.718316 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb32aa4a-8085-424e-be11-ed2ee3186cb6-config-data\") pod \"glance-default-external-api-0\" (UID: \"eb32aa4a-8085-424e-be11-ed2ee3186cb6\") " pod="openstack/glance-default-external-api-0" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.719900 4786 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"eb32aa4a-8085-424e-be11-ed2ee3186cb6\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.727970 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-e5da-account-create-9krl4"] Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.731366 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-e5da-account-create-9krl4" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.736228 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb32aa4a-8085-424e-be11-ed2ee3186cb6-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"eb32aa4a-8085-424e-be11-ed2ee3186cb6\") " pod="openstack/glance-default-external-api-0" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.736388 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eb32aa4a-8085-424e-be11-ed2ee3186cb6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"eb32aa4a-8085-424e-be11-ed2ee3186cb6\") " pod="openstack/glance-default-external-api-0" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.736481 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb32aa4a-8085-424e-be11-ed2ee3186cb6-logs\") pod \"glance-default-external-api-0\" (UID: \"eb32aa4a-8085-424e-be11-ed2ee3186cb6\") " pod="openstack/glance-default-external-api-0" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.736878 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb32aa4a-8085-424e-be11-ed2ee3186cb6-config-data\") pod \"glance-default-external-api-0\" (UID: \"eb32aa4a-8085-424e-be11-ed2ee3186cb6\") " pod="openstack/glance-default-external-api-0" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.738174 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb32aa4a-8085-424e-be11-ed2ee3186cb6-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"eb32aa4a-8085-424e-be11-ed2ee3186cb6\") " pod="openstack/glance-default-external-api-0" Dec 09 09:07:59 crc kubenswrapper[4786]: 
I1209 09:07:59.740357 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.744048 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb32aa4a-8085-424e-be11-ed2ee3186cb6-scripts\") pod \"glance-default-external-api-0\" (UID: \"eb32aa4a-8085-424e-be11-ed2ee3186cb6\") " pod="openstack/glance-default-external-api-0" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.763359 4786 scope.go:117] "RemoveContainer" containerID="afed836e9d25156d79b3c732d16912dd92339208b7cbcda98e3fb81ad374887e" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.768289 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.770181 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfskf\" (UniqueName: \"kubernetes.io/projected/eb32aa4a-8085-424e-be11-ed2ee3186cb6-kube-api-access-dfskf\") pod \"glance-default-external-api-0\" (UID: \"eb32aa4a-8085-424e-be11-ed2ee3186cb6\") " pod="openstack/glance-default-external-api-0" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.793965 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"eb32aa4a-8085-424e-be11-ed2ee3186cb6\") " pod="openstack/glance-default-external-api-0" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.817673 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-e5da-account-create-9krl4"] Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.827674 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbxw4\" (UniqueName: 
\"kubernetes.io/projected/c9d4fc2b-79a8-426f-8540-f5187065225a-kube-api-access-qbxw4\") pod \"nova-api-e5da-account-create-9krl4\" (UID: \"c9d4fc2b-79a8-426f-8540-f5187065225a\") " pod="openstack/nova-api-e5da-account-create-9krl4" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.872502 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-be56-account-create-w8ntw"] Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.873964 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-be56-account-create-w8ntw" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.880017 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.880018 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-be56-account-create-w8ntw"] Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.931968 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkmwt\" (UniqueName: \"kubernetes.io/projected/7734cf63-1abe-49b9-a5f9-d67192a52bad-kube-api-access-xkmwt\") pod \"nova-cell0-be56-account-create-w8ntw\" (UID: \"7734cf63-1abe-49b9-a5f9-d67192a52bad\") " pod="openstack/nova-cell0-be56-account-create-w8ntw" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.932629 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbxw4\" (UniqueName: \"kubernetes.io/projected/c9d4fc2b-79a8-426f-8540-f5187065225a-kube-api-access-qbxw4\") pod \"nova-api-e5da-account-create-9krl4\" (UID: \"c9d4fc2b-79a8-426f-8540-f5187065225a\") " pod="openstack/nova-api-e5da-account-create-9krl4" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.949887 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 09 09:07:59 crc kubenswrapper[4786]: I1209 09:07:59.975455 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbxw4\" (UniqueName: \"kubernetes.io/projected/c9d4fc2b-79a8-426f-8540-f5187065225a-kube-api-access-qbxw4\") pod \"nova-api-e5da-account-create-9krl4\" (UID: \"c9d4fc2b-79a8-426f-8540-f5187065225a\") " pod="openstack/nova-api-e5da-account-create-9krl4" Dec 09 09:08:00 crc kubenswrapper[4786]: I1209 09:08:00.006271 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf5d89b49-4txgz" Dec 09 09:08:00 crc kubenswrapper[4786]: I1209 09:08:00.034457 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4dead0f7-489a-4677-afed-bcb93a525277-config\") pod \"4dead0f7-489a-4677-afed-bcb93a525277\" (UID: \"4dead0f7-489a-4677-afed-bcb93a525277\") " Dec 09 09:08:00 crc kubenswrapper[4786]: I1209 09:08:00.034579 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4dead0f7-489a-4677-afed-bcb93a525277-ovsdbserver-sb\") pod \"4dead0f7-489a-4677-afed-bcb93a525277\" (UID: \"4dead0f7-489a-4677-afed-bcb93a525277\") " Dec 09 09:08:00 crc kubenswrapper[4786]: I1209 09:08:00.034667 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4dead0f7-489a-4677-afed-bcb93a525277-dns-swift-storage-0\") pod \"4dead0f7-489a-4677-afed-bcb93a525277\" (UID: \"4dead0f7-489a-4677-afed-bcb93a525277\") " Dec 09 09:08:00 crc kubenswrapper[4786]: I1209 09:08:00.034724 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ms8c6\" (UniqueName: \"kubernetes.io/projected/4dead0f7-489a-4677-afed-bcb93a525277-kube-api-access-ms8c6\") 
pod \"4dead0f7-489a-4677-afed-bcb93a525277\" (UID: \"4dead0f7-489a-4677-afed-bcb93a525277\") " Dec 09 09:08:00 crc kubenswrapper[4786]: I1209 09:08:00.034763 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4dead0f7-489a-4677-afed-bcb93a525277-ovsdbserver-nb\") pod \"4dead0f7-489a-4677-afed-bcb93a525277\" (UID: \"4dead0f7-489a-4677-afed-bcb93a525277\") " Dec 09 09:08:00 crc kubenswrapper[4786]: I1209 09:08:00.034990 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4dead0f7-489a-4677-afed-bcb93a525277-dns-svc\") pod \"4dead0f7-489a-4677-afed-bcb93a525277\" (UID: \"4dead0f7-489a-4677-afed-bcb93a525277\") " Dec 09 09:08:00 crc kubenswrapper[4786]: I1209 09:08:00.035719 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkmwt\" (UniqueName: \"kubernetes.io/projected/7734cf63-1abe-49b9-a5f9-d67192a52bad-kube-api-access-xkmwt\") pod \"nova-cell0-be56-account-create-w8ntw\" (UID: \"7734cf63-1abe-49b9-a5f9-d67192a52bad\") " pod="openstack/nova-cell0-be56-account-create-w8ntw" Dec 09 09:08:00 crc kubenswrapper[4786]: I1209 09:08:00.146196 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkmwt\" (UniqueName: \"kubernetes.io/projected/7734cf63-1abe-49b9-a5f9-d67192a52bad-kube-api-access-xkmwt\") pod \"nova-cell0-be56-account-create-w8ntw\" (UID: \"7734cf63-1abe-49b9-a5f9-d67192a52bad\") " pod="openstack/nova-cell0-be56-account-create-w8ntw" Dec 09 09:08:00 crc kubenswrapper[4786]: I1209 09:08:00.152098 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-be56-account-create-w8ntw" Dec 09 09:08:00 crc kubenswrapper[4786]: I1209 09:08:00.171655 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dead0f7-489a-4677-afed-bcb93a525277-kube-api-access-ms8c6" (OuterVolumeSpecName: "kube-api-access-ms8c6") pod "4dead0f7-489a-4677-afed-bcb93a525277" (UID: "4dead0f7-489a-4677-afed-bcb93a525277"). InnerVolumeSpecName "kube-api-access-ms8c6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:08:00 crc kubenswrapper[4786]: I1209 09:08:00.224243 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cfbc-account-create-tmsln"] Dec 09 09:08:00 crc kubenswrapper[4786]: E1209 09:08:00.225380 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dead0f7-489a-4677-afed-bcb93a525277" containerName="dnsmasq-dns" Dec 09 09:08:00 crc kubenswrapper[4786]: I1209 09:08:00.225403 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dead0f7-489a-4677-afed-bcb93a525277" containerName="dnsmasq-dns" Dec 09 09:08:00 crc kubenswrapper[4786]: E1209 09:08:00.225448 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dead0f7-489a-4677-afed-bcb93a525277" containerName="init" Dec 09 09:08:00 crc kubenswrapper[4786]: I1209 09:08:00.225455 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dead0f7-489a-4677-afed-bcb93a525277" containerName="init" Dec 09 09:08:00 crc kubenswrapper[4786]: I1209 09:08:00.237042 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dead0f7-489a-4677-afed-bcb93a525277" containerName="dnsmasq-dns" Dec 09 09:08:00 crc kubenswrapper[4786]: I1209 09:08:00.237792 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dead0f7-489a-4677-afed-bcb93a525277-config" (OuterVolumeSpecName: "config") pod "4dead0f7-489a-4677-afed-bcb93a525277" (UID: 
"4dead0f7-489a-4677-afed-bcb93a525277"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:08:00 crc kubenswrapper[4786]: I1209 09:08:00.238646 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cfbc-account-create-tmsln" Dec 09 09:08:00 crc kubenswrapper[4786]: I1209 09:08:00.239328 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dead0f7-489a-4677-afed-bcb93a525277-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4dead0f7-489a-4677-afed-bcb93a525277" (UID: "4dead0f7-489a-4677-afed-bcb93a525277"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:08:00 crc kubenswrapper[4786]: I1209 09:08:00.254669 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Dec 09 09:08:00 crc kubenswrapper[4786]: I1209 09:08:00.256063 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-e5da-account-create-9krl4" Dec 09 09:08:00 crc kubenswrapper[4786]: I1209 09:08:00.256996 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ms8c6\" (UniqueName: \"kubernetes.io/projected/4dead0f7-489a-4677-afed-bcb93a525277-kube-api-access-ms8c6\") on node \"crc\" DevicePath \"\"" Dec 09 09:08:00 crc kubenswrapper[4786]: I1209 09:08:00.287726 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dead0f7-489a-4677-afed-bcb93a525277-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4dead0f7-489a-4677-afed-bcb93a525277" (UID: "4dead0f7-489a-4677-afed-bcb93a525277"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:08:00 crc kubenswrapper[4786]: I1209 09:08:00.298163 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dead0f7-489a-4677-afed-bcb93a525277-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4dead0f7-489a-4677-afed-bcb93a525277" (UID: "4dead0f7-489a-4677-afed-bcb93a525277"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:08:00 crc kubenswrapper[4786]: I1209 09:08:00.301272 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cfbc-account-create-tmsln"] Dec 09 09:08:00 crc kubenswrapper[4786]: I1209 09:08:00.305460 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4dead0f7-489a-4677-afed-bcb93a525277-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "4dead0f7-489a-4677-afed-bcb93a525277" (UID: "4dead0f7-489a-4677-afed-bcb93a525277"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:08:00 crc kubenswrapper[4786]: I1209 09:08:00.362571 4786 generic.go:334] "Generic (PLEG): container finished" podID="2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3" containerID="87426314a8706119b3419ea6f32b45845b5e7b1034bad1b1d5cedce37a4ddfc8" exitCode=0 Dec 09 09:08:00 crc kubenswrapper[4786]: I1209 09:08:00.362610 4786 generic.go:334] "Generic (PLEG): container finished" podID="2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3" containerID="ef70436824bee911813610a6151b6dbcf0868630d687e5f993368f20b3a0f52e" exitCode=2 Dec 09 09:08:00 crc kubenswrapper[4786]: I1209 09:08:00.362667 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3","Type":"ContainerDied","Data":"87426314a8706119b3419ea6f32b45845b5e7b1034bad1b1d5cedce37a4ddfc8"} Dec 09 09:08:00 crc kubenswrapper[4786]: I1209 09:08:00.362701 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3","Type":"ContainerDied","Data":"ef70436824bee911813610a6151b6dbcf0868630d687e5f993368f20b3a0f52e"} Dec 09 09:08:00 crc kubenswrapper[4786]: I1209 09:08:00.363647 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p765\" (UniqueName: \"kubernetes.io/projected/0461c86e-7dbb-490a-b72b-cd19765acb95-kube-api-access-8p765\") pod \"nova-cell1-cfbc-account-create-tmsln\" (UID: \"0461c86e-7dbb-490a-b72b-cd19765acb95\") " pod="openstack/nova-cell1-cfbc-account-create-tmsln" Dec 09 09:08:00 crc kubenswrapper[4786]: I1209 09:08:00.365350 4786 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4dead0f7-489a-4677-afed-bcb93a525277-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 09:08:00 crc kubenswrapper[4786]: I1209 09:08:00.366226 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/4dead0f7-489a-4677-afed-bcb93a525277-config\") on node \"crc\" DevicePath \"\"" Dec 09 09:08:00 crc kubenswrapper[4786]: I1209 09:08:00.366328 4786 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4dead0f7-489a-4677-afed-bcb93a525277-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 09:08:00 crc kubenswrapper[4786]: I1209 09:08:00.366415 4786 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/4dead0f7-489a-4677-afed-bcb93a525277-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 09 09:08:00 crc kubenswrapper[4786]: I1209 09:08:00.366574 4786 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4dead0f7-489a-4677-afed-bcb93a525277-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 09:08:00 crc kubenswrapper[4786]: I1209 09:08:00.368124 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf5d89b49-4txgz" event={"ID":"4dead0f7-489a-4677-afed-bcb93a525277","Type":"ContainerDied","Data":"2f3866d76857c12f46997f4f5f6f6ce111eac90cf494e2e8fb4181e6189f02a0"} Dec 09 09:08:00 crc kubenswrapper[4786]: I1209 09:08:00.368468 4786 scope.go:117] "RemoveContainer" containerID="dd289809c662109ed9bfb3ab9eb04fadc3f98668cc99d8a8a4ece7482318a892" Dec 09 09:08:00 crc kubenswrapper[4786]: I1209 09:08:00.369202 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf5d89b49-4txgz" Dec 09 09:08:00 crc kubenswrapper[4786]: I1209 09:08:00.439346 4786 scope.go:117] "RemoveContainer" containerID="4bf3094ca8d377846c752bc687421ae11e58dd722d8518f1f53cd0e922efdcc1" Dec 09 09:08:00 crc kubenswrapper[4786]: I1209 09:08:00.446473 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf5d89b49-4txgz"] Dec 09 09:08:00 crc kubenswrapper[4786]: I1209 09:08:00.460045 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf5d89b49-4txgz"] Dec 09 09:08:00 crc kubenswrapper[4786]: I1209 09:08:00.470091 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p765\" (UniqueName: \"kubernetes.io/projected/0461c86e-7dbb-490a-b72b-cd19765acb95-kube-api-access-8p765\") pod \"nova-cell1-cfbc-account-create-tmsln\" (UID: \"0461c86e-7dbb-490a-b72b-cd19765acb95\") " pod="openstack/nova-cell1-cfbc-account-create-tmsln" Dec 09 09:08:00 crc kubenswrapper[4786]: I1209 09:08:00.493837 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p765\" (UniqueName: \"kubernetes.io/projected/0461c86e-7dbb-490a-b72b-cd19765acb95-kube-api-access-8p765\") pod \"nova-cell1-cfbc-account-create-tmsln\" (UID: \"0461c86e-7dbb-490a-b72b-cd19765acb95\") " pod="openstack/nova-cell1-cfbc-account-create-tmsln" Dec 09 09:08:00 crc kubenswrapper[4786]: I1209 09:08:00.599536 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cfbc-account-create-tmsln" Dec 09 09:08:00 crc kubenswrapper[4786]: I1209 09:08:00.789385 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 09:08:00 crc kubenswrapper[4786]: I1209 09:08:00.930759 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-be56-account-create-w8ntw"] Dec 09 09:08:01 crc kubenswrapper[4786]: I1209 09:08:01.129938 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-e5da-account-create-9krl4"] Dec 09 09:08:01 crc kubenswrapper[4786]: I1209 09:08:01.173639 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 09:08:01 crc kubenswrapper[4786]: I1209 09:08:01.214351 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3dbe789f-f7cf-43e1-8320-51a882827757" path="/var/lib/kubelet/pods/3dbe789f-f7cf-43e1-8320-51a882827757/volumes" Dec 09 09:08:01 crc kubenswrapper[4786]: I1209 09:08:01.215250 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4dead0f7-489a-4677-afed-bcb93a525277" path="/var/lib/kubelet/pods/4dead0f7-489a-4677-afed-bcb93a525277/volumes" Dec 09 09:08:01 crc kubenswrapper[4786]: I1209 09:08:01.215965 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="832b0e68-172e-4166-9826-3d67c426c7e8" path="/var/lib/kubelet/pods/832b0e68-172e-4166-9826-3d67c426c7e8/volumes" Dec 09 09:08:01 crc kubenswrapper[4786]: I1209 09:08:01.475102 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b9361250-e7b7-4b9e-aad9-1889992aca31","Type":"ContainerStarted","Data":"41890a48e404839c75bf0e1b3f8d8f0b1b203f4ee12a8ff374477748bc573f7b"} Dec 09 09:08:01 crc kubenswrapper[4786]: I1209 09:08:01.486642 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e5da-account-create-9krl4" 
event={"ID":"c9d4fc2b-79a8-426f-8540-f5187065225a","Type":"ContainerStarted","Data":"f9cbc8e8b73b204d3bc24b384d3d34d439d1f05d8377d49261b72c3fd93d26f3"} Dec 09 09:08:01 crc kubenswrapper[4786]: I1209 09:08:01.522257 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"eb32aa4a-8085-424e-be11-ed2ee3186cb6","Type":"ContainerStarted","Data":"17421eaf0abc47aed07f0d15ebcf95151af41a40b5c122c9266507dbf5e04e42"} Dec 09 09:08:01 crc kubenswrapper[4786]: I1209 09:08:01.552005 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cfbc-account-create-tmsln"] Dec 09 09:08:01 crc kubenswrapper[4786]: I1209 09:08:01.653615 4786 generic.go:334] "Generic (PLEG): container finished" podID="2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3" containerID="c4e3a148eafa8575594134c9c8dfb7978cb0409fa60a045da301bbe88bb9f7fe" exitCode=0 Dec 09 09:08:01 crc kubenswrapper[4786]: I1209 09:08:01.653736 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3","Type":"ContainerDied","Data":"c4e3a148eafa8575594134c9c8dfb7978cb0409fa60a045da301bbe88bb9f7fe"} Dec 09 09:08:01 crc kubenswrapper[4786]: I1209 09:08:01.701515 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-be56-account-create-w8ntw" event={"ID":"7734cf63-1abe-49b9-a5f9-d67192a52bad","Type":"ContainerStarted","Data":"2cbcbc3c0e56ad5b92b9adc687106bb5cdac0506bf974cdcfcf972050752ac1b"} Dec 09 09:08:02 crc kubenswrapper[4786]: I1209 09:08:02.531018 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 09:08:02 crc kubenswrapper[4786]: I1209 09:08:02.565032 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-be56-account-create-w8ntw" podStartSLOduration=3.565006495 podStartE2EDuration="3.565006495s" podCreationTimestamp="2025-12-09 09:07:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:08:01.761781823 +0000 UTC m=+1447.645403059" watchObservedRunningTime="2025-12-09 09:08:02.565006495 +0000 UTC m=+1448.448627721" Dec 09 09:08:02 crc kubenswrapper[4786]: I1209 09:08:02.625136 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3-sg-core-conf-yaml\") pod \"2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3\" (UID: \"2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3\") " Dec 09 09:08:02 crc kubenswrapper[4786]: I1209 09:08:02.625302 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3-config-data\") pod \"2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3\" (UID: \"2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3\") " Dec 09 09:08:02 crc kubenswrapper[4786]: I1209 09:08:02.628038 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3-run-httpd\") pod \"2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3\" (UID: \"2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3\") " Dec 09 09:08:02 crc kubenswrapper[4786]: I1209 09:08:02.628115 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xk4wc\" (UniqueName: \"kubernetes.io/projected/2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3-kube-api-access-xk4wc\") pod \"2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3\" 
(UID: \"2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3\") " Dec 09 09:08:02 crc kubenswrapper[4786]: I1209 09:08:02.628143 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3-combined-ca-bundle\") pod \"2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3\" (UID: \"2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3\") " Dec 09 09:08:02 crc kubenswrapper[4786]: I1209 09:08:02.628201 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3-scripts\") pod \"2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3\" (UID: \"2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3\") " Dec 09 09:08:02 crc kubenswrapper[4786]: I1209 09:08:02.628310 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3-log-httpd\") pod \"2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3\" (UID: \"2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3\") " Dec 09 09:08:02 crc kubenswrapper[4786]: I1209 09:08:02.629564 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3" (UID: "2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:08:02 crc kubenswrapper[4786]: I1209 09:08:02.644849 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3" (UID: "2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:08:02 crc kubenswrapper[4786]: I1209 09:08:02.646569 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3-kube-api-access-xk4wc" (OuterVolumeSpecName: "kube-api-access-xk4wc") pod "2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3" (UID: "2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3"). InnerVolumeSpecName "kube-api-access-xk4wc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:08:02 crc kubenswrapper[4786]: I1209 09:08:02.656905 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3-scripts" (OuterVolumeSpecName: "scripts") pod "2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3" (UID: "2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:08:02 crc kubenswrapper[4786]: I1209 09:08:02.732268 4786 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 09:08:02 crc kubenswrapper[4786]: I1209 09:08:02.732689 4786 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 09:08:02 crc kubenswrapper[4786]: I1209 09:08:02.732767 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xk4wc\" (UniqueName: \"kubernetes.io/projected/2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3-kube-api-access-xk4wc\") on node \"crc\" DevicePath \"\"" Dec 09 09:08:02 crc kubenswrapper[4786]: I1209 09:08:02.732828 4786 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 09:08:02 
crc kubenswrapper[4786]: I1209 09:08:02.749762 4786 generic.go:334] "Generic (PLEG): container finished" podID="c9d4fc2b-79a8-426f-8540-f5187065225a" containerID="2443ef687656a794f9c1575acbd80dea1ee787245cffce4652c8184096da4c87" exitCode=0 Dec 09 09:08:02 crc kubenswrapper[4786]: I1209 09:08:02.749957 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e5da-account-create-9krl4" event={"ID":"c9d4fc2b-79a8-426f-8540-f5187065225a","Type":"ContainerDied","Data":"2443ef687656a794f9c1575acbd80dea1ee787245cffce4652c8184096da4c87"} Dec 09 09:08:02 crc kubenswrapper[4786]: I1209 09:08:02.757131 4786 generic.go:334] "Generic (PLEG): container finished" podID="2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3" containerID="2f570d5e84493a0401b0d7c3f3ce99a42a45a16fc366e187e7632a5018149cdc" exitCode=0 Dec 09 09:08:02 crc kubenswrapper[4786]: I1209 09:08:02.757531 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3","Type":"ContainerDied","Data":"2f570d5e84493a0401b0d7c3f3ce99a42a45a16fc366e187e7632a5018149cdc"} Dec 09 09:08:02 crc kubenswrapper[4786]: I1209 09:08:02.758023 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3","Type":"ContainerDied","Data":"a7c45656cb1234dd14f2e3bf31cb8dce7f4e11ff7292be081489fd34d642eb33"} Dec 09 09:08:02 crc kubenswrapper[4786]: I1209 09:08:02.757702 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 09:08:02 crc kubenswrapper[4786]: I1209 09:08:02.758114 4786 scope.go:117] "RemoveContainer" containerID="87426314a8706119b3419ea6f32b45845b5e7b1034bad1b1d5cedce37a4ddfc8" Dec 09 09:08:02 crc kubenswrapper[4786]: I1209 09:08:02.764359 4786 generic.go:334] "Generic (PLEG): container finished" podID="7734cf63-1abe-49b9-a5f9-d67192a52bad" containerID="77f883f7ee2b8202a0f5cb4617338a919461146cafed5b328914116baf68b604" exitCode=0 Dec 09 09:08:02 crc kubenswrapper[4786]: I1209 09:08:02.764772 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-be56-account-create-w8ntw" event={"ID":"7734cf63-1abe-49b9-a5f9-d67192a52bad","Type":"ContainerDied","Data":"77f883f7ee2b8202a0f5cb4617338a919461146cafed5b328914116baf68b604"} Dec 09 09:08:02 crc kubenswrapper[4786]: I1209 09:08:02.774972 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cfbc-account-create-tmsln" event={"ID":"0461c86e-7dbb-490a-b72b-cd19765acb95","Type":"ContainerStarted","Data":"af9f0903c3b496ebeae722c9cdd02088c81456be610a0156dec9a49210679cbe"} Dec 09 09:08:02 crc kubenswrapper[4786]: I1209 09:08:02.775132 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cfbc-account-create-tmsln" event={"ID":"0461c86e-7dbb-490a-b72b-cd19765acb95","Type":"ContainerStarted","Data":"cd708b0afd8130a70b569a55d6fa2e4ca7f5a146296c2d09afed041fbfee3675"} Dec 09 09:08:02 crc kubenswrapper[4786]: I1209 09:08:02.842004 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3" (UID: "2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:08:02 crc kubenswrapper[4786]: I1209 09:08:02.862217 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cfbc-account-create-tmsln" podStartSLOduration=2.86218927 podStartE2EDuration="2.86218927s" podCreationTimestamp="2025-12-09 09:08:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:08:02.859976695 +0000 UTC m=+1448.743597921" watchObservedRunningTime="2025-12-09 09:08:02.86218927 +0000 UTC m=+1448.745810486" Dec 09 09:08:02 crc kubenswrapper[4786]: I1209 09:08:02.942017 4786 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 09 09:08:02 crc kubenswrapper[4786]: I1209 09:08:02.978162 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3" (UID: "2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:08:03 crc kubenswrapper[4786]: I1209 09:08:03.031958 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3-config-data" (OuterVolumeSpecName: "config-data") pod "2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3" (UID: "2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:08:03 crc kubenswrapper[4786]: I1209 09:08:03.046785 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 09:08:03 crc kubenswrapper[4786]: I1209 09:08:03.046834 4786 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 09:08:03 crc kubenswrapper[4786]: I1209 09:08:03.117544 4786 scope.go:117] "RemoveContainer" containerID="ef70436824bee911813610a6151b6dbcf0868630d687e5f993368f20b3a0f52e" Dec 09 09:08:03 crc kubenswrapper[4786]: I1209 09:08:03.126232 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 09:08:03 crc kubenswrapper[4786]: I1209 09:08:03.135623 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 09 09:08:03 crc kubenswrapper[4786]: I1209 09:08:03.143697 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 09 09:08:03 crc kubenswrapper[4786]: E1209 09:08:03.144143 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3" containerName="ceilometer-notification-agent" Dec 09 09:08:03 crc kubenswrapper[4786]: I1209 09:08:03.144167 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3" containerName="ceilometer-notification-agent" Dec 09 09:08:03 crc kubenswrapper[4786]: E1209 09:08:03.144189 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3" containerName="sg-core" Dec 09 09:08:03 crc kubenswrapper[4786]: I1209 09:08:03.144198 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3" 
containerName="sg-core" Dec 09 09:08:03 crc kubenswrapper[4786]: E1209 09:08:03.144217 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3" containerName="ceilometer-central-agent" Dec 09 09:08:03 crc kubenswrapper[4786]: I1209 09:08:03.144227 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3" containerName="ceilometer-central-agent" Dec 09 09:08:03 crc kubenswrapper[4786]: E1209 09:08:03.144248 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3" containerName="proxy-httpd" Dec 09 09:08:03 crc kubenswrapper[4786]: I1209 09:08:03.144254 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3" containerName="proxy-httpd" Dec 09 09:08:03 crc kubenswrapper[4786]: I1209 09:08:03.144465 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3" containerName="sg-core" Dec 09 09:08:03 crc kubenswrapper[4786]: I1209 09:08:03.144840 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3" containerName="ceilometer-notification-agent" Dec 09 09:08:03 crc kubenswrapper[4786]: I1209 09:08:03.144854 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3" containerName="ceilometer-central-agent" Dec 09 09:08:03 crc kubenswrapper[4786]: I1209 09:08:03.144871 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3" containerName="proxy-httpd" Dec 09 09:08:03 crc kubenswrapper[4786]: I1209 09:08:03.146794 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 09:08:03 crc kubenswrapper[4786]: I1209 09:08:03.156487 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 09 09:08:03 crc kubenswrapper[4786]: I1209 09:08:03.156846 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 09 09:08:03 crc kubenswrapper[4786]: I1209 09:08:03.189347 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 09:08:03 crc kubenswrapper[4786]: I1209 09:08:03.219339 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3" path="/var/lib/kubelet/pods/2873c41b-4ee4-4e9d-b2c1-fac1a9cddcf3/volumes" Dec 09 09:08:03 crc kubenswrapper[4786]: I1209 09:08:03.221695 4786 scope.go:117] "RemoveContainer" containerID="c4e3a148eafa8575594134c9c8dfb7978cb0409fa60a045da301bbe88bb9f7fe" Dec 09 09:08:03 crc kubenswrapper[4786]: I1209 09:08:03.251114 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/410d5f49-52f4-46bb-bb02-6bc31937cec1-scripts\") pod \"ceilometer-0\" (UID: \"410d5f49-52f4-46bb-bb02-6bc31937cec1\") " pod="openstack/ceilometer-0" Dec 09 09:08:03 crc kubenswrapper[4786]: I1209 09:08:03.251186 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/410d5f49-52f4-46bb-bb02-6bc31937cec1-run-httpd\") pod \"ceilometer-0\" (UID: \"410d5f49-52f4-46bb-bb02-6bc31937cec1\") " pod="openstack/ceilometer-0" Dec 09 09:08:03 crc kubenswrapper[4786]: I1209 09:08:03.251227 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/410d5f49-52f4-46bb-bb02-6bc31937cec1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"410d5f49-52f4-46bb-bb02-6bc31937cec1\") " pod="openstack/ceilometer-0" Dec 09 09:08:03 crc kubenswrapper[4786]: I1209 09:08:03.251313 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp8nd\" (UniqueName: \"kubernetes.io/projected/410d5f49-52f4-46bb-bb02-6bc31937cec1-kube-api-access-dp8nd\") pod \"ceilometer-0\" (UID: \"410d5f49-52f4-46bb-bb02-6bc31937cec1\") " pod="openstack/ceilometer-0" Dec 09 09:08:03 crc kubenswrapper[4786]: I1209 09:08:03.251555 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/410d5f49-52f4-46bb-bb02-6bc31937cec1-log-httpd\") pod \"ceilometer-0\" (UID: \"410d5f49-52f4-46bb-bb02-6bc31937cec1\") " pod="openstack/ceilometer-0" Dec 09 09:08:03 crc kubenswrapper[4786]: I1209 09:08:03.251674 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/410d5f49-52f4-46bb-bb02-6bc31937cec1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"410d5f49-52f4-46bb-bb02-6bc31937cec1\") " pod="openstack/ceilometer-0" Dec 09 09:08:03 crc kubenswrapper[4786]: I1209 09:08:03.251729 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/410d5f49-52f4-46bb-bb02-6bc31937cec1-config-data\") pod \"ceilometer-0\" (UID: \"410d5f49-52f4-46bb-bb02-6bc31937cec1\") " pod="openstack/ceilometer-0" Dec 09 09:08:03 crc kubenswrapper[4786]: I1209 09:08:03.322401 4786 scope.go:117] "RemoveContainer" containerID="2f570d5e84493a0401b0d7c3f3ce99a42a45a16fc366e187e7632a5018149cdc" Dec 09 09:08:03 crc kubenswrapper[4786]: I1209 09:08:03.353949 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/410d5f49-52f4-46bb-bb02-6bc31937cec1-log-httpd\") pod \"ceilometer-0\" (UID: \"410d5f49-52f4-46bb-bb02-6bc31937cec1\") " pod="openstack/ceilometer-0" Dec 09 09:08:03 crc kubenswrapper[4786]: I1209 09:08:03.354000 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/410d5f49-52f4-46bb-bb02-6bc31937cec1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"410d5f49-52f4-46bb-bb02-6bc31937cec1\") " pod="openstack/ceilometer-0" Dec 09 09:08:03 crc kubenswrapper[4786]: I1209 09:08:03.354026 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/410d5f49-52f4-46bb-bb02-6bc31937cec1-config-data\") pod \"ceilometer-0\" (UID: \"410d5f49-52f4-46bb-bb02-6bc31937cec1\") " pod="openstack/ceilometer-0" Dec 09 09:08:03 crc kubenswrapper[4786]: I1209 09:08:03.354099 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/410d5f49-52f4-46bb-bb02-6bc31937cec1-scripts\") pod \"ceilometer-0\" (UID: \"410d5f49-52f4-46bb-bb02-6bc31937cec1\") " pod="openstack/ceilometer-0" Dec 09 09:08:03 crc kubenswrapper[4786]: I1209 09:08:03.354116 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/410d5f49-52f4-46bb-bb02-6bc31937cec1-run-httpd\") pod \"ceilometer-0\" (UID: \"410d5f49-52f4-46bb-bb02-6bc31937cec1\") " pod="openstack/ceilometer-0" Dec 09 09:08:03 crc kubenswrapper[4786]: I1209 09:08:03.354138 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/410d5f49-52f4-46bb-bb02-6bc31937cec1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"410d5f49-52f4-46bb-bb02-6bc31937cec1\") " pod="openstack/ceilometer-0" Dec 09 09:08:03 crc kubenswrapper[4786]: I1209 09:08:03.354195 
4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dp8nd\" (UniqueName: \"kubernetes.io/projected/410d5f49-52f4-46bb-bb02-6bc31937cec1-kube-api-access-dp8nd\") pod \"ceilometer-0\" (UID: \"410d5f49-52f4-46bb-bb02-6bc31937cec1\") " pod="openstack/ceilometer-0" Dec 09 09:08:03 crc kubenswrapper[4786]: I1209 09:08:03.356466 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/410d5f49-52f4-46bb-bb02-6bc31937cec1-log-httpd\") pod \"ceilometer-0\" (UID: \"410d5f49-52f4-46bb-bb02-6bc31937cec1\") " pod="openstack/ceilometer-0" Dec 09 09:08:03 crc kubenswrapper[4786]: I1209 09:08:03.358190 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/410d5f49-52f4-46bb-bb02-6bc31937cec1-run-httpd\") pod \"ceilometer-0\" (UID: \"410d5f49-52f4-46bb-bb02-6bc31937cec1\") " pod="openstack/ceilometer-0" Dec 09 09:08:03 crc kubenswrapper[4786]: I1209 09:08:03.361593 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/410d5f49-52f4-46bb-bb02-6bc31937cec1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"410d5f49-52f4-46bb-bb02-6bc31937cec1\") " pod="openstack/ceilometer-0" Dec 09 09:08:03 crc kubenswrapper[4786]: I1209 09:08:03.362984 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/410d5f49-52f4-46bb-bb02-6bc31937cec1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"410d5f49-52f4-46bb-bb02-6bc31937cec1\") " pod="openstack/ceilometer-0" Dec 09 09:08:03 crc kubenswrapper[4786]: I1209 09:08:03.364093 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/410d5f49-52f4-46bb-bb02-6bc31937cec1-scripts\") pod \"ceilometer-0\" (UID: \"410d5f49-52f4-46bb-bb02-6bc31937cec1\") " 
pod="openstack/ceilometer-0" Dec 09 09:08:03 crc kubenswrapper[4786]: I1209 09:08:03.364534 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/410d5f49-52f4-46bb-bb02-6bc31937cec1-config-data\") pod \"ceilometer-0\" (UID: \"410d5f49-52f4-46bb-bb02-6bc31937cec1\") " pod="openstack/ceilometer-0" Dec 09 09:08:03 crc kubenswrapper[4786]: I1209 09:08:03.374920 4786 scope.go:117] "RemoveContainer" containerID="87426314a8706119b3419ea6f32b45845b5e7b1034bad1b1d5cedce37a4ddfc8" Dec 09 09:08:03 crc kubenswrapper[4786]: E1209 09:08:03.375734 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87426314a8706119b3419ea6f32b45845b5e7b1034bad1b1d5cedce37a4ddfc8\": container with ID starting with 87426314a8706119b3419ea6f32b45845b5e7b1034bad1b1d5cedce37a4ddfc8 not found: ID does not exist" containerID="87426314a8706119b3419ea6f32b45845b5e7b1034bad1b1d5cedce37a4ddfc8" Dec 09 09:08:03 crc kubenswrapper[4786]: I1209 09:08:03.375783 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87426314a8706119b3419ea6f32b45845b5e7b1034bad1b1d5cedce37a4ddfc8"} err="failed to get container status \"87426314a8706119b3419ea6f32b45845b5e7b1034bad1b1d5cedce37a4ddfc8\": rpc error: code = NotFound desc = could not find container \"87426314a8706119b3419ea6f32b45845b5e7b1034bad1b1d5cedce37a4ddfc8\": container with ID starting with 87426314a8706119b3419ea6f32b45845b5e7b1034bad1b1d5cedce37a4ddfc8 not found: ID does not exist" Dec 09 09:08:03 crc kubenswrapper[4786]: I1209 09:08:03.375815 4786 scope.go:117] "RemoveContainer" containerID="ef70436824bee911813610a6151b6dbcf0868630d687e5f993368f20b3a0f52e" Dec 09 09:08:03 crc kubenswrapper[4786]: E1209 09:08:03.376075 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ef70436824bee911813610a6151b6dbcf0868630d687e5f993368f20b3a0f52e\": container with ID starting with ef70436824bee911813610a6151b6dbcf0868630d687e5f993368f20b3a0f52e not found: ID does not exist" containerID="ef70436824bee911813610a6151b6dbcf0868630d687e5f993368f20b3a0f52e" Dec 09 09:08:03 crc kubenswrapper[4786]: I1209 09:08:03.376101 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef70436824bee911813610a6151b6dbcf0868630d687e5f993368f20b3a0f52e"} err="failed to get container status \"ef70436824bee911813610a6151b6dbcf0868630d687e5f993368f20b3a0f52e\": rpc error: code = NotFound desc = could not find container \"ef70436824bee911813610a6151b6dbcf0868630d687e5f993368f20b3a0f52e\": container with ID starting with ef70436824bee911813610a6151b6dbcf0868630d687e5f993368f20b3a0f52e not found: ID does not exist" Dec 09 09:08:03 crc kubenswrapper[4786]: I1209 09:08:03.376115 4786 scope.go:117] "RemoveContainer" containerID="c4e3a148eafa8575594134c9c8dfb7978cb0409fa60a045da301bbe88bb9f7fe" Dec 09 09:08:03 crc kubenswrapper[4786]: E1209 09:08:03.377306 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4e3a148eafa8575594134c9c8dfb7978cb0409fa60a045da301bbe88bb9f7fe\": container with ID starting with c4e3a148eafa8575594134c9c8dfb7978cb0409fa60a045da301bbe88bb9f7fe not found: ID does not exist" containerID="c4e3a148eafa8575594134c9c8dfb7978cb0409fa60a045da301bbe88bb9f7fe" Dec 09 09:08:03 crc kubenswrapper[4786]: I1209 09:08:03.377337 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4e3a148eafa8575594134c9c8dfb7978cb0409fa60a045da301bbe88bb9f7fe"} err="failed to get container status \"c4e3a148eafa8575594134c9c8dfb7978cb0409fa60a045da301bbe88bb9f7fe\": rpc error: code = NotFound desc = could not find container \"c4e3a148eafa8575594134c9c8dfb7978cb0409fa60a045da301bbe88bb9f7fe\": container with ID 
starting with c4e3a148eafa8575594134c9c8dfb7978cb0409fa60a045da301bbe88bb9f7fe not found: ID does not exist" Dec 09 09:08:03 crc kubenswrapper[4786]: I1209 09:08:03.377352 4786 scope.go:117] "RemoveContainer" containerID="2f570d5e84493a0401b0d7c3f3ce99a42a45a16fc366e187e7632a5018149cdc" Dec 09 09:08:03 crc kubenswrapper[4786]: E1209 09:08:03.377634 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f570d5e84493a0401b0d7c3f3ce99a42a45a16fc366e187e7632a5018149cdc\": container with ID starting with 2f570d5e84493a0401b0d7c3f3ce99a42a45a16fc366e187e7632a5018149cdc not found: ID does not exist" containerID="2f570d5e84493a0401b0d7c3f3ce99a42a45a16fc366e187e7632a5018149cdc" Dec 09 09:08:03 crc kubenswrapper[4786]: I1209 09:08:03.377666 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f570d5e84493a0401b0d7c3f3ce99a42a45a16fc366e187e7632a5018149cdc"} err="failed to get container status \"2f570d5e84493a0401b0d7c3f3ce99a42a45a16fc366e187e7632a5018149cdc\": rpc error: code = NotFound desc = could not find container \"2f570d5e84493a0401b0d7c3f3ce99a42a45a16fc366e187e7632a5018149cdc\": container with ID starting with 2f570d5e84493a0401b0d7c3f3ce99a42a45a16fc366e187e7632a5018149cdc not found: ID does not exist" Dec 09 09:08:03 crc kubenswrapper[4786]: I1209 09:08:03.378053 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dp8nd\" (UniqueName: \"kubernetes.io/projected/410d5f49-52f4-46bb-bb02-6bc31937cec1-kube-api-access-dp8nd\") pod \"ceilometer-0\" (UID: \"410d5f49-52f4-46bb-bb02-6bc31937cec1\") " pod="openstack/ceilometer-0" Dec 09 09:08:03 crc kubenswrapper[4786]: I1209 09:08:03.501918 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 09:08:03 crc kubenswrapper[4786]: I1209 09:08:03.847975 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 09:08:03 crc kubenswrapper[4786]: I1209 09:08:03.854306 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b9361250-e7b7-4b9e-aad9-1889992aca31","Type":"ContainerStarted","Data":"ae02065c878b064852c05d9264c5857b2706893dea8ebd4fffa2cd399246b83e"} Dec 09 09:08:03 crc kubenswrapper[4786]: I1209 09:08:03.880468 4786 generic.go:334] "Generic (PLEG): container finished" podID="0461c86e-7dbb-490a-b72b-cd19765acb95" containerID="af9f0903c3b496ebeae722c9cdd02088c81456be610a0156dec9a49210679cbe" exitCode=0 Dec 09 09:08:03 crc kubenswrapper[4786]: I1209 09:08:03.881105 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cfbc-account-create-tmsln" event={"ID":"0461c86e-7dbb-490a-b72b-cd19765acb95","Type":"ContainerDied","Data":"af9f0903c3b496ebeae722c9cdd02088c81456be610a0156dec9a49210679cbe"} Dec 09 09:08:03 crc kubenswrapper[4786]: I1209 09:08:03.886678 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"eb32aa4a-8085-424e-be11-ed2ee3186cb6","Type":"ContainerStarted","Data":"a3fedd1ca0b1b0bf3e0e976efd18c8224500af8ae193d5e2c65f7f2aba749373"} Dec 09 09:08:04 crc kubenswrapper[4786]: I1209 09:08:04.256041 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 09:08:04 crc kubenswrapper[4786]: I1209 09:08:04.451379 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-e5da-account-create-9krl4" Dec 09 09:08:04 crc kubenswrapper[4786]: I1209 09:08:04.537728 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-be56-account-create-w8ntw" Dec 09 09:08:04 crc kubenswrapper[4786]: I1209 09:08:04.601531 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbxw4\" (UniqueName: \"kubernetes.io/projected/c9d4fc2b-79a8-426f-8540-f5187065225a-kube-api-access-qbxw4\") pod \"c9d4fc2b-79a8-426f-8540-f5187065225a\" (UID: \"c9d4fc2b-79a8-426f-8540-f5187065225a\") " Dec 09 09:08:04 crc kubenswrapper[4786]: I1209 09:08:04.610347 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9d4fc2b-79a8-426f-8540-f5187065225a-kube-api-access-qbxw4" (OuterVolumeSpecName: "kube-api-access-qbxw4") pod "c9d4fc2b-79a8-426f-8540-f5187065225a" (UID: "c9d4fc2b-79a8-426f-8540-f5187065225a"). InnerVolumeSpecName "kube-api-access-qbxw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:08:04 crc kubenswrapper[4786]: I1209 09:08:04.703602 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkmwt\" (UniqueName: \"kubernetes.io/projected/7734cf63-1abe-49b9-a5f9-d67192a52bad-kube-api-access-xkmwt\") pod \"7734cf63-1abe-49b9-a5f9-d67192a52bad\" (UID: \"7734cf63-1abe-49b9-a5f9-d67192a52bad\") " Dec 09 09:08:04 crc kubenswrapper[4786]: I1209 09:08:04.704149 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbxw4\" (UniqueName: \"kubernetes.io/projected/c9d4fc2b-79a8-426f-8540-f5187065225a-kube-api-access-qbxw4\") on node \"crc\" DevicePath \"\"" Dec 09 09:08:04 crc kubenswrapper[4786]: I1209 09:08:04.708774 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7734cf63-1abe-49b9-a5f9-d67192a52bad-kube-api-access-xkmwt" (OuterVolumeSpecName: "kube-api-access-xkmwt") pod "7734cf63-1abe-49b9-a5f9-d67192a52bad" (UID: "7734cf63-1abe-49b9-a5f9-d67192a52bad"). InnerVolumeSpecName "kube-api-access-xkmwt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:08:04 crc kubenswrapper[4786]: I1209 09:08:04.806804 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkmwt\" (UniqueName: \"kubernetes.io/projected/7734cf63-1abe-49b9-a5f9-d67192a52bad-kube-api-access-xkmwt\") on node \"crc\" DevicePath \"\"" Dec 09 09:08:04 crc kubenswrapper[4786]: I1209 09:08:04.899088 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"eb32aa4a-8085-424e-be11-ed2ee3186cb6","Type":"ContainerStarted","Data":"3c69c0c16201f127e683d70cdaaa98e30d84b6f20139e11540a366acce96209d"} Dec 09 09:08:04 crc kubenswrapper[4786]: I1209 09:08:04.900451 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"410d5f49-52f4-46bb-bb02-6bc31937cec1","Type":"ContainerStarted","Data":"a63537fd69d9cec7bff2e9b7a79afafe214d5417f7bca1f4b3d4e83c42a2ff30"} Dec 09 09:08:04 crc kubenswrapper[4786]: I1209 09:08:04.901949 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-be56-account-create-w8ntw" event={"ID":"7734cf63-1abe-49b9-a5f9-d67192a52bad","Type":"ContainerDied","Data":"2cbcbc3c0e56ad5b92b9adc687106bb5cdac0506bf974cdcfcf972050752ac1b"} Dec 09 09:08:04 crc kubenswrapper[4786]: I1209 09:08:04.901962 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-be56-account-create-w8ntw" Dec 09 09:08:04 crc kubenswrapper[4786]: I1209 09:08:04.901975 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2cbcbc3c0e56ad5b92b9adc687106bb5cdac0506bf974cdcfcf972050752ac1b" Dec 09 09:08:04 crc kubenswrapper[4786]: I1209 09:08:04.905105 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b9361250-e7b7-4b9e-aad9-1889992aca31","Type":"ContainerStarted","Data":"df0c4601d0c901fae16c037643fe4c090102e1a426d4617a93f8d6801b7cb0b2"} Dec 09 09:08:04 crc kubenswrapper[4786]: I1209 09:08:04.907547 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-e5da-account-create-9krl4" Dec 09 09:08:04 crc kubenswrapper[4786]: I1209 09:08:04.907720 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-e5da-account-create-9krl4" event={"ID":"c9d4fc2b-79a8-426f-8540-f5187065225a","Type":"ContainerDied","Data":"f9cbc8e8b73b204d3bc24b384d3d34d439d1f05d8377d49261b72c3fd93d26f3"} Dec 09 09:08:04 crc kubenswrapper[4786]: I1209 09:08:04.907859 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9cbc8e8b73b204d3bc24b384d3d34d439d1f05d8377d49261b72c3fd93d26f3" Dec 09 09:08:04 crc kubenswrapper[4786]: I1209 09:08:04.947808 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.947790241 podStartE2EDuration="5.947790241s" podCreationTimestamp="2025-12-09 09:07:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:08:04.940562011 +0000 UTC m=+1450.824183237" watchObservedRunningTime="2025-12-09 09:08:04.947790241 +0000 UTC m=+1450.831411467" Dec 09 09:08:05 crc kubenswrapper[4786]: I1209 09:08:05.046108 4786 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=6.046085694 podStartE2EDuration="6.046085694s" podCreationTimestamp="2025-12-09 09:07:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:08:05.029975873 +0000 UTC m=+1450.913597099" watchObservedRunningTime="2025-12-09 09:08:05.046085694 +0000 UTC m=+1450.929706910" Dec 09 09:08:05 crc kubenswrapper[4786]: I1209 09:08:05.663564 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cfbc-account-create-tmsln" Dec 09 09:08:05 crc kubenswrapper[4786]: I1209 09:08:05.743516 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8p765\" (UniqueName: \"kubernetes.io/projected/0461c86e-7dbb-490a-b72b-cd19765acb95-kube-api-access-8p765\") pod \"0461c86e-7dbb-490a-b72b-cd19765acb95\" (UID: \"0461c86e-7dbb-490a-b72b-cd19765acb95\") " Dec 09 09:08:05 crc kubenswrapper[4786]: I1209 09:08:05.750579 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0461c86e-7dbb-490a-b72b-cd19765acb95-kube-api-access-8p765" (OuterVolumeSpecName: "kube-api-access-8p765") pod "0461c86e-7dbb-490a-b72b-cd19765acb95" (UID: "0461c86e-7dbb-490a-b72b-cd19765acb95"). InnerVolumeSpecName "kube-api-access-8p765". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:08:05 crc kubenswrapper[4786]: I1209 09:08:05.846051 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8p765\" (UniqueName: \"kubernetes.io/projected/0461c86e-7dbb-490a-b72b-cd19765acb95-kube-api-access-8p765\") on node \"crc\" DevicePath \"\"" Dec 09 09:08:05 crc kubenswrapper[4786]: I1209 09:08:05.922138 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cfbc-account-create-tmsln" event={"ID":"0461c86e-7dbb-490a-b72b-cd19765acb95","Type":"ContainerDied","Data":"cd708b0afd8130a70b569a55d6fa2e4ca7f5a146296c2d09afed041fbfee3675"} Dec 09 09:08:05 crc kubenswrapper[4786]: I1209 09:08:05.922208 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd708b0afd8130a70b569a55d6fa2e4ca7f5a146296c2d09afed041fbfee3675" Dec 09 09:08:05 crc kubenswrapper[4786]: I1209 09:08:05.923316 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cfbc-account-create-tmsln" Dec 09 09:08:05 crc kubenswrapper[4786]: I1209 09:08:05.961635 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="bb8b460d-7b22-4853-b592-ea61d203e5c1" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.185:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 09 09:08:07 crc kubenswrapper[4786]: I1209 09:08:07.301758 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Dec 09 09:08:07 crc kubenswrapper[4786]: I1209 09:08:07.953691 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"410d5f49-52f4-46bb-bb02-6bc31937cec1","Type":"ContainerStarted","Data":"3b8a46bf76888f905d5d89183c166aff7d6879f1b4a360e88c0131c7b0dfd8a3"} Dec 09 09:08:08 crc kubenswrapper[4786]: I1209 09:08:08.975676 4786 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"410d5f49-52f4-46bb-bb02-6bc31937cec1","Type":"ContainerStarted","Data":"5920aa528629e20e94e051d1110a98e6c7d4185576edd81ffd4d17d11ea139bc"} Dec 09 09:08:09 crc kubenswrapper[4786]: I1209 09:08:09.769940 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 09 09:08:09 crc kubenswrapper[4786]: I1209 09:08:09.770619 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 09 09:08:09 crc kubenswrapper[4786]: I1209 09:08:09.816158 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 09 09:08:09 crc kubenswrapper[4786]: I1209 09:08:09.854369 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 09 09:08:09 crc kubenswrapper[4786]: I1209 09:08:09.952949 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 09 09:08:09 crc kubenswrapper[4786]: I1209 09:08:09.953989 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 09 09:08:10 crc kubenswrapper[4786]: I1209 09:08:10.028739 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 09 09:08:10 crc kubenswrapper[4786]: I1209 09:08:10.039529 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"410d5f49-52f4-46bb-bb02-6bc31937cec1","Type":"ContainerStarted","Data":"cc98d258b243597012656adf9b950bd202a82020b27ce23387984bc0723b23f3"} Dec 09 09:08:10 crc kubenswrapper[4786]: I1209 09:08:10.040677 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 09 09:08:10 crc 
kubenswrapper[4786]: I1209 09:08:10.043549 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 09 09:08:10 crc kubenswrapper[4786]: I1209 09:08:10.043591 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 09 09:08:10 crc kubenswrapper[4786]: I1209 09:08:10.046163 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 09 09:08:10 crc kubenswrapper[4786]: I1209 09:08:10.212220 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-q4p7n"] Dec 09 09:08:10 crc kubenswrapper[4786]: E1209 09:08:10.213334 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0461c86e-7dbb-490a-b72b-cd19765acb95" containerName="mariadb-account-create" Dec 09 09:08:10 crc kubenswrapper[4786]: I1209 09:08:10.213357 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="0461c86e-7dbb-490a-b72b-cd19765acb95" containerName="mariadb-account-create" Dec 09 09:08:10 crc kubenswrapper[4786]: E1209 09:08:10.213387 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9d4fc2b-79a8-426f-8540-f5187065225a" containerName="mariadb-account-create" Dec 09 09:08:10 crc kubenswrapper[4786]: I1209 09:08:10.213395 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9d4fc2b-79a8-426f-8540-f5187065225a" containerName="mariadb-account-create" Dec 09 09:08:10 crc kubenswrapper[4786]: E1209 09:08:10.213407 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7734cf63-1abe-49b9-a5f9-d67192a52bad" containerName="mariadb-account-create" Dec 09 09:08:10 crc kubenswrapper[4786]: I1209 09:08:10.213416 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="7734cf63-1abe-49b9-a5f9-d67192a52bad" containerName="mariadb-account-create" Dec 09 09:08:10 crc kubenswrapper[4786]: I1209 09:08:10.213696 4786 
memory_manager.go:354] "RemoveStaleState removing state" podUID="0461c86e-7dbb-490a-b72b-cd19765acb95" containerName="mariadb-account-create" Dec 09 09:08:10 crc kubenswrapper[4786]: I1209 09:08:10.213721 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="7734cf63-1abe-49b9-a5f9-d67192a52bad" containerName="mariadb-account-create" Dec 09 09:08:10 crc kubenswrapper[4786]: I1209 09:08:10.213735 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9d4fc2b-79a8-426f-8540-f5187065225a" containerName="mariadb-account-create" Dec 09 09:08:10 crc kubenswrapper[4786]: I1209 09:08:10.214762 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-q4p7n" Dec 09 09:08:10 crc kubenswrapper[4786]: I1209 09:08:10.222137 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Dec 09 09:08:10 crc kubenswrapper[4786]: I1209 09:08:10.222274 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-dcl2b" Dec 09 09:08:10 crc kubenswrapper[4786]: I1209 09:08:10.222587 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 09 09:08:10 crc kubenswrapper[4786]: I1209 09:08:10.270315 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-q4p7n"] Dec 09 09:08:10 crc kubenswrapper[4786]: I1209 09:08:10.276837 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5beb8b6-6d1a-45cd-94cf-81754c4db040-scripts\") pod \"nova-cell0-conductor-db-sync-q4p7n\" (UID: \"f5beb8b6-6d1a-45cd-94cf-81754c4db040\") " pod="openstack/nova-cell0-conductor-db-sync-q4p7n" Dec 09 09:08:10 crc kubenswrapper[4786]: I1209 09:08:10.276905 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5beb8b6-6d1a-45cd-94cf-81754c4db040-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-q4p7n\" (UID: \"f5beb8b6-6d1a-45cd-94cf-81754c4db040\") " pod="openstack/nova-cell0-conductor-db-sync-q4p7n" Dec 09 09:08:10 crc kubenswrapper[4786]: I1209 09:08:10.276943 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zncdk\" (UniqueName: \"kubernetes.io/projected/f5beb8b6-6d1a-45cd-94cf-81754c4db040-kube-api-access-zncdk\") pod \"nova-cell0-conductor-db-sync-q4p7n\" (UID: \"f5beb8b6-6d1a-45cd-94cf-81754c4db040\") " pod="openstack/nova-cell0-conductor-db-sync-q4p7n" Dec 09 09:08:10 crc kubenswrapper[4786]: I1209 09:08:10.277262 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5beb8b6-6d1a-45cd-94cf-81754c4db040-config-data\") pod \"nova-cell0-conductor-db-sync-q4p7n\" (UID: \"f5beb8b6-6d1a-45cd-94cf-81754c4db040\") " pod="openstack/nova-cell0-conductor-db-sync-q4p7n" Dec 09 09:08:10 crc kubenswrapper[4786]: I1209 09:08:10.380498 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5beb8b6-6d1a-45cd-94cf-81754c4db040-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-q4p7n\" (UID: \"f5beb8b6-6d1a-45cd-94cf-81754c4db040\") " pod="openstack/nova-cell0-conductor-db-sync-q4p7n" Dec 09 09:08:10 crc kubenswrapper[4786]: I1209 09:08:10.380594 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zncdk\" (UniqueName: \"kubernetes.io/projected/f5beb8b6-6d1a-45cd-94cf-81754c4db040-kube-api-access-zncdk\") pod \"nova-cell0-conductor-db-sync-q4p7n\" (UID: \"f5beb8b6-6d1a-45cd-94cf-81754c4db040\") " pod="openstack/nova-cell0-conductor-db-sync-q4p7n" Dec 09 09:08:10 crc kubenswrapper[4786]: I1209 
09:08:10.380727 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5beb8b6-6d1a-45cd-94cf-81754c4db040-config-data\") pod \"nova-cell0-conductor-db-sync-q4p7n\" (UID: \"f5beb8b6-6d1a-45cd-94cf-81754c4db040\") " pod="openstack/nova-cell0-conductor-db-sync-q4p7n" Dec 09 09:08:10 crc kubenswrapper[4786]: I1209 09:08:10.380856 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5beb8b6-6d1a-45cd-94cf-81754c4db040-scripts\") pod \"nova-cell0-conductor-db-sync-q4p7n\" (UID: \"f5beb8b6-6d1a-45cd-94cf-81754c4db040\") " pod="openstack/nova-cell0-conductor-db-sync-q4p7n" Dec 09 09:08:10 crc kubenswrapper[4786]: I1209 09:08:10.388109 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5beb8b6-6d1a-45cd-94cf-81754c4db040-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-q4p7n\" (UID: \"f5beb8b6-6d1a-45cd-94cf-81754c4db040\") " pod="openstack/nova-cell0-conductor-db-sync-q4p7n" Dec 09 09:08:10 crc kubenswrapper[4786]: I1209 09:08:10.389025 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5beb8b6-6d1a-45cd-94cf-81754c4db040-config-data\") pod \"nova-cell0-conductor-db-sync-q4p7n\" (UID: \"f5beb8b6-6d1a-45cd-94cf-81754c4db040\") " pod="openstack/nova-cell0-conductor-db-sync-q4p7n" Dec 09 09:08:10 crc kubenswrapper[4786]: I1209 09:08:10.397219 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5beb8b6-6d1a-45cd-94cf-81754c4db040-scripts\") pod \"nova-cell0-conductor-db-sync-q4p7n\" (UID: \"f5beb8b6-6d1a-45cd-94cf-81754c4db040\") " pod="openstack/nova-cell0-conductor-db-sync-q4p7n" Dec 09 09:08:10 crc kubenswrapper[4786]: I1209 09:08:10.408118 4786 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zncdk\" (UniqueName: \"kubernetes.io/projected/f5beb8b6-6d1a-45cd-94cf-81754c4db040-kube-api-access-zncdk\") pod \"nova-cell0-conductor-db-sync-q4p7n\" (UID: \"f5beb8b6-6d1a-45cd-94cf-81754c4db040\") " pod="openstack/nova-cell0-conductor-db-sync-q4p7n" Dec 09 09:08:10 crc kubenswrapper[4786]: I1209 09:08:10.627810 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-q4p7n" Dec 09 09:08:11 crc kubenswrapper[4786]: I1209 09:08:11.115226 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="410d5f49-52f4-46bb-bb02-6bc31937cec1" containerName="ceilometer-central-agent" containerID="cri-o://3b8a46bf76888f905d5d89183c166aff7d6879f1b4a360e88c0131c7b0dfd8a3" gracePeriod=30 Dec 09 09:08:11 crc kubenswrapper[4786]: I1209 09:08:11.115799 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"410d5f49-52f4-46bb-bb02-6bc31937cec1","Type":"ContainerStarted","Data":"d07373c6fe7bc825ae8226dd1ed64addc005e1ea941b84d8bca2d19fd03e5141"} Dec 09 09:08:11 crc kubenswrapper[4786]: I1209 09:08:11.115831 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 09 09:08:11 crc kubenswrapper[4786]: I1209 09:08:11.116335 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 09 09:08:11 crc kubenswrapper[4786]: I1209 09:08:11.116680 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="410d5f49-52f4-46bb-bb02-6bc31937cec1" containerName="proxy-httpd" containerID="cri-o://d07373c6fe7bc825ae8226dd1ed64addc005e1ea941b84d8bca2d19fd03e5141" gracePeriod=30 Dec 09 09:08:11 crc kubenswrapper[4786]: I1209 09:08:11.116737 4786 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="410d5f49-52f4-46bb-bb02-6bc31937cec1" containerName="sg-core" containerID="cri-o://cc98d258b243597012656adf9b950bd202a82020b27ce23387984bc0723b23f3" gracePeriod=30 Dec 09 09:08:11 crc kubenswrapper[4786]: I1209 09:08:11.116785 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="410d5f49-52f4-46bb-bb02-6bc31937cec1" containerName="ceilometer-notification-agent" containerID="cri-o://5920aa528629e20e94e051d1110a98e6c7d4185576edd81ffd4d17d11ea139bc" gracePeriod=30 Dec 09 09:08:11 crc kubenswrapper[4786]: I1209 09:08:11.151359 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.214428107 podStartE2EDuration="8.151339338s" podCreationTimestamp="2025-12-09 09:08:03 +0000 UTC" firstStartedPulling="2025-12-09 09:08:04.269834573 +0000 UTC m=+1450.153455789" lastFinishedPulling="2025-12-09 09:08:10.206745804 +0000 UTC m=+1456.090367020" observedRunningTime="2025-12-09 09:08:11.148751025 +0000 UTC m=+1457.032372271" watchObservedRunningTime="2025-12-09 09:08:11.151339338 +0000 UTC m=+1457.034960574" Dec 09 09:08:11 crc kubenswrapper[4786]: I1209 09:08:11.225705 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-q4p7n"] Dec 09 09:08:11 crc kubenswrapper[4786]: W1209 09:08:11.233136 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf5beb8b6_6d1a_45cd_94cf_81754c4db040.slice/crio-e02eb34bcd202092eecc058d5da78df27d9e2a6519dfbf91e0b0ebe37c2c8e5d WatchSource:0}: Error finding container e02eb34bcd202092eecc058d5da78df27d9e2a6519dfbf91e0b0ebe37c2c8e5d: Status 404 returned error can't find the container with id e02eb34bcd202092eecc058d5da78df27d9e2a6519dfbf91e0b0ebe37c2c8e5d Dec 09 09:08:12 crc kubenswrapper[4786]: I1209 09:08:12.129950 4786 generic.go:334] "Generic (PLEG): container 
finished" podID="410d5f49-52f4-46bb-bb02-6bc31937cec1" containerID="d07373c6fe7bc825ae8226dd1ed64addc005e1ea941b84d8bca2d19fd03e5141" exitCode=0 Dec 09 09:08:12 crc kubenswrapper[4786]: I1209 09:08:12.130213 4786 generic.go:334] "Generic (PLEG): container finished" podID="410d5f49-52f4-46bb-bb02-6bc31937cec1" containerID="cc98d258b243597012656adf9b950bd202a82020b27ce23387984bc0723b23f3" exitCode=2 Dec 09 09:08:12 crc kubenswrapper[4786]: I1209 09:08:12.130257 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"410d5f49-52f4-46bb-bb02-6bc31937cec1","Type":"ContainerDied","Data":"d07373c6fe7bc825ae8226dd1ed64addc005e1ea941b84d8bca2d19fd03e5141"} Dec 09 09:08:12 crc kubenswrapper[4786]: I1209 09:08:12.130286 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"410d5f49-52f4-46bb-bb02-6bc31937cec1","Type":"ContainerDied","Data":"cc98d258b243597012656adf9b950bd202a82020b27ce23387984bc0723b23f3"} Dec 09 09:08:12 crc kubenswrapper[4786]: I1209 09:08:12.132832 4786 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 09 09:08:12 crc kubenswrapper[4786]: I1209 09:08:12.133803 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-q4p7n" event={"ID":"f5beb8b6-6d1a-45cd-94cf-81754c4db040","Type":"ContainerStarted","Data":"e02eb34bcd202092eecc058d5da78df27d9e2a6519dfbf91e0b0ebe37c2c8e5d"} Dec 09 09:08:12 crc kubenswrapper[4786]: I1209 09:08:12.578187 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 09:08:13 crc kubenswrapper[4786]: I1209 09:08:13.162263 4786 generic.go:334] "Generic (PLEG): container finished" podID="410d5f49-52f4-46bb-bb02-6bc31937cec1" containerID="5920aa528629e20e94e051d1110a98e6c7d4185576edd81ffd4d17d11ea139bc" exitCode=0 Dec 09 09:08:13 crc kubenswrapper[4786]: I1209 09:08:13.162466 4786 prober_manager.go:312] "Failed to trigger a manual 
run" probe="Readiness" Dec 09 09:08:13 crc kubenswrapper[4786]: I1209 09:08:13.162490 4786 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 09 09:08:13 crc kubenswrapper[4786]: I1209 09:08:13.162755 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"410d5f49-52f4-46bb-bb02-6bc31937cec1","Type":"ContainerDied","Data":"5920aa528629e20e94e051d1110a98e6c7d4185576edd81ffd4d17d11ea139bc"} Dec 09 09:08:13 crc kubenswrapper[4786]: I1209 09:08:13.842634 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 09:08:13 crc kubenswrapper[4786]: I1209 09:08:13.842761 4786 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 09 09:08:13 crc kubenswrapper[4786]: I1209 09:08:13.842772 4786 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 09 09:08:13 crc kubenswrapper[4786]: I1209 09:08:13.842903 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b9361250-e7b7-4b9e-aad9-1889992aca31" containerName="glance-log" containerID="cri-o://ae02065c878b064852c05d9264c5857b2706893dea8ebd4fffa2cd399246b83e" gracePeriod=30 Dec 09 09:08:13 crc kubenswrapper[4786]: I1209 09:08:13.843071 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="b9361250-e7b7-4b9e-aad9-1889992aca31" containerName="glance-httpd" containerID="cri-o://df0c4601d0c901fae16c037643fe4c090102e1a426d4617a93f8d6801b7cb0b2" gracePeriod=30 Dec 09 09:08:13 crc kubenswrapper[4786]: I1209 09:08:13.871214 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="b9361250-e7b7-4b9e-aad9-1889992aca31" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.190:9292/healthcheck\": EOF" Dec 09 09:08:13 crc kubenswrapper[4786]: I1209 
09:08:13.874036 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="b9361250-e7b7-4b9e-aad9-1889992aca31" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.190:9292/healthcheck\": EOF" Dec 09 09:08:13 crc kubenswrapper[4786]: I1209 09:08:13.877787 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="b9361250-e7b7-4b9e-aad9-1889992aca31" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.190:9292/healthcheck\": EOF" Dec 09 09:08:13 crc kubenswrapper[4786]: I1209 09:08:13.877852 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="b9361250-e7b7-4b9e-aad9-1889992aca31" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.190:9292/healthcheck\": EOF" Dec 09 09:08:14 crc kubenswrapper[4786]: I1209 09:08:14.175398 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 09 09:08:14 crc kubenswrapper[4786]: I1209 09:08:14.177321 4786 generic.go:334] "Generic (PLEG): container finished" podID="b9361250-e7b7-4b9e-aad9-1889992aca31" containerID="ae02065c878b064852c05d9264c5857b2706893dea8ebd4fffa2cd399246b83e" exitCode=143 Dec 09 09:08:14 crc kubenswrapper[4786]: I1209 09:08:14.177488 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b9361250-e7b7-4b9e-aad9-1889992aca31","Type":"ContainerDied","Data":"ae02065c878b064852c05d9264c5857b2706893dea8ebd4fffa2cd399246b83e"} Dec 09 09:08:14 crc kubenswrapper[4786]: I1209 09:08:14.177534 4786 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 09 09:08:14 crc kubenswrapper[4786]: I1209 09:08:14.178023 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" 
podUID="eb32aa4a-8085-424e-be11-ed2ee3186cb6" containerName="glance-httpd" containerID="cri-o://3c69c0c16201f127e683d70cdaaa98e30d84b6f20139e11540a366acce96209d" gracePeriod=30 Dec 09 09:08:14 crc kubenswrapper[4786]: I1209 09:08:14.177997 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="eb32aa4a-8085-424e-be11-ed2ee3186cb6" containerName="glance-log" containerID="cri-o://a3fedd1ca0b1b0bf3e0e976efd18c8224500af8ae193d5e2c65f7f2aba749373" gracePeriod=30 Dec 09 09:08:14 crc kubenswrapper[4786]: I1209 09:08:14.185889 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 09 09:08:14 crc kubenswrapper[4786]: I1209 09:08:14.206288 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="eb32aa4a-8085-424e-be11-ed2ee3186cb6" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.191:9292/healthcheck\": EOF" Dec 09 09:08:14 crc kubenswrapper[4786]: I1209 09:08:14.206683 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="eb32aa4a-8085-424e-be11-ed2ee3186cb6" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.191:9292/healthcheck\": EOF" Dec 09 09:08:15 crc kubenswrapper[4786]: I1209 09:08:15.191978 4786 generic.go:334] "Generic (PLEG): container finished" podID="410d5f49-52f4-46bb-bb02-6bc31937cec1" containerID="3b8a46bf76888f905d5d89183c166aff7d6879f1b4a360e88c0131c7b0dfd8a3" exitCode=0 Dec 09 09:08:15 crc kubenswrapper[4786]: I1209 09:08:15.198977 4786 generic.go:334] "Generic (PLEG): container finished" podID="eb32aa4a-8085-424e-be11-ed2ee3186cb6" containerID="a3fedd1ca0b1b0bf3e0e976efd18c8224500af8ae193d5e2c65f7f2aba749373" exitCode=143 Dec 09 09:08:15 crc kubenswrapper[4786]: I1209 09:08:15.200413 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 09:08:15 crc kubenswrapper[4786]: I1209 09:08:15.210380 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"410d5f49-52f4-46bb-bb02-6bc31937cec1","Type":"ContainerDied","Data":"3b8a46bf76888f905d5d89183c166aff7d6879f1b4a360e88c0131c7b0dfd8a3"} Dec 09 09:08:15 crc kubenswrapper[4786]: I1209 09:08:15.210480 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"410d5f49-52f4-46bb-bb02-6bc31937cec1","Type":"ContainerDied","Data":"a63537fd69d9cec7bff2e9b7a79afafe214d5417f7bca1f4b3d4e83c42a2ff30"} Dec 09 09:08:15 crc kubenswrapper[4786]: I1209 09:08:15.210495 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"eb32aa4a-8085-424e-be11-ed2ee3186cb6","Type":"ContainerDied","Data":"a3fedd1ca0b1b0bf3e0e976efd18c8224500af8ae193d5e2c65f7f2aba749373"} Dec 09 09:08:15 crc kubenswrapper[4786]: I1209 09:08:15.210523 4786 scope.go:117] "RemoveContainer" containerID="d07373c6fe7bc825ae8226dd1ed64addc005e1ea941b84d8bca2d19fd03e5141" Dec 09 09:08:15 crc kubenswrapper[4786]: I1209 09:08:15.268239 4786 scope.go:117] "RemoveContainer" containerID="cc98d258b243597012656adf9b950bd202a82020b27ce23387984bc0723b23f3" Dec 09 09:08:15 crc kubenswrapper[4786]: I1209 09:08:15.315762 4786 scope.go:117] "RemoveContainer" containerID="5920aa528629e20e94e051d1110a98e6c7d4185576edd81ffd4d17d11ea139bc" Dec 09 09:08:15 crc kubenswrapper[4786]: I1209 09:08:15.337262 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/410d5f49-52f4-46bb-bb02-6bc31937cec1-config-data\") pod \"410d5f49-52f4-46bb-bb02-6bc31937cec1\" (UID: \"410d5f49-52f4-46bb-bb02-6bc31937cec1\") " Dec 09 09:08:15 crc kubenswrapper[4786]: I1209 09:08:15.337323 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/410d5f49-52f4-46bb-bb02-6bc31937cec1-sg-core-conf-yaml\") pod \"410d5f49-52f4-46bb-bb02-6bc31937cec1\" (UID: \"410d5f49-52f4-46bb-bb02-6bc31937cec1\") " Dec 09 09:08:15 crc kubenswrapper[4786]: I1209 09:08:15.337584 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/410d5f49-52f4-46bb-bb02-6bc31937cec1-scripts\") pod \"410d5f49-52f4-46bb-bb02-6bc31937cec1\" (UID: \"410d5f49-52f4-46bb-bb02-6bc31937cec1\") " Dec 09 09:08:15 crc kubenswrapper[4786]: I1209 09:08:15.338393 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/410d5f49-52f4-46bb-bb02-6bc31937cec1-combined-ca-bundle\") pod \"410d5f49-52f4-46bb-bb02-6bc31937cec1\" (UID: \"410d5f49-52f4-46bb-bb02-6bc31937cec1\") " Dec 09 09:08:15 crc kubenswrapper[4786]: I1209 09:08:15.338681 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dp8nd\" (UniqueName: \"kubernetes.io/projected/410d5f49-52f4-46bb-bb02-6bc31937cec1-kube-api-access-dp8nd\") pod \"410d5f49-52f4-46bb-bb02-6bc31937cec1\" (UID: \"410d5f49-52f4-46bb-bb02-6bc31937cec1\") " Dec 09 09:08:15 crc kubenswrapper[4786]: I1209 09:08:15.338725 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/410d5f49-52f4-46bb-bb02-6bc31937cec1-run-httpd\") pod \"410d5f49-52f4-46bb-bb02-6bc31937cec1\" (UID: \"410d5f49-52f4-46bb-bb02-6bc31937cec1\") " Dec 09 09:08:15 crc kubenswrapper[4786]: I1209 09:08:15.338771 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/410d5f49-52f4-46bb-bb02-6bc31937cec1-log-httpd\") pod \"410d5f49-52f4-46bb-bb02-6bc31937cec1\" (UID: \"410d5f49-52f4-46bb-bb02-6bc31937cec1\") " Dec 09 09:08:15 crc 
kubenswrapper[4786]: I1209 09:08:15.342573 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/410d5f49-52f4-46bb-bb02-6bc31937cec1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "410d5f49-52f4-46bb-bb02-6bc31937cec1" (UID: "410d5f49-52f4-46bb-bb02-6bc31937cec1"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:08:15 crc kubenswrapper[4786]: I1209 09:08:15.346069 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/410d5f49-52f4-46bb-bb02-6bc31937cec1-scripts" (OuterVolumeSpecName: "scripts") pod "410d5f49-52f4-46bb-bb02-6bc31937cec1" (UID: "410d5f49-52f4-46bb-bb02-6bc31937cec1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:08:15 crc kubenswrapper[4786]: I1209 09:08:15.358645 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/410d5f49-52f4-46bb-bb02-6bc31937cec1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "410d5f49-52f4-46bb-bb02-6bc31937cec1" (UID: "410d5f49-52f4-46bb-bb02-6bc31937cec1"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:08:15 crc kubenswrapper[4786]: I1209 09:08:15.360372 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/410d5f49-52f4-46bb-bb02-6bc31937cec1-kube-api-access-dp8nd" (OuterVolumeSpecName: "kube-api-access-dp8nd") pod "410d5f49-52f4-46bb-bb02-6bc31937cec1" (UID: "410d5f49-52f4-46bb-bb02-6bc31937cec1"). InnerVolumeSpecName "kube-api-access-dp8nd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:08:15 crc kubenswrapper[4786]: I1209 09:08:15.386921 4786 scope.go:117] "RemoveContainer" containerID="3b8a46bf76888f905d5d89183c166aff7d6879f1b4a360e88c0131c7b0dfd8a3" Dec 09 09:08:15 crc kubenswrapper[4786]: I1209 09:08:15.407625 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/410d5f49-52f4-46bb-bb02-6bc31937cec1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "410d5f49-52f4-46bb-bb02-6bc31937cec1" (UID: "410d5f49-52f4-46bb-bb02-6bc31937cec1"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:08:15 crc kubenswrapper[4786]: I1209 09:08:15.444889 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dp8nd\" (UniqueName: \"kubernetes.io/projected/410d5f49-52f4-46bb-bb02-6bc31937cec1-kube-api-access-dp8nd\") on node \"crc\" DevicePath \"\"" Dec 09 09:08:15 crc kubenswrapper[4786]: I1209 09:08:15.444961 4786 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/410d5f49-52f4-46bb-bb02-6bc31937cec1-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 09:08:15 crc kubenswrapper[4786]: I1209 09:08:15.444979 4786 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/410d5f49-52f4-46bb-bb02-6bc31937cec1-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 09:08:15 crc kubenswrapper[4786]: I1209 09:08:15.444991 4786 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/410d5f49-52f4-46bb-bb02-6bc31937cec1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 09 09:08:15 crc kubenswrapper[4786]: I1209 09:08:15.445003 4786 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/410d5f49-52f4-46bb-bb02-6bc31937cec1-scripts\") on node \"crc\" DevicePath \"\"" 
Dec 09 09:08:15 crc kubenswrapper[4786]: I1209 09:08:15.460343 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/410d5f49-52f4-46bb-bb02-6bc31937cec1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "410d5f49-52f4-46bb-bb02-6bc31937cec1" (UID: "410d5f49-52f4-46bb-bb02-6bc31937cec1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:08:15 crc kubenswrapper[4786]: I1209 09:08:15.538606 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/410d5f49-52f4-46bb-bb02-6bc31937cec1-config-data" (OuterVolumeSpecName: "config-data") pod "410d5f49-52f4-46bb-bb02-6bc31937cec1" (UID: "410d5f49-52f4-46bb-bb02-6bc31937cec1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:08:15 crc kubenswrapper[4786]: I1209 09:08:15.548960 4786 scope.go:117] "RemoveContainer" containerID="d07373c6fe7bc825ae8226dd1ed64addc005e1ea941b84d8bca2d19fd03e5141" Dec 09 09:08:15 crc kubenswrapper[4786]: E1209 09:08:15.552552 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d07373c6fe7bc825ae8226dd1ed64addc005e1ea941b84d8bca2d19fd03e5141\": container with ID starting with d07373c6fe7bc825ae8226dd1ed64addc005e1ea941b84d8bca2d19fd03e5141 not found: ID does not exist" containerID="d07373c6fe7bc825ae8226dd1ed64addc005e1ea941b84d8bca2d19fd03e5141" Dec 09 09:08:15 crc kubenswrapper[4786]: I1209 09:08:15.552601 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d07373c6fe7bc825ae8226dd1ed64addc005e1ea941b84d8bca2d19fd03e5141"} err="failed to get container status \"d07373c6fe7bc825ae8226dd1ed64addc005e1ea941b84d8bca2d19fd03e5141\": rpc error: code = NotFound desc = could not find container \"d07373c6fe7bc825ae8226dd1ed64addc005e1ea941b84d8bca2d19fd03e5141\": container with 
ID starting with d07373c6fe7bc825ae8226dd1ed64addc005e1ea941b84d8bca2d19fd03e5141 not found: ID does not exist" Dec 09 09:08:15 crc kubenswrapper[4786]: I1209 09:08:15.552654 4786 scope.go:117] "RemoveContainer" containerID="cc98d258b243597012656adf9b950bd202a82020b27ce23387984bc0723b23f3" Dec 09 09:08:15 crc kubenswrapper[4786]: E1209 09:08:15.553323 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc98d258b243597012656adf9b950bd202a82020b27ce23387984bc0723b23f3\": container with ID starting with cc98d258b243597012656adf9b950bd202a82020b27ce23387984bc0723b23f3 not found: ID does not exist" containerID="cc98d258b243597012656adf9b950bd202a82020b27ce23387984bc0723b23f3" Dec 09 09:08:15 crc kubenswrapper[4786]: I1209 09:08:15.553400 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc98d258b243597012656adf9b950bd202a82020b27ce23387984bc0723b23f3"} err="failed to get container status \"cc98d258b243597012656adf9b950bd202a82020b27ce23387984bc0723b23f3\": rpc error: code = NotFound desc = could not find container \"cc98d258b243597012656adf9b950bd202a82020b27ce23387984bc0723b23f3\": container with ID starting with cc98d258b243597012656adf9b950bd202a82020b27ce23387984bc0723b23f3 not found: ID does not exist" Dec 09 09:08:15 crc kubenswrapper[4786]: I1209 09:08:15.553496 4786 scope.go:117] "RemoveContainer" containerID="5920aa528629e20e94e051d1110a98e6c7d4185576edd81ffd4d17d11ea139bc" Dec 09 09:08:15 crc kubenswrapper[4786]: I1209 09:08:15.553943 4786 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/410d5f49-52f4-46bb-bb02-6bc31937cec1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 09:08:15 crc kubenswrapper[4786]: I1209 09:08:15.553998 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/410d5f49-52f4-46bb-bb02-6bc31937cec1-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 09:08:15 crc kubenswrapper[4786]: E1209 09:08:15.554003 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5920aa528629e20e94e051d1110a98e6c7d4185576edd81ffd4d17d11ea139bc\": container with ID starting with 5920aa528629e20e94e051d1110a98e6c7d4185576edd81ffd4d17d11ea139bc not found: ID does not exist" containerID="5920aa528629e20e94e051d1110a98e6c7d4185576edd81ffd4d17d11ea139bc" Dec 09 09:08:15 crc kubenswrapper[4786]: I1209 09:08:15.554102 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5920aa528629e20e94e051d1110a98e6c7d4185576edd81ffd4d17d11ea139bc"} err="failed to get container status \"5920aa528629e20e94e051d1110a98e6c7d4185576edd81ffd4d17d11ea139bc\": rpc error: code = NotFound desc = could not find container \"5920aa528629e20e94e051d1110a98e6c7d4185576edd81ffd4d17d11ea139bc\": container with ID starting with 5920aa528629e20e94e051d1110a98e6c7d4185576edd81ffd4d17d11ea139bc not found: ID does not exist" Dec 09 09:08:15 crc kubenswrapper[4786]: I1209 09:08:15.554137 4786 scope.go:117] "RemoveContainer" containerID="3b8a46bf76888f905d5d89183c166aff7d6879f1b4a360e88c0131c7b0dfd8a3" Dec 09 09:08:15 crc kubenswrapper[4786]: E1209 09:08:15.554463 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b8a46bf76888f905d5d89183c166aff7d6879f1b4a360e88c0131c7b0dfd8a3\": container with ID starting with 3b8a46bf76888f905d5d89183c166aff7d6879f1b4a360e88c0131c7b0dfd8a3 not found: ID does not exist" containerID="3b8a46bf76888f905d5d89183c166aff7d6879f1b4a360e88c0131c7b0dfd8a3" Dec 09 09:08:15 crc kubenswrapper[4786]: I1209 09:08:15.554512 4786 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3b8a46bf76888f905d5d89183c166aff7d6879f1b4a360e88c0131c7b0dfd8a3"} err="failed to get container status \"3b8a46bf76888f905d5d89183c166aff7d6879f1b4a360e88c0131c7b0dfd8a3\": rpc error: code = NotFound desc = could not find container \"3b8a46bf76888f905d5d89183c166aff7d6879f1b4a360e88c0131c7b0dfd8a3\": container with ID starting with 3b8a46bf76888f905d5d89183c166aff7d6879f1b4a360e88c0131c7b0dfd8a3 not found: ID does not exist" Dec 09 09:08:16 crc kubenswrapper[4786]: I1209 09:08:16.212946 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 09:08:16 crc kubenswrapper[4786]: I1209 09:08:16.216634 4786 generic.go:334] "Generic (PLEG): container finished" podID="b9361250-e7b7-4b9e-aad9-1889992aca31" containerID="df0c4601d0c901fae16c037643fe4c090102e1a426d4617a93f8d6801b7cb0b2" exitCode=0 Dec 09 09:08:16 crc kubenswrapper[4786]: I1209 09:08:16.216847 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b9361250-e7b7-4b9e-aad9-1889992aca31","Type":"ContainerDied","Data":"df0c4601d0c901fae16c037643fe4c090102e1a426d4617a93f8d6801b7cb0b2"} Dec 09 09:08:16 crc kubenswrapper[4786]: I1209 09:08:16.266475 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 09:08:16 crc kubenswrapper[4786]: I1209 09:08:16.292907 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 09 09:08:16 crc kubenswrapper[4786]: I1209 09:08:16.310777 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 09 09:08:16 crc kubenswrapper[4786]: E1209 09:08:16.311333 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="410d5f49-52f4-46bb-bb02-6bc31937cec1" containerName="ceilometer-central-agent" Dec 09 09:08:16 crc kubenswrapper[4786]: I1209 09:08:16.311350 4786 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="410d5f49-52f4-46bb-bb02-6bc31937cec1" containerName="ceilometer-central-agent" Dec 09 09:08:16 crc kubenswrapper[4786]: E1209 09:08:16.311390 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="410d5f49-52f4-46bb-bb02-6bc31937cec1" containerName="sg-core" Dec 09 09:08:16 crc kubenswrapper[4786]: I1209 09:08:16.311398 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="410d5f49-52f4-46bb-bb02-6bc31937cec1" containerName="sg-core" Dec 09 09:08:16 crc kubenswrapper[4786]: E1209 09:08:16.311412 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="410d5f49-52f4-46bb-bb02-6bc31937cec1" containerName="proxy-httpd" Dec 09 09:08:16 crc kubenswrapper[4786]: I1209 09:08:16.311418 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="410d5f49-52f4-46bb-bb02-6bc31937cec1" containerName="proxy-httpd" Dec 09 09:08:16 crc kubenswrapper[4786]: E1209 09:08:16.311453 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="410d5f49-52f4-46bb-bb02-6bc31937cec1" containerName="ceilometer-notification-agent" Dec 09 09:08:16 crc kubenswrapper[4786]: I1209 09:08:16.311459 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="410d5f49-52f4-46bb-bb02-6bc31937cec1" containerName="ceilometer-notification-agent" Dec 09 09:08:16 crc kubenswrapper[4786]: I1209 09:08:16.311688 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="410d5f49-52f4-46bb-bb02-6bc31937cec1" containerName="ceilometer-notification-agent" Dec 09 09:08:16 crc kubenswrapper[4786]: I1209 09:08:16.311713 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="410d5f49-52f4-46bb-bb02-6bc31937cec1" containerName="sg-core" Dec 09 09:08:16 crc kubenswrapper[4786]: I1209 09:08:16.311727 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="410d5f49-52f4-46bb-bb02-6bc31937cec1" containerName="proxy-httpd" Dec 09 09:08:16 crc kubenswrapper[4786]: I1209 09:08:16.311752 4786 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="410d5f49-52f4-46bb-bb02-6bc31937cec1" containerName="ceilometer-central-agent" Dec 09 09:08:16 crc kubenswrapper[4786]: I1209 09:08:16.313906 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 09:08:16 crc kubenswrapper[4786]: I1209 09:08:16.319201 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 09 09:08:16 crc kubenswrapper[4786]: I1209 09:08:16.319529 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 09 09:08:16 crc kubenswrapper[4786]: I1209 09:08:16.345634 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 09:08:16 crc kubenswrapper[4786]: I1209 09:08:16.490901 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8cb19cf-7a6e-4af6-9066-296ed58a7e1c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e8cb19cf-7a6e-4af6-9066-296ed58a7e1c\") " pod="openstack/ceilometer-0" Dec 09 09:08:16 crc kubenswrapper[4786]: I1209 09:08:16.491010 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8cb19cf-7a6e-4af6-9066-296ed58a7e1c-config-data\") pod \"ceilometer-0\" (UID: \"e8cb19cf-7a6e-4af6-9066-296ed58a7e1c\") " pod="openstack/ceilometer-0" Dec 09 09:08:16 crc kubenswrapper[4786]: I1209 09:08:16.491052 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8cb19cf-7a6e-4af6-9066-296ed58a7e1c-log-httpd\") pod \"ceilometer-0\" (UID: \"e8cb19cf-7a6e-4af6-9066-296ed58a7e1c\") " pod="openstack/ceilometer-0" Dec 09 09:08:16 crc kubenswrapper[4786]: I1209 09:08:16.491084 4786 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e8cb19cf-7a6e-4af6-9066-296ed58a7e1c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e8cb19cf-7a6e-4af6-9066-296ed58a7e1c\") " pod="openstack/ceilometer-0" Dec 09 09:08:16 crc kubenswrapper[4786]: I1209 09:08:16.491104 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktr4d\" (UniqueName: \"kubernetes.io/projected/e8cb19cf-7a6e-4af6-9066-296ed58a7e1c-kube-api-access-ktr4d\") pod \"ceilometer-0\" (UID: \"e8cb19cf-7a6e-4af6-9066-296ed58a7e1c\") " pod="openstack/ceilometer-0" Dec 09 09:08:16 crc kubenswrapper[4786]: I1209 09:08:16.491160 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8cb19cf-7a6e-4af6-9066-296ed58a7e1c-scripts\") pod \"ceilometer-0\" (UID: \"e8cb19cf-7a6e-4af6-9066-296ed58a7e1c\") " pod="openstack/ceilometer-0" Dec 09 09:08:16 crc kubenswrapper[4786]: I1209 09:08:16.491228 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8cb19cf-7a6e-4af6-9066-296ed58a7e1c-run-httpd\") pod \"ceilometer-0\" (UID: \"e8cb19cf-7a6e-4af6-9066-296ed58a7e1c\") " pod="openstack/ceilometer-0" Dec 09 09:08:16 crc kubenswrapper[4786]: I1209 09:08:16.594674 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e8cb19cf-7a6e-4af6-9066-296ed58a7e1c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e8cb19cf-7a6e-4af6-9066-296ed58a7e1c\") " pod="openstack/ceilometer-0" Dec 09 09:08:16 crc kubenswrapper[4786]: I1209 09:08:16.594739 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktr4d\" (UniqueName: 
\"kubernetes.io/projected/e8cb19cf-7a6e-4af6-9066-296ed58a7e1c-kube-api-access-ktr4d\") pod \"ceilometer-0\" (UID: \"e8cb19cf-7a6e-4af6-9066-296ed58a7e1c\") " pod="openstack/ceilometer-0" Dec 09 09:08:16 crc kubenswrapper[4786]: I1209 09:08:16.594802 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8cb19cf-7a6e-4af6-9066-296ed58a7e1c-scripts\") pod \"ceilometer-0\" (UID: \"e8cb19cf-7a6e-4af6-9066-296ed58a7e1c\") " pod="openstack/ceilometer-0" Dec 09 09:08:16 crc kubenswrapper[4786]: I1209 09:08:16.594912 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8cb19cf-7a6e-4af6-9066-296ed58a7e1c-run-httpd\") pod \"ceilometer-0\" (UID: \"e8cb19cf-7a6e-4af6-9066-296ed58a7e1c\") " pod="openstack/ceilometer-0" Dec 09 09:08:16 crc kubenswrapper[4786]: I1209 09:08:16.594994 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8cb19cf-7a6e-4af6-9066-296ed58a7e1c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e8cb19cf-7a6e-4af6-9066-296ed58a7e1c\") " pod="openstack/ceilometer-0" Dec 09 09:08:16 crc kubenswrapper[4786]: I1209 09:08:16.595049 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8cb19cf-7a6e-4af6-9066-296ed58a7e1c-log-httpd\") pod \"ceilometer-0\" (UID: \"e8cb19cf-7a6e-4af6-9066-296ed58a7e1c\") " pod="openstack/ceilometer-0" Dec 09 09:08:16 crc kubenswrapper[4786]: I1209 09:08:16.595067 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8cb19cf-7a6e-4af6-9066-296ed58a7e1c-config-data\") pod \"ceilometer-0\" (UID: \"e8cb19cf-7a6e-4af6-9066-296ed58a7e1c\") " pod="openstack/ceilometer-0" Dec 09 09:08:16 crc kubenswrapper[4786]: I1209 09:08:16.596365 4786 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8cb19cf-7a6e-4af6-9066-296ed58a7e1c-run-httpd\") pod \"ceilometer-0\" (UID: \"e8cb19cf-7a6e-4af6-9066-296ed58a7e1c\") " pod="openstack/ceilometer-0" Dec 09 09:08:16 crc kubenswrapper[4786]: I1209 09:08:16.596823 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8cb19cf-7a6e-4af6-9066-296ed58a7e1c-log-httpd\") pod \"ceilometer-0\" (UID: \"e8cb19cf-7a6e-4af6-9066-296ed58a7e1c\") " pod="openstack/ceilometer-0" Dec 09 09:08:16 crc kubenswrapper[4786]: I1209 09:08:16.601049 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8cb19cf-7a6e-4af6-9066-296ed58a7e1c-config-data\") pod \"ceilometer-0\" (UID: \"e8cb19cf-7a6e-4af6-9066-296ed58a7e1c\") " pod="openstack/ceilometer-0" Dec 09 09:08:16 crc kubenswrapper[4786]: I1209 09:08:16.604702 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8cb19cf-7a6e-4af6-9066-296ed58a7e1c-scripts\") pod \"ceilometer-0\" (UID: \"e8cb19cf-7a6e-4af6-9066-296ed58a7e1c\") " pod="openstack/ceilometer-0" Dec 09 09:08:16 crc kubenswrapper[4786]: I1209 09:08:16.606115 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e8cb19cf-7a6e-4af6-9066-296ed58a7e1c-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"e8cb19cf-7a6e-4af6-9066-296ed58a7e1c\") " pod="openstack/ceilometer-0" Dec 09 09:08:16 crc kubenswrapper[4786]: I1209 09:08:16.613716 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8cb19cf-7a6e-4af6-9066-296ed58a7e1c-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"e8cb19cf-7a6e-4af6-9066-296ed58a7e1c\") " pod="openstack/ceilometer-0" Dec 09 
09:08:16 crc kubenswrapper[4786]: I1209 09:08:16.623165 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktr4d\" (UniqueName: \"kubernetes.io/projected/e8cb19cf-7a6e-4af6-9066-296ed58a7e1c-kube-api-access-ktr4d\") pod \"ceilometer-0\" (UID: \"e8cb19cf-7a6e-4af6-9066-296ed58a7e1c\") " pod="openstack/ceilometer-0" Dec 09 09:08:16 crc kubenswrapper[4786]: I1209 09:08:16.641884 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 09:08:16 crc kubenswrapper[4786]: I1209 09:08:16.778456 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 09 09:08:16 crc kubenswrapper[4786]: I1209 09:08:16.901100 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b9361250-e7b7-4b9e-aad9-1889992aca31-httpd-run\") pod \"b9361250-e7b7-4b9e-aad9-1889992aca31\" (UID: \"b9361250-e7b7-4b9e-aad9-1889992aca31\") " Dec 09 09:08:16 crc kubenswrapper[4786]: I1209 09:08:16.901214 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9361250-e7b7-4b9e-aad9-1889992aca31-config-data\") pod \"b9361250-e7b7-4b9e-aad9-1889992aca31\" (UID: \"b9361250-e7b7-4b9e-aad9-1889992aca31\") " Dec 09 09:08:16 crc kubenswrapper[4786]: I1209 09:08:16.901321 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9361250-e7b7-4b9e-aad9-1889992aca31-logs\") pod \"b9361250-e7b7-4b9e-aad9-1889992aca31\" (UID: \"b9361250-e7b7-4b9e-aad9-1889992aca31\") " Dec 09 09:08:16 crc kubenswrapper[4786]: I1209 09:08:16.903035 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9361250-e7b7-4b9e-aad9-1889992aca31-httpd-run" (OuterVolumeSpecName: "httpd-run") pod 
"b9361250-e7b7-4b9e-aad9-1889992aca31" (UID: "b9361250-e7b7-4b9e-aad9-1889992aca31"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:08:16 crc kubenswrapper[4786]: I1209 09:08:16.903458 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9361250-e7b7-4b9e-aad9-1889992aca31-logs" (OuterVolumeSpecName: "logs") pod "b9361250-e7b7-4b9e-aad9-1889992aca31" (UID: "b9361250-e7b7-4b9e-aad9-1889992aca31"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:08:16 crc kubenswrapper[4786]: I1209 09:08:16.904934 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjcsh\" (UniqueName: \"kubernetes.io/projected/b9361250-e7b7-4b9e-aad9-1889992aca31-kube-api-access-mjcsh\") pod \"b9361250-e7b7-4b9e-aad9-1889992aca31\" (UID: \"b9361250-e7b7-4b9e-aad9-1889992aca31\") " Dec 09 09:08:16 crc kubenswrapper[4786]: I1209 09:08:16.905014 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"b9361250-e7b7-4b9e-aad9-1889992aca31\" (UID: \"b9361250-e7b7-4b9e-aad9-1889992aca31\") " Dec 09 09:08:16 crc kubenswrapper[4786]: I1209 09:08:16.905056 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9361250-e7b7-4b9e-aad9-1889992aca31-internal-tls-certs\") pod \"b9361250-e7b7-4b9e-aad9-1889992aca31\" (UID: \"b9361250-e7b7-4b9e-aad9-1889992aca31\") " Dec 09 09:08:16 crc kubenswrapper[4786]: I1209 09:08:16.905079 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9361250-e7b7-4b9e-aad9-1889992aca31-scripts\") pod \"b9361250-e7b7-4b9e-aad9-1889992aca31\" (UID: \"b9361250-e7b7-4b9e-aad9-1889992aca31\") " Dec 09 09:08:16 crc 
kubenswrapper[4786]: I1209 09:08:16.905157 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9361250-e7b7-4b9e-aad9-1889992aca31-combined-ca-bundle\") pod \"b9361250-e7b7-4b9e-aad9-1889992aca31\" (UID: \"b9361250-e7b7-4b9e-aad9-1889992aca31\") " Dec 09 09:08:16 crc kubenswrapper[4786]: I1209 09:08:16.905868 4786 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b9361250-e7b7-4b9e-aad9-1889992aca31-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 09 09:08:16 crc kubenswrapper[4786]: I1209 09:08:16.905893 4786 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9361250-e7b7-4b9e-aad9-1889992aca31-logs\") on node \"crc\" DevicePath \"\"" Dec 09 09:08:16 crc kubenswrapper[4786]: I1209 09:08:16.912813 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9361250-e7b7-4b9e-aad9-1889992aca31-scripts" (OuterVolumeSpecName: "scripts") pod "b9361250-e7b7-4b9e-aad9-1889992aca31" (UID: "b9361250-e7b7-4b9e-aad9-1889992aca31"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:08:16 crc kubenswrapper[4786]: I1209 09:08:16.913052 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9361250-e7b7-4b9e-aad9-1889992aca31-kube-api-access-mjcsh" (OuterVolumeSpecName: "kube-api-access-mjcsh") pod "b9361250-e7b7-4b9e-aad9-1889992aca31" (UID: "b9361250-e7b7-4b9e-aad9-1889992aca31"). InnerVolumeSpecName "kube-api-access-mjcsh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:08:16 crc kubenswrapper[4786]: I1209 09:08:16.947747 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance") pod "b9361250-e7b7-4b9e-aad9-1889992aca31" (UID: "b9361250-e7b7-4b9e-aad9-1889992aca31"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 09 09:08:17 crc kubenswrapper[4786]: I1209 09:08:17.008683 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjcsh\" (UniqueName: \"kubernetes.io/projected/b9361250-e7b7-4b9e-aad9-1889992aca31-kube-api-access-mjcsh\") on node \"crc\" DevicePath \"\"" Dec 09 09:08:17 crc kubenswrapper[4786]: I1209 09:08:17.009189 4786 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Dec 09 09:08:17 crc kubenswrapper[4786]: I1209 09:08:17.009459 4786 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9361250-e7b7-4b9e-aad9-1889992aca31-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 09:08:17 crc kubenswrapper[4786]: I1209 09:08:17.021297 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9361250-e7b7-4b9e-aad9-1889992aca31-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b9361250-e7b7-4b9e-aad9-1889992aca31" (UID: "b9361250-e7b7-4b9e-aad9-1889992aca31"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:08:17 crc kubenswrapper[4786]: I1209 09:08:17.025069 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9361250-e7b7-4b9e-aad9-1889992aca31-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "b9361250-e7b7-4b9e-aad9-1889992aca31" (UID: "b9361250-e7b7-4b9e-aad9-1889992aca31"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:08:17 crc kubenswrapper[4786]: I1209 09:08:17.084394 4786 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Dec 09 09:08:17 crc kubenswrapper[4786]: I1209 09:08:17.084617 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9361250-e7b7-4b9e-aad9-1889992aca31-config-data" (OuterVolumeSpecName: "config-data") pod "b9361250-e7b7-4b9e-aad9-1889992aca31" (UID: "b9361250-e7b7-4b9e-aad9-1889992aca31"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:08:17 crc kubenswrapper[4786]: I1209 09:08:17.116118 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9361250-e7b7-4b9e-aad9-1889992aca31-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 09:08:17 crc kubenswrapper[4786]: I1209 09:08:17.116165 4786 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Dec 09 09:08:17 crc kubenswrapper[4786]: I1209 09:08:17.116179 4786 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9361250-e7b7-4b9e-aad9-1889992aca31-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 09 09:08:17 crc kubenswrapper[4786]: I1209 09:08:17.116194 4786 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9361250-e7b7-4b9e-aad9-1889992aca31-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 09:08:17 crc kubenswrapper[4786]: I1209 09:08:17.209568 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="410d5f49-52f4-46bb-bb02-6bc31937cec1" path="/var/lib/kubelet/pods/410d5f49-52f4-46bb-bb02-6bc31937cec1/volumes" Dec 09 09:08:17 crc kubenswrapper[4786]: I1209 09:08:17.258989 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"b9361250-e7b7-4b9e-aad9-1889992aca31","Type":"ContainerDied","Data":"41890a48e404839c75bf0e1b3f8d8f0b1b203f4ee12a8ff374477748bc573f7b"} Dec 09 09:08:17 crc kubenswrapper[4786]: I1209 09:08:17.259110 4786 scope.go:117] "RemoveContainer" containerID="df0c4601d0c901fae16c037643fe4c090102e1a426d4617a93f8d6801b7cb0b2" Dec 09 09:08:17 crc kubenswrapper[4786]: I1209 09:08:17.259378 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 09 09:08:17 crc kubenswrapper[4786]: I1209 09:08:17.307844 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 09:08:17 crc kubenswrapper[4786]: I1209 09:08:17.330954 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 09:08:17 crc kubenswrapper[4786]: I1209 09:08:17.342746 4786 scope.go:117] "RemoveContainer" containerID="ae02065c878b064852c05d9264c5857b2706893dea8ebd4fffa2cd399246b83e" Dec 09 09:08:17 crc kubenswrapper[4786]: I1209 09:08:17.358501 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 09:08:17 crc kubenswrapper[4786]: W1209 09:08:17.361657 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8cb19cf_7a6e_4af6_9066_296ed58a7e1c.slice/crio-b355b6181ff30369c3cddac90e3f0ab4962c3ed3254f9c56bd6480b7d933bdca WatchSource:0}: Error finding container b355b6181ff30369c3cddac90e3f0ab4962c3ed3254f9c56bd6480b7d933bdca: Status 404 returned error can't find the container with id b355b6181ff30369c3cddac90e3f0ab4962c3ed3254f9c56bd6480b7d933bdca Dec 09 09:08:17 crc kubenswrapper[4786]: I1209 09:08:17.374500 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 09:08:17 crc kubenswrapper[4786]: E1209 09:08:17.375286 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9361250-e7b7-4b9e-aad9-1889992aca31" containerName="glance-httpd" Dec 09 09:08:17 crc kubenswrapper[4786]: I1209 09:08:17.375316 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9361250-e7b7-4b9e-aad9-1889992aca31" containerName="glance-httpd" Dec 09 09:08:17 crc kubenswrapper[4786]: E1209 09:08:17.375336 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9361250-e7b7-4b9e-aad9-1889992aca31" 
containerName="glance-log" Dec 09 09:08:17 crc kubenswrapper[4786]: I1209 09:08:17.375342 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9361250-e7b7-4b9e-aad9-1889992aca31" containerName="glance-log" Dec 09 09:08:17 crc kubenswrapper[4786]: I1209 09:08:17.375657 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9361250-e7b7-4b9e-aad9-1889992aca31" containerName="glance-httpd" Dec 09 09:08:17 crc kubenswrapper[4786]: I1209 09:08:17.375681 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9361250-e7b7-4b9e-aad9-1889992aca31" containerName="glance-log" Dec 09 09:08:17 crc kubenswrapper[4786]: I1209 09:08:17.377388 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 09 09:08:17 crc kubenswrapper[4786]: I1209 09:08:17.387605 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 09 09:08:17 crc kubenswrapper[4786]: I1209 09:08:17.388666 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 09 09:08:17 crc kubenswrapper[4786]: I1209 09:08:17.420490 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 09:08:17 crc kubenswrapper[4786]: I1209 09:08:17.431113 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7759b2f-fa10-4c57-845a-773289198d2e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c7759b2f-fa10-4c57-845a-773289198d2e\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:08:17 crc kubenswrapper[4786]: I1209 09:08:17.431179 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld4ft\" (UniqueName: 
\"kubernetes.io/projected/c7759b2f-fa10-4c57-845a-773289198d2e-kube-api-access-ld4ft\") pod \"glance-default-internal-api-0\" (UID: \"c7759b2f-fa10-4c57-845a-773289198d2e\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:08:17 crc kubenswrapper[4786]: I1209 09:08:17.431341 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c7759b2f-fa10-4c57-845a-773289198d2e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c7759b2f-fa10-4c57-845a-773289198d2e\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:08:17 crc kubenswrapper[4786]: I1209 09:08:17.431452 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7759b2f-fa10-4c57-845a-773289198d2e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c7759b2f-fa10-4c57-845a-773289198d2e\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:08:17 crc kubenswrapper[4786]: I1209 09:08:17.431497 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7759b2f-fa10-4c57-845a-773289198d2e-logs\") pod \"glance-default-internal-api-0\" (UID: \"c7759b2f-fa10-4c57-845a-773289198d2e\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:08:17 crc kubenswrapper[4786]: I1209 09:08:17.431563 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"c7759b2f-fa10-4c57-845a-773289198d2e\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:08:17 crc kubenswrapper[4786]: I1209 09:08:17.431639 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c7759b2f-fa10-4c57-845a-773289198d2e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c7759b2f-fa10-4c57-845a-773289198d2e\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:08:17 crc kubenswrapper[4786]: I1209 09:08:17.431677 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7759b2f-fa10-4c57-845a-773289198d2e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c7759b2f-fa10-4c57-845a-773289198d2e\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:08:17 crc kubenswrapper[4786]: I1209 09:08:17.534982 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7759b2f-fa10-4c57-845a-773289198d2e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c7759b2f-fa10-4c57-845a-773289198d2e\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:08:17 crc kubenswrapper[4786]: I1209 09:08:17.535090 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7759b2f-fa10-4c57-845a-773289198d2e-logs\") pod \"glance-default-internal-api-0\" (UID: \"c7759b2f-fa10-4c57-845a-773289198d2e\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:08:17 crc kubenswrapper[4786]: I1209 09:08:17.535659 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7759b2f-fa10-4c57-845a-773289198d2e-logs\") pod \"glance-default-internal-api-0\" (UID: \"c7759b2f-fa10-4c57-845a-773289198d2e\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:08:17 crc kubenswrapper[4786]: I1209 09:08:17.535768 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod 
\"glance-default-internal-api-0\" (UID: \"c7759b2f-fa10-4c57-845a-773289198d2e\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:08:17 crc kubenswrapper[4786]: I1209 09:08:17.535851 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7759b2f-fa10-4c57-845a-773289198d2e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c7759b2f-fa10-4c57-845a-773289198d2e\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:08:17 crc kubenswrapper[4786]: I1209 09:08:17.535883 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7759b2f-fa10-4c57-845a-773289198d2e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c7759b2f-fa10-4c57-845a-773289198d2e\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:08:17 crc kubenswrapper[4786]: I1209 09:08:17.535938 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7759b2f-fa10-4c57-845a-773289198d2e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c7759b2f-fa10-4c57-845a-773289198d2e\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:08:17 crc kubenswrapper[4786]: I1209 09:08:17.535956 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ld4ft\" (UniqueName: \"kubernetes.io/projected/c7759b2f-fa10-4c57-845a-773289198d2e-kube-api-access-ld4ft\") pod \"glance-default-internal-api-0\" (UID: \"c7759b2f-fa10-4c57-845a-773289198d2e\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:08:17 crc kubenswrapper[4786]: I1209 09:08:17.536136 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c7759b2f-fa10-4c57-845a-773289198d2e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: 
\"c7759b2f-fa10-4c57-845a-773289198d2e\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:08:17 crc kubenswrapper[4786]: I1209 09:08:17.536459 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c7759b2f-fa10-4c57-845a-773289198d2e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c7759b2f-fa10-4c57-845a-773289198d2e\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:08:17 crc kubenswrapper[4786]: I1209 09:08:17.536732 4786 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"c7759b2f-fa10-4c57-845a-773289198d2e\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/glance-default-internal-api-0" Dec 09 09:08:17 crc kubenswrapper[4786]: I1209 09:08:17.539796 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c7759b2f-fa10-4c57-845a-773289198d2e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c7759b2f-fa10-4c57-845a-773289198d2e\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:08:17 crc kubenswrapper[4786]: I1209 09:08:17.546682 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7759b2f-fa10-4c57-845a-773289198d2e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"c7759b2f-fa10-4c57-845a-773289198d2e\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:08:17 crc kubenswrapper[4786]: I1209 09:08:17.548810 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7759b2f-fa10-4c57-845a-773289198d2e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c7759b2f-fa10-4c57-845a-773289198d2e\") " pod="openstack/glance-default-internal-api-0" 
Dec 09 09:08:17 crc kubenswrapper[4786]: I1209 09:08:17.556537 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7759b2f-fa10-4c57-845a-773289198d2e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c7759b2f-fa10-4c57-845a-773289198d2e\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:08:17 crc kubenswrapper[4786]: I1209 09:08:17.567141 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ld4ft\" (UniqueName: \"kubernetes.io/projected/c7759b2f-fa10-4c57-845a-773289198d2e-kube-api-access-ld4ft\") pod \"glance-default-internal-api-0\" (UID: \"c7759b2f-fa10-4c57-845a-773289198d2e\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:08:17 crc kubenswrapper[4786]: I1209 09:08:17.603410 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"c7759b2f-fa10-4c57-845a-773289198d2e\") " pod="openstack/glance-default-internal-api-0" Dec 09 09:08:17 crc kubenswrapper[4786]: I1209 09:08:17.746467 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 09 09:08:18 crc kubenswrapper[4786]: I1209 09:08:18.221413 4786 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","pode3e17544-6307-4a46-b381-34744a99cbb5"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort pode3e17544-6307-4a46-b381-34744a99cbb5] : Timed out while waiting for systemd to remove kubepods-besteffort-pode3e17544_6307_4a46_b381_34744a99cbb5.slice" Dec 09 09:08:18 crc kubenswrapper[4786]: I1209 09:08:18.294318 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8cb19cf-7a6e-4af6-9066-296ed58a7e1c","Type":"ContainerStarted","Data":"9ed159b69fc88b7f1cff985ea110a6e5c6609918c273987d1f171c8fb2a31af0"} Dec 09 09:08:18 crc kubenswrapper[4786]: I1209 09:08:18.294404 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8cb19cf-7a6e-4af6-9066-296ed58a7e1c","Type":"ContainerStarted","Data":"b355b6181ff30369c3cddac90e3f0ab4962c3ed3254f9c56bd6480b7d933bdca"} Dec 09 09:08:18 crc kubenswrapper[4786]: I1209 09:08:18.406911 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-6999f7dcbd-6rspb" Dec 09 09:08:19 crc kubenswrapper[4786]: I1209 09:08:19.205052 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9361250-e7b7-4b9e-aad9-1889992aca31" path="/var/lib/kubelet/pods/b9361250-e7b7-4b9e-aad9-1889992aca31/volumes" Dec 09 09:08:19 crc kubenswrapper[4786]: I1209 09:08:19.334541 4786 generic.go:334] "Generic (PLEG): container finished" podID="eb32aa4a-8085-424e-be11-ed2ee3186cb6" containerID="3c69c0c16201f127e683d70cdaaa98e30d84b6f20139e11540a366acce96209d" exitCode=0 Dec 09 09:08:19 crc kubenswrapper[4786]: I1209 09:08:19.334609 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"eb32aa4a-8085-424e-be11-ed2ee3186cb6","Type":"ContainerDied","Data":"3c69c0c16201f127e683d70cdaaa98e30d84b6f20139e11540a366acce96209d"} Dec 09 09:08:24 crc kubenswrapper[4786]: I1209 09:08:24.193022 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5cb696f46f-55kzl" Dec 09 09:08:24 crc kubenswrapper[4786]: I1209 09:08:24.275976 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6999f7dcbd-6rspb"] Dec 09 09:08:24 crc kubenswrapper[4786]: I1209 09:08:24.276309 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6999f7dcbd-6rspb" podUID="a1bf94f1-1c69-4999-bd4f-202175e2df7c" containerName="neutron-api" containerID="cri-o://731f594f63c89d6272835af9db187e38dbfe6acb7626292e604c93812eed4fcf" gracePeriod=30 Dec 09 09:08:24 crc kubenswrapper[4786]: I1209 09:08:24.276501 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6999f7dcbd-6rspb" podUID="a1bf94f1-1c69-4999-bd4f-202175e2df7c" containerName="neutron-httpd" containerID="cri-o://13c73ec9d7d33cf8aa6bb779ab661219e2eb8f59dd20c1917642a0c98ae7058b" gracePeriod=30 Dec 09 09:08:24 crc kubenswrapper[4786]: I1209 09:08:24.989234 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 09:08:24 crc kubenswrapper[4786]: I1209 09:08:24.989811 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 09:08:24 crc kubenswrapper[4786]: I1209 
09:08:24.989890 4786 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" Dec 09 09:08:24 crc kubenswrapper[4786]: I1209 09:08:24.991193 4786 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6861e8a37988a9e19bbc4cef34d4d5e8b2d44819ea8091141fe025d3c9cd2383"} pod="openshift-machine-config-operator/machine-config-daemon-86k5n" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 09:08:24 crc kubenswrapper[4786]: I1209 09:08:24.991291 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" containerID="cri-o://6861e8a37988a9e19bbc4cef34d4d5e8b2d44819ea8091141fe025d3c9cd2383" gracePeriod=600 Dec 09 09:08:25 crc kubenswrapper[4786]: I1209 09:08:25.440873 4786 generic.go:334] "Generic (PLEG): container finished" podID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerID="6861e8a37988a9e19bbc4cef34d4d5e8b2d44819ea8091141fe025d3c9cd2383" exitCode=0 Dec 09 09:08:25 crc kubenswrapper[4786]: I1209 09:08:25.440956 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" event={"ID":"60c502d4-7f9e-4d39-a197-fa70dc4a56d1","Type":"ContainerDied","Data":"6861e8a37988a9e19bbc4cef34d4d5e8b2d44819ea8091141fe025d3c9cd2383"} Dec 09 09:08:25 crc kubenswrapper[4786]: I1209 09:08:25.441087 4786 scope.go:117] "RemoveContainer" containerID="4cca8363f81421e3823d923dacff94c757cc32e0b6c3dae834013b3d2653acca" Dec 09 09:08:25 crc kubenswrapper[4786]: I1209 09:08:25.454521 4786 generic.go:334] "Generic (PLEG): container finished" podID="a1bf94f1-1c69-4999-bd4f-202175e2df7c" 
containerID="13c73ec9d7d33cf8aa6bb779ab661219e2eb8f59dd20c1917642a0c98ae7058b" exitCode=0 Dec 09 09:08:25 crc kubenswrapper[4786]: I1209 09:08:25.454596 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6999f7dcbd-6rspb" event={"ID":"a1bf94f1-1c69-4999-bd4f-202175e2df7c","Type":"ContainerDied","Data":"13c73ec9d7d33cf8aa6bb779ab661219e2eb8f59dd20c1917642a0c98ae7058b"} Dec 09 09:08:26 crc kubenswrapper[4786]: I1209 09:08:26.272137 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 09 09:08:26 crc kubenswrapper[4786]: I1209 09:08:26.360940 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"eb32aa4a-8085-424e-be11-ed2ee3186cb6\" (UID: \"eb32aa4a-8085-424e-be11-ed2ee3186cb6\") " Dec 09 09:08:26 crc kubenswrapper[4786]: I1209 09:08:26.361041 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb32aa4a-8085-424e-be11-ed2ee3186cb6-combined-ca-bundle\") pod \"eb32aa4a-8085-424e-be11-ed2ee3186cb6\" (UID: \"eb32aa4a-8085-424e-be11-ed2ee3186cb6\") " Dec 09 09:08:26 crc kubenswrapper[4786]: I1209 09:08:26.361141 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfskf\" (UniqueName: \"kubernetes.io/projected/eb32aa4a-8085-424e-be11-ed2ee3186cb6-kube-api-access-dfskf\") pod \"eb32aa4a-8085-424e-be11-ed2ee3186cb6\" (UID: \"eb32aa4a-8085-424e-be11-ed2ee3186cb6\") " Dec 09 09:08:26 crc kubenswrapper[4786]: I1209 09:08:26.361195 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb32aa4a-8085-424e-be11-ed2ee3186cb6-public-tls-certs\") pod \"eb32aa4a-8085-424e-be11-ed2ee3186cb6\" (UID: \"eb32aa4a-8085-424e-be11-ed2ee3186cb6\") " Dec 09 
09:08:26 crc kubenswrapper[4786]: I1209 09:08:26.361262 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb32aa4a-8085-424e-be11-ed2ee3186cb6-logs\") pod \"eb32aa4a-8085-424e-be11-ed2ee3186cb6\" (UID: \"eb32aa4a-8085-424e-be11-ed2ee3186cb6\") " Dec 09 09:08:26 crc kubenswrapper[4786]: I1209 09:08:26.361305 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb32aa4a-8085-424e-be11-ed2ee3186cb6-scripts\") pod \"eb32aa4a-8085-424e-be11-ed2ee3186cb6\" (UID: \"eb32aa4a-8085-424e-be11-ed2ee3186cb6\") " Dec 09 09:08:26 crc kubenswrapper[4786]: I1209 09:08:26.361433 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb32aa4a-8085-424e-be11-ed2ee3186cb6-config-data\") pod \"eb32aa4a-8085-424e-be11-ed2ee3186cb6\" (UID: \"eb32aa4a-8085-424e-be11-ed2ee3186cb6\") " Dec 09 09:08:26 crc kubenswrapper[4786]: I1209 09:08:26.361456 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eb32aa4a-8085-424e-be11-ed2ee3186cb6-httpd-run\") pod \"eb32aa4a-8085-424e-be11-ed2ee3186cb6\" (UID: \"eb32aa4a-8085-424e-be11-ed2ee3186cb6\") " Dec 09 09:08:26 crc kubenswrapper[4786]: I1209 09:08:26.362568 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb32aa4a-8085-424e-be11-ed2ee3186cb6-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "eb32aa4a-8085-424e-be11-ed2ee3186cb6" (UID: "eb32aa4a-8085-424e-be11-ed2ee3186cb6"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:08:26 crc kubenswrapper[4786]: I1209 09:08:26.364036 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb32aa4a-8085-424e-be11-ed2ee3186cb6-logs" (OuterVolumeSpecName: "logs") pod "eb32aa4a-8085-424e-be11-ed2ee3186cb6" (UID: "eb32aa4a-8085-424e-be11-ed2ee3186cb6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:08:26 crc kubenswrapper[4786]: I1209 09:08:26.371457 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance") pod "eb32aa4a-8085-424e-be11-ed2ee3186cb6" (UID: "eb32aa4a-8085-424e-be11-ed2ee3186cb6"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 09 09:08:26 crc kubenswrapper[4786]: I1209 09:08:26.376641 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb32aa4a-8085-424e-be11-ed2ee3186cb6-scripts" (OuterVolumeSpecName: "scripts") pod "eb32aa4a-8085-424e-be11-ed2ee3186cb6" (UID: "eb32aa4a-8085-424e-be11-ed2ee3186cb6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:08:26 crc kubenswrapper[4786]: I1209 09:08:26.389864 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb32aa4a-8085-424e-be11-ed2ee3186cb6-kube-api-access-dfskf" (OuterVolumeSpecName: "kube-api-access-dfskf") pod "eb32aa4a-8085-424e-be11-ed2ee3186cb6" (UID: "eb32aa4a-8085-424e-be11-ed2ee3186cb6"). InnerVolumeSpecName "kube-api-access-dfskf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:08:26 crc kubenswrapper[4786]: I1209 09:08:26.459034 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb32aa4a-8085-424e-be11-ed2ee3186cb6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb32aa4a-8085-424e-be11-ed2ee3186cb6" (UID: "eb32aa4a-8085-424e-be11-ed2ee3186cb6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:08:26 crc kubenswrapper[4786]: I1209 09:08:26.474912 4786 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/eb32aa4a-8085-424e-be11-ed2ee3186cb6-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 09 09:08:26 crc kubenswrapper[4786]: I1209 09:08:26.475569 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb32aa4a-8085-424e-be11-ed2ee3186cb6-config-data" (OuterVolumeSpecName: "config-data") pod "eb32aa4a-8085-424e-be11-ed2ee3186cb6" (UID: "eb32aa4a-8085-424e-be11-ed2ee3186cb6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:08:26 crc kubenswrapper[4786]: I1209 09:08:26.479367 4786 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Dec 09 09:08:26 crc kubenswrapper[4786]: I1209 09:08:26.479463 4786 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb32aa4a-8085-424e-be11-ed2ee3186cb6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 09:08:26 crc kubenswrapper[4786]: I1209 09:08:26.479483 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfskf\" (UniqueName: \"kubernetes.io/projected/eb32aa4a-8085-424e-be11-ed2ee3186cb6-kube-api-access-dfskf\") on node \"crc\" DevicePath \"\"" Dec 09 09:08:26 crc kubenswrapper[4786]: I1209 09:08:26.479498 4786 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/eb32aa4a-8085-424e-be11-ed2ee3186cb6-logs\") on node \"crc\" DevicePath \"\"" Dec 09 09:08:26 crc kubenswrapper[4786]: I1209 09:08:26.479510 4786 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb32aa4a-8085-424e-be11-ed2ee3186cb6-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 09:08:26 crc kubenswrapper[4786]: I1209 09:08:26.516018 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb32aa4a-8085-424e-be11-ed2ee3186cb6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "eb32aa4a-8085-424e-be11-ed2ee3186cb6" (UID: "eb32aa4a-8085-424e-be11-ed2ee3186cb6"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:08:26 crc kubenswrapper[4786]: I1209 09:08:26.536142 4786 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Dec 09 09:08:26 crc kubenswrapper[4786]: I1209 09:08:26.536577 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-q4p7n" event={"ID":"f5beb8b6-6d1a-45cd-94cf-81754c4db040","Type":"ContainerStarted","Data":"d0e5af582f2527fa44cbc360beb6e1ed261a97e02adde2d5bb32a39990a0d1af"} Dec 09 09:08:26 crc kubenswrapper[4786]: I1209 09:08:26.551130 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"eb32aa4a-8085-424e-be11-ed2ee3186cb6","Type":"ContainerDied","Data":"17421eaf0abc47aed07f0d15ebcf95151af41a40b5c122c9266507dbf5e04e42"} Dec 09 09:08:26 crc kubenswrapper[4786]: I1209 09:08:26.551232 4786 scope.go:117] "RemoveContainer" containerID="3c69c0c16201f127e683d70cdaaa98e30d84b6f20139e11540a366acce96209d" Dec 09 09:08:26 crc kubenswrapper[4786]: I1209 09:08:26.551651 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 09 09:08:26 crc kubenswrapper[4786]: I1209 09:08:26.559097 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 09 09:08:26 crc kubenswrapper[4786]: I1209 09:08:26.565871 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" event={"ID":"60c502d4-7f9e-4d39-a197-fa70dc4a56d1","Type":"ContainerStarted","Data":"b616eba668b7cb1fd5bcff5f3a2dbd4ee63df5f21169255fbb63b6a628b5d197"} Dec 09 09:08:26 crc kubenswrapper[4786]: I1209 09:08:26.576990 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8cb19cf-7a6e-4af6-9066-296ed58a7e1c","Type":"ContainerStarted","Data":"b59821dc7253de6fc73974d2d70e1f7def808ce609e0d6f6e2db48bc977c73b1"} Dec 09 09:08:26 crc kubenswrapper[4786]: I1209 09:08:26.594373 4786 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/eb32aa4a-8085-424e-be11-ed2ee3186cb6-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 09 09:08:26 crc kubenswrapper[4786]: I1209 09:08:26.594406 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb32aa4a-8085-424e-be11-ed2ee3186cb6-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 09:08:26 crc kubenswrapper[4786]: I1209 09:08:26.594440 4786 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Dec 09 09:08:26 crc kubenswrapper[4786]: I1209 09:08:26.602471 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-q4p7n" podStartSLOduration=2.071438983 podStartE2EDuration="16.602437873s" podCreationTimestamp="2025-12-09 09:08:10 +0000 UTC" firstStartedPulling="2025-12-09 
09:08:11.24517467 +0000 UTC m=+1457.128795896" lastFinishedPulling="2025-12-09 09:08:25.77617356 +0000 UTC m=+1471.659794786" observedRunningTime="2025-12-09 09:08:26.557301652 +0000 UTC m=+1472.440922878" watchObservedRunningTime="2025-12-09 09:08:26.602437873 +0000 UTC m=+1472.486059099" Dec 09 09:08:26 crc kubenswrapper[4786]: I1209 09:08:26.678458 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 09:08:26 crc kubenswrapper[4786]: I1209 09:08:26.691838 4786 scope.go:117] "RemoveContainer" containerID="a3fedd1ca0b1b0bf3e0e976efd18c8224500af8ae193d5e2c65f7f2aba749373" Dec 09 09:08:26 crc kubenswrapper[4786]: I1209 09:08:26.706177 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 09:08:26 crc kubenswrapper[4786]: I1209 09:08:26.722854 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 09:08:26 crc kubenswrapper[4786]: E1209 09:08:26.724193 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb32aa4a-8085-424e-be11-ed2ee3186cb6" containerName="glance-httpd" Dec 09 09:08:26 crc kubenswrapper[4786]: I1209 09:08:26.724288 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb32aa4a-8085-424e-be11-ed2ee3186cb6" containerName="glance-httpd" Dec 09 09:08:26 crc kubenswrapper[4786]: E1209 09:08:26.724321 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb32aa4a-8085-424e-be11-ed2ee3186cb6" containerName="glance-log" Dec 09 09:08:26 crc kubenswrapper[4786]: I1209 09:08:26.724330 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb32aa4a-8085-424e-be11-ed2ee3186cb6" containerName="glance-log" Dec 09 09:08:26 crc kubenswrapper[4786]: I1209 09:08:26.724676 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb32aa4a-8085-424e-be11-ed2ee3186cb6" containerName="glance-httpd" Dec 09 09:08:26 crc kubenswrapper[4786]: I1209 
09:08:26.724703 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb32aa4a-8085-424e-be11-ed2ee3186cb6" containerName="glance-log" Dec 09 09:08:26 crc kubenswrapper[4786]: I1209 09:08:26.727710 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 09 09:08:26 crc kubenswrapper[4786]: I1209 09:08:26.733585 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 09 09:08:26 crc kubenswrapper[4786]: I1209 09:08:26.733584 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 09 09:08:26 crc kubenswrapper[4786]: I1209 09:08:26.765719 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 09:08:26 crc kubenswrapper[4786]: I1209 09:08:26.903956 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c137f18e-fc1e-42ac-a96b-c990c55664f7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c137f18e-fc1e-42ac-a96b-c990c55664f7\") " pod="openstack/glance-default-external-api-0" Dec 09 09:08:26 crc kubenswrapper[4786]: I1209 09:08:26.904616 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c137f18e-fc1e-42ac-a96b-c990c55664f7-scripts\") pod \"glance-default-external-api-0\" (UID: \"c137f18e-fc1e-42ac-a96b-c990c55664f7\") " pod="openstack/glance-default-external-api-0" Dec 09 09:08:26 crc kubenswrapper[4786]: I1209 09:08:26.904721 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: 
\"c137f18e-fc1e-42ac-a96b-c990c55664f7\") " pod="openstack/glance-default-external-api-0" Dec 09 09:08:26 crc kubenswrapper[4786]: I1209 09:08:26.905026 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c137f18e-fc1e-42ac-a96b-c990c55664f7-config-data\") pod \"glance-default-external-api-0\" (UID: \"c137f18e-fc1e-42ac-a96b-c990c55664f7\") " pod="openstack/glance-default-external-api-0" Dec 09 09:08:26 crc kubenswrapper[4786]: I1209 09:08:26.905071 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99gtt\" (UniqueName: \"kubernetes.io/projected/c137f18e-fc1e-42ac-a96b-c990c55664f7-kube-api-access-99gtt\") pod \"glance-default-external-api-0\" (UID: \"c137f18e-fc1e-42ac-a96b-c990c55664f7\") " pod="openstack/glance-default-external-api-0" Dec 09 09:08:26 crc kubenswrapper[4786]: I1209 09:08:26.905211 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c137f18e-fc1e-42ac-a96b-c990c55664f7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c137f18e-fc1e-42ac-a96b-c990c55664f7\") " pod="openstack/glance-default-external-api-0" Dec 09 09:08:26 crc kubenswrapper[4786]: I1209 09:08:26.905259 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c137f18e-fc1e-42ac-a96b-c990c55664f7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c137f18e-fc1e-42ac-a96b-c990c55664f7\") " pod="openstack/glance-default-external-api-0" Dec 09 09:08:26 crc kubenswrapper[4786]: I1209 09:08:26.905325 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c137f18e-fc1e-42ac-a96b-c990c55664f7-logs\") pod 
\"glance-default-external-api-0\" (UID: \"c137f18e-fc1e-42ac-a96b-c990c55664f7\") " pod="openstack/glance-default-external-api-0" Dec 09 09:08:27 crc kubenswrapper[4786]: I1209 09:08:27.008501 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c137f18e-fc1e-42ac-a96b-c990c55664f7-scripts\") pod \"glance-default-external-api-0\" (UID: \"c137f18e-fc1e-42ac-a96b-c990c55664f7\") " pod="openstack/glance-default-external-api-0" Dec 09 09:08:27 crc kubenswrapper[4786]: I1209 09:08:27.008573 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"c137f18e-fc1e-42ac-a96b-c990c55664f7\") " pod="openstack/glance-default-external-api-0" Dec 09 09:08:27 crc kubenswrapper[4786]: I1209 09:08:27.008618 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c137f18e-fc1e-42ac-a96b-c990c55664f7-config-data\") pod \"glance-default-external-api-0\" (UID: \"c137f18e-fc1e-42ac-a96b-c990c55664f7\") " pod="openstack/glance-default-external-api-0" Dec 09 09:08:27 crc kubenswrapper[4786]: I1209 09:08:27.008651 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99gtt\" (UniqueName: \"kubernetes.io/projected/c137f18e-fc1e-42ac-a96b-c990c55664f7-kube-api-access-99gtt\") pod \"glance-default-external-api-0\" (UID: \"c137f18e-fc1e-42ac-a96b-c990c55664f7\") " pod="openstack/glance-default-external-api-0" Dec 09 09:08:27 crc kubenswrapper[4786]: I1209 09:08:27.008811 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c137f18e-fc1e-42ac-a96b-c990c55664f7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c137f18e-fc1e-42ac-a96b-c990c55664f7\") " 
pod="openstack/glance-default-external-api-0" Dec 09 09:08:27 crc kubenswrapper[4786]: I1209 09:08:27.008869 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c137f18e-fc1e-42ac-a96b-c990c55664f7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c137f18e-fc1e-42ac-a96b-c990c55664f7\") " pod="openstack/glance-default-external-api-0" Dec 09 09:08:27 crc kubenswrapper[4786]: I1209 09:08:27.008977 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c137f18e-fc1e-42ac-a96b-c990c55664f7-logs\") pod \"glance-default-external-api-0\" (UID: \"c137f18e-fc1e-42ac-a96b-c990c55664f7\") " pod="openstack/glance-default-external-api-0" Dec 09 09:08:27 crc kubenswrapper[4786]: I1209 09:08:27.009077 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c137f18e-fc1e-42ac-a96b-c990c55664f7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c137f18e-fc1e-42ac-a96b-c990c55664f7\") " pod="openstack/glance-default-external-api-0" Dec 09 09:08:27 crc kubenswrapper[4786]: I1209 09:08:27.011185 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c137f18e-fc1e-42ac-a96b-c990c55664f7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c137f18e-fc1e-42ac-a96b-c990c55664f7\") " pod="openstack/glance-default-external-api-0" Dec 09 09:08:27 crc kubenswrapper[4786]: I1209 09:08:27.011494 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c137f18e-fc1e-42ac-a96b-c990c55664f7-logs\") pod \"glance-default-external-api-0\" (UID: \"c137f18e-fc1e-42ac-a96b-c990c55664f7\") " pod="openstack/glance-default-external-api-0" Dec 09 09:08:27 crc kubenswrapper[4786]: I1209 
09:08:27.011550 4786 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"c137f18e-fc1e-42ac-a96b-c990c55664f7\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/glance-default-external-api-0" Dec 09 09:08:27 crc kubenswrapper[4786]: I1209 09:08:27.015539 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c137f18e-fc1e-42ac-a96b-c990c55664f7-scripts\") pod \"glance-default-external-api-0\" (UID: \"c137f18e-fc1e-42ac-a96b-c990c55664f7\") " pod="openstack/glance-default-external-api-0" Dec 09 09:08:27 crc kubenswrapper[4786]: I1209 09:08:27.019799 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c137f18e-fc1e-42ac-a96b-c990c55664f7-config-data\") pod \"glance-default-external-api-0\" (UID: \"c137f18e-fc1e-42ac-a96b-c990c55664f7\") " pod="openstack/glance-default-external-api-0" Dec 09 09:08:27 crc kubenswrapper[4786]: I1209 09:08:27.031646 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c137f18e-fc1e-42ac-a96b-c990c55664f7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c137f18e-fc1e-42ac-a96b-c990c55664f7\") " pod="openstack/glance-default-external-api-0" Dec 09 09:08:27 crc kubenswrapper[4786]: I1209 09:08:27.034309 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99gtt\" (UniqueName: \"kubernetes.io/projected/c137f18e-fc1e-42ac-a96b-c990c55664f7-kube-api-access-99gtt\") pod \"glance-default-external-api-0\" (UID: \"c137f18e-fc1e-42ac-a96b-c990c55664f7\") " pod="openstack/glance-default-external-api-0" Dec 09 09:08:27 crc kubenswrapper[4786]: I1209 09:08:27.045095 4786 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-0\" (UID: \"c137f18e-fc1e-42ac-a96b-c990c55664f7\") " pod="openstack/glance-default-external-api-0" Dec 09 09:08:27 crc kubenswrapper[4786]: I1209 09:08:27.048195 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c137f18e-fc1e-42ac-a96b-c990c55664f7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c137f18e-fc1e-42ac-a96b-c990c55664f7\") " pod="openstack/glance-default-external-api-0" Dec 09 09:08:27 crc kubenswrapper[4786]: I1209 09:08:27.128008 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 09 09:08:27 crc kubenswrapper[4786]: I1209 09:08:27.204663 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb32aa4a-8085-424e-be11-ed2ee3186cb6" path="/var/lib/kubelet/pods/eb32aa4a-8085-424e-be11-ed2ee3186cb6/volumes" Dec 09 09:08:27 crc kubenswrapper[4786]: I1209 09:08:27.589173 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c7759b2f-fa10-4c57-845a-773289198d2e","Type":"ContainerStarted","Data":"114451d30d855d405e778deefbb1d8115decebd1da59d98ad381dbf046d95b26"} Dec 09 09:08:27 crc kubenswrapper[4786]: I1209 09:08:27.589943 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c7759b2f-fa10-4c57-845a-773289198d2e","Type":"ContainerStarted","Data":"7f767643213aa4d499d78f1e6d3b5820b6b89dc40ba9985736f218c058232aa2"} Dec 09 09:08:27 crc kubenswrapper[4786]: I1209 09:08:27.597941 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8cb19cf-7a6e-4af6-9066-296ed58a7e1c","Type":"ContainerStarted","Data":"0af9fb7f4861e81c08836c1182d09f580658438fbfe713da04456fd6dc223978"} Dec 
09 09:08:27 crc kubenswrapper[4786]: I1209 09:08:27.813702 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 09 09:08:28 crc kubenswrapper[4786]: I1209 09:08:28.629989 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c7759b2f-fa10-4c57-845a-773289198d2e","Type":"ContainerStarted","Data":"1426572c31e797e2c86bfe7eeffb0c0f30d08fa206938e74142e20e65f228d8b"} Dec 09 09:08:28 crc kubenswrapper[4786]: I1209 09:08:28.652296 4786 generic.go:334] "Generic (PLEG): container finished" podID="a1bf94f1-1c69-4999-bd4f-202175e2df7c" containerID="731f594f63c89d6272835af9db187e38dbfe6acb7626292e604c93812eed4fcf" exitCode=0 Dec 09 09:08:28 crc kubenswrapper[4786]: I1209 09:08:28.652493 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6999f7dcbd-6rspb" event={"ID":"a1bf94f1-1c69-4999-bd4f-202175e2df7c","Type":"ContainerDied","Data":"731f594f63c89d6272835af9db187e38dbfe6acb7626292e604c93812eed4fcf"} Dec 09 09:08:28 crc kubenswrapper[4786]: I1209 09:08:28.670267 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=11.670240542 podStartE2EDuration="11.670240542s" podCreationTimestamp="2025-12-09 09:08:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:08:28.663220407 +0000 UTC m=+1474.546841633" watchObservedRunningTime="2025-12-09 09:08:28.670240542 +0000 UTC m=+1474.553861768" Dec 09 09:08:28 crc kubenswrapper[4786]: I1209 09:08:28.685791 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8cb19cf-7a6e-4af6-9066-296ed58a7e1c","Type":"ContainerStarted","Data":"6983231d2f312d0bd8fa9d560b5b2f4ac17c19e0798a3ba4128861853cd6d3a6"} Dec 09 09:08:28 crc kubenswrapper[4786]: I1209 09:08:28.685997 4786 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 09 09:08:28 crc kubenswrapper[4786]: I1209 09:08:28.690788 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c137f18e-fc1e-42ac-a96b-c990c55664f7","Type":"ContainerStarted","Data":"04fa0f84664286570f25569e1196e14802c293bbcc6a08e1e202d987612b356c"} Dec 09 09:08:28 crc kubenswrapper[4786]: I1209 09:08:28.728367 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.8226178499999999 podStartE2EDuration="12.728343235s" podCreationTimestamp="2025-12-09 09:08:16 +0000 UTC" firstStartedPulling="2025-12-09 09:08:17.364947806 +0000 UTC m=+1463.248569032" lastFinishedPulling="2025-12-09 09:08:28.270673191 +0000 UTC m=+1474.154294417" observedRunningTime="2025-12-09 09:08:28.711786904 +0000 UTC m=+1474.595408140" watchObservedRunningTime="2025-12-09 09:08:28.728343235 +0000 UTC m=+1474.611964461" Dec 09 09:08:28 crc kubenswrapper[4786]: I1209 09:08:28.967266 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6999f7dcbd-6rspb" Dec 09 09:08:29 crc kubenswrapper[4786]: I1209 09:08:29.078666 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1bf94f1-1c69-4999-bd4f-202175e2df7c-combined-ca-bundle\") pod \"a1bf94f1-1c69-4999-bd4f-202175e2df7c\" (UID: \"a1bf94f1-1c69-4999-bd4f-202175e2df7c\") " Dec 09 09:08:29 crc kubenswrapper[4786]: I1209 09:08:29.078810 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1bf94f1-1c69-4999-bd4f-202175e2df7c-ovndb-tls-certs\") pod \"a1bf94f1-1c69-4999-bd4f-202175e2df7c\" (UID: \"a1bf94f1-1c69-4999-bd4f-202175e2df7c\") " Dec 09 09:08:29 crc kubenswrapper[4786]: I1209 09:08:29.078894 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nzww\" (UniqueName: \"kubernetes.io/projected/a1bf94f1-1c69-4999-bd4f-202175e2df7c-kube-api-access-9nzww\") pod \"a1bf94f1-1c69-4999-bd4f-202175e2df7c\" (UID: \"a1bf94f1-1c69-4999-bd4f-202175e2df7c\") " Dec 09 09:08:29 crc kubenswrapper[4786]: I1209 09:08:29.078948 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a1bf94f1-1c69-4999-bd4f-202175e2df7c-httpd-config\") pod \"a1bf94f1-1c69-4999-bd4f-202175e2df7c\" (UID: \"a1bf94f1-1c69-4999-bd4f-202175e2df7c\") " Dec 09 09:08:29 crc kubenswrapper[4786]: I1209 09:08:29.079000 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/a1bf94f1-1c69-4999-bd4f-202175e2df7c-config\") pod \"a1bf94f1-1c69-4999-bd4f-202175e2df7c\" (UID: \"a1bf94f1-1c69-4999-bd4f-202175e2df7c\") " Dec 09 09:08:29 crc kubenswrapper[4786]: I1209 09:08:29.105953 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/a1bf94f1-1c69-4999-bd4f-202175e2df7c-kube-api-access-9nzww" (OuterVolumeSpecName: "kube-api-access-9nzww") pod "a1bf94f1-1c69-4999-bd4f-202175e2df7c" (UID: "a1bf94f1-1c69-4999-bd4f-202175e2df7c"). InnerVolumeSpecName "kube-api-access-9nzww". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:08:29 crc kubenswrapper[4786]: I1209 09:08:29.106107 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1bf94f1-1c69-4999-bd4f-202175e2df7c-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "a1bf94f1-1c69-4999-bd4f-202175e2df7c" (UID: "a1bf94f1-1c69-4999-bd4f-202175e2df7c"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:08:29 crc kubenswrapper[4786]: I1209 09:08:29.169330 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1bf94f1-1c69-4999-bd4f-202175e2df7c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a1bf94f1-1c69-4999-bd4f-202175e2df7c" (UID: "a1bf94f1-1c69-4999-bd4f-202175e2df7c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:08:29 crc kubenswrapper[4786]: I1209 09:08:29.181775 4786 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a1bf94f1-1c69-4999-bd4f-202175e2df7c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 09:08:29 crc kubenswrapper[4786]: I1209 09:08:29.181820 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nzww\" (UniqueName: \"kubernetes.io/projected/a1bf94f1-1c69-4999-bd4f-202175e2df7c-kube-api-access-9nzww\") on node \"crc\" DevicePath \"\"" Dec 09 09:08:29 crc kubenswrapper[4786]: I1209 09:08:29.181835 4786 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/a1bf94f1-1c69-4999-bd4f-202175e2df7c-httpd-config\") on node \"crc\" DevicePath \"\"" Dec 09 09:08:29 crc kubenswrapper[4786]: I1209 09:08:29.199589 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1bf94f1-1c69-4999-bd4f-202175e2df7c-config" (OuterVolumeSpecName: "config") pod "a1bf94f1-1c69-4999-bd4f-202175e2df7c" (UID: "a1bf94f1-1c69-4999-bd4f-202175e2df7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:08:29 crc kubenswrapper[4786]: I1209 09:08:29.210634 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1bf94f1-1c69-4999-bd4f-202175e2df7c-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "a1bf94f1-1c69-4999-bd4f-202175e2df7c" (UID: "a1bf94f1-1c69-4999-bd4f-202175e2df7c"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:08:29 crc kubenswrapper[4786]: I1209 09:08:29.286414 4786 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a1bf94f1-1c69-4999-bd4f-202175e2df7c-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 09 09:08:29 crc kubenswrapper[4786]: I1209 09:08:29.286954 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/a1bf94f1-1c69-4999-bd4f-202175e2df7c-config\") on node \"crc\" DevicePath \"\"" Dec 09 09:08:29 crc kubenswrapper[4786]: I1209 09:08:29.707218 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6999f7dcbd-6rspb" Dec 09 09:08:29 crc kubenswrapper[4786]: I1209 09:08:29.707201 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6999f7dcbd-6rspb" event={"ID":"a1bf94f1-1c69-4999-bd4f-202175e2df7c","Type":"ContainerDied","Data":"26706c9750c31b72d5e33ecbc4c034eefab9794c34ae915f8a1144b7bf3783eb"} Dec 09 09:08:29 crc kubenswrapper[4786]: I1209 09:08:29.707400 4786 scope.go:117] "RemoveContainer" containerID="13c73ec9d7d33cf8aa6bb779ab661219e2eb8f59dd20c1917642a0c98ae7058b" Dec 09 09:08:29 crc kubenswrapper[4786]: I1209 09:08:29.709380 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c137f18e-fc1e-42ac-a96b-c990c55664f7","Type":"ContainerStarted","Data":"6cdadd36eb0dac2aa9082601fa7954b6fa5fc51c41a65a09fb1248473fc42277"} Dec 09 09:08:29 crc kubenswrapper[4786]: I1209 09:08:29.709526 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c137f18e-fc1e-42ac-a96b-c990c55664f7","Type":"ContainerStarted","Data":"20a754fc921e92ee0586a9ca7be3f59b9aa9af5a1b6f420eb209c96b96d7c6cb"} Dec 09 09:08:29 crc kubenswrapper[4786]: I1209 09:08:29.728165 4786 scope.go:117] "RemoveContainer" 
containerID="731f594f63c89d6272835af9db187e38dbfe6acb7626292e604c93812eed4fcf" Dec 09 09:08:29 crc kubenswrapper[4786]: I1209 09:08:29.743365 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.743323699 podStartE2EDuration="3.743323699s" podCreationTimestamp="2025-12-09 09:08:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:08:29.728985453 +0000 UTC m=+1475.612606679" watchObservedRunningTime="2025-12-09 09:08:29.743323699 +0000 UTC m=+1475.626944925" Dec 09 09:08:29 crc kubenswrapper[4786]: I1209 09:08:29.762886 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6999f7dcbd-6rspb"] Dec 09 09:08:29 crc kubenswrapper[4786]: I1209 09:08:29.775259 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6999f7dcbd-6rspb"] Dec 09 09:08:31 crc kubenswrapper[4786]: I1209 09:08:31.202128 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1bf94f1-1c69-4999-bd4f-202175e2df7c" path="/var/lib/kubelet/pods/a1bf94f1-1c69-4999-bd4f-202175e2df7c/volumes" Dec 09 09:08:36 crc kubenswrapper[4786]: I1209 09:08:36.392075 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Dec 09 09:08:36 crc kubenswrapper[4786]: I1209 09:08:36.393227 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="8d930edc-ed97-418e-a47a-60f38b734a50" containerName="watcher-decision-engine" containerID="cri-o://72bf17d8a5e951a264fb49cd4989b4c17935f80254bfac85b2b6f5cfd404a42c" gracePeriod=30 Dec 09 09:08:37 crc kubenswrapper[4786]: I1209 09:08:37.130589 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 09 09:08:37 crc kubenswrapper[4786]: I1209 09:08:37.130648 
4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 09 09:08:37 crc kubenswrapper[4786]: I1209 09:08:37.170603 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 09 09:08:37 crc kubenswrapper[4786]: I1209 09:08:37.173873 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 09 09:08:37 crc kubenswrapper[4786]: I1209 09:08:37.747640 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 09 09:08:37 crc kubenswrapper[4786]: I1209 09:08:37.747986 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 09 09:08:37 crc kubenswrapper[4786]: I1209 09:08:37.787404 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 09 09:08:37 crc kubenswrapper[4786]: I1209 09:08:37.797527 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 09 09:08:37 crc kubenswrapper[4786]: I1209 09:08:37.828248 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 09 09:08:37 crc kubenswrapper[4786]: I1209 09:08:37.828548 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 09 09:08:37 crc kubenswrapper[4786]: I1209 09:08:37.828766 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 09 09:08:37 crc kubenswrapper[4786]: I1209 09:08:37.828849 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 09 09:08:38 crc kubenswrapper[4786]: I1209 09:08:38.438217 
4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 09:08:38 crc kubenswrapper[4786]: I1209 09:08:38.438650 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e8cb19cf-7a6e-4af6-9066-296ed58a7e1c" containerName="ceilometer-central-agent" containerID="cri-o://9ed159b69fc88b7f1cff985ea110a6e5c6609918c273987d1f171c8fb2a31af0" gracePeriod=30 Dec 09 09:08:38 crc kubenswrapper[4786]: I1209 09:08:38.438833 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e8cb19cf-7a6e-4af6-9066-296ed58a7e1c" containerName="proxy-httpd" containerID="cri-o://6983231d2f312d0bd8fa9d560b5b2f4ac17c19e0798a3ba4128861853cd6d3a6" gracePeriod=30 Dec 09 09:08:38 crc kubenswrapper[4786]: I1209 09:08:38.438936 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e8cb19cf-7a6e-4af6-9066-296ed58a7e1c" containerName="ceilometer-notification-agent" containerID="cri-o://b59821dc7253de6fc73974d2d70e1f7def808ce609e0d6f6e2db48bc977c73b1" gracePeriod=30 Dec 09 09:08:38 crc kubenswrapper[4786]: I1209 09:08:38.439147 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="e8cb19cf-7a6e-4af6-9066-296ed58a7e1c" containerName="sg-core" containerID="cri-o://0af9fb7f4861e81c08836c1182d09f580658438fbfe713da04456fd6dc223978" gracePeriod=30 Dec 09 09:08:38 crc kubenswrapper[4786]: I1209 09:08:38.451110 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="e8cb19cf-7a6e-4af6-9066-296ed58a7e1c" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.197:3000/\": EOF" Dec 09 09:08:38 crc kubenswrapper[4786]: I1209 09:08:38.858334 4786 generic.go:334] "Generic (PLEG): container finished" podID="e8cb19cf-7a6e-4af6-9066-296ed58a7e1c" 
containerID="6983231d2f312d0bd8fa9d560b5b2f4ac17c19e0798a3ba4128861853cd6d3a6" exitCode=0 Dec 09 09:08:38 crc kubenswrapper[4786]: I1209 09:08:38.858939 4786 generic.go:334] "Generic (PLEG): container finished" podID="e8cb19cf-7a6e-4af6-9066-296ed58a7e1c" containerID="0af9fb7f4861e81c08836c1182d09f580658438fbfe713da04456fd6dc223978" exitCode=2 Dec 09 09:08:38 crc kubenswrapper[4786]: I1209 09:08:38.858522 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8cb19cf-7a6e-4af6-9066-296ed58a7e1c","Type":"ContainerDied","Data":"6983231d2f312d0bd8fa9d560b5b2f4ac17c19e0798a3ba4128861853cd6d3a6"} Dec 09 09:08:38 crc kubenswrapper[4786]: I1209 09:08:38.859738 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8cb19cf-7a6e-4af6-9066-296ed58a7e1c","Type":"ContainerDied","Data":"0af9fb7f4861e81c08836c1182d09f580658438fbfe713da04456fd6dc223978"} Dec 09 09:08:39 crc kubenswrapper[4786]: I1209 09:08:39.877941 4786 generic.go:334] "Generic (PLEG): container finished" podID="e8cb19cf-7a6e-4af6-9066-296ed58a7e1c" containerID="9ed159b69fc88b7f1cff985ea110a6e5c6609918c273987d1f171c8fb2a31af0" exitCode=0 Dec 09 09:08:39 crc kubenswrapper[4786]: I1209 09:08:39.878045 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8cb19cf-7a6e-4af6-9066-296ed58a7e1c","Type":"ContainerDied","Data":"9ed159b69fc88b7f1cff985ea110a6e5c6609918c273987d1f171c8fb2a31af0"} Dec 09 09:08:39 crc kubenswrapper[4786]: I1209 09:08:39.878722 4786 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 09 09:08:39 crc kubenswrapper[4786]: I1209 09:08:39.878740 4786 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 09 09:08:39 crc kubenswrapper[4786]: I1209 09:08:39.878734 4786 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 09 09:08:39 crc kubenswrapper[4786]: I1209 09:08:39.878764 4786 
prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 09 09:08:39 crc kubenswrapper[4786]: I1209 09:08:39.939517 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 09 09:08:39 crc kubenswrapper[4786]: I1209 09:08:39.996147 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 09 09:08:40 crc kubenswrapper[4786]: I1209 09:08:40.242125 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 09 09:08:40 crc kubenswrapper[4786]: I1209 09:08:40.243058 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 09 09:08:42 crc kubenswrapper[4786]: I1209 09:08:42.925695 4786 generic.go:334] "Generic (PLEG): container finished" podID="e8cb19cf-7a6e-4af6-9066-296ed58a7e1c" containerID="b59821dc7253de6fc73974d2d70e1f7def808ce609e0d6f6e2db48bc977c73b1" exitCode=0 Dec 09 09:08:42 crc kubenswrapper[4786]: I1209 09:08:42.925755 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8cb19cf-7a6e-4af6-9066-296ed58a7e1c","Type":"ContainerDied","Data":"b59821dc7253de6fc73974d2d70e1f7def808ce609e0d6f6e2db48bc977c73b1"} Dec 09 09:08:42 crc kubenswrapper[4786]: I1209 09:08:42.936685 4786 generic.go:334] "Generic (PLEG): container finished" podID="8d930edc-ed97-418e-a47a-60f38b734a50" containerID="72bf17d8a5e951a264fb49cd4989b4c17935f80254bfac85b2b6f5cfd404a42c" exitCode=0 Dec 09 09:08:42 crc kubenswrapper[4786]: I1209 09:08:42.936753 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"8d930edc-ed97-418e-a47a-60f38b734a50","Type":"ContainerDied","Data":"72bf17d8a5e951a264fb49cd4989b4c17935f80254bfac85b2b6f5cfd404a42c"} Dec 09 09:08:42 crc kubenswrapper[4786]: I1209 09:08:42.936821 4786 
scope.go:117] "RemoveContainer" containerID="a560b8ce4c6949793bd95b3c67bc632fa0c176608dc24c790c22d5ee77043340" Dec 09 09:08:43 crc kubenswrapper[4786]: I1209 09:08:43.075488 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Dec 09 09:08:43 crc kubenswrapper[4786]: I1209 09:08:43.206493 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8d930edc-ed97-418e-a47a-60f38b734a50-custom-prometheus-ca\") pod \"8d930edc-ed97-418e-a47a-60f38b734a50\" (UID: \"8d930edc-ed97-418e-a47a-60f38b734a50\") " Dec 09 09:08:43 crc kubenswrapper[4786]: I1209 09:08:43.206661 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d930edc-ed97-418e-a47a-60f38b734a50-logs\") pod \"8d930edc-ed97-418e-a47a-60f38b734a50\" (UID: \"8d930edc-ed97-418e-a47a-60f38b734a50\") " Dec 09 09:08:43 crc kubenswrapper[4786]: I1209 09:08:43.206955 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d930edc-ed97-418e-a47a-60f38b734a50-combined-ca-bundle\") pod \"8d930edc-ed97-418e-a47a-60f38b734a50\" (UID: \"8d930edc-ed97-418e-a47a-60f38b734a50\") " Dec 09 09:08:43 crc kubenswrapper[4786]: I1209 09:08:43.207065 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d930edc-ed97-418e-a47a-60f38b734a50-config-data\") pod \"8d930edc-ed97-418e-a47a-60f38b734a50\" (UID: \"8d930edc-ed97-418e-a47a-60f38b734a50\") " Dec 09 09:08:43 crc kubenswrapper[4786]: I1209 09:08:43.207298 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdrvl\" (UniqueName: \"kubernetes.io/projected/8d930edc-ed97-418e-a47a-60f38b734a50-kube-api-access-pdrvl\") pod 
\"8d930edc-ed97-418e-a47a-60f38b734a50\" (UID: \"8d930edc-ed97-418e-a47a-60f38b734a50\") " Dec 09 09:08:43 crc kubenswrapper[4786]: I1209 09:08:43.209330 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d930edc-ed97-418e-a47a-60f38b734a50-logs" (OuterVolumeSpecName: "logs") pod "8d930edc-ed97-418e-a47a-60f38b734a50" (UID: "8d930edc-ed97-418e-a47a-60f38b734a50"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:08:43 crc kubenswrapper[4786]: I1209 09:08:43.217089 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 09:08:43 crc kubenswrapper[4786]: I1209 09:08:43.217959 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d930edc-ed97-418e-a47a-60f38b734a50-kube-api-access-pdrvl" (OuterVolumeSpecName: "kube-api-access-pdrvl") pod "8d930edc-ed97-418e-a47a-60f38b734a50" (UID: "8d930edc-ed97-418e-a47a-60f38b734a50"). InnerVolumeSpecName "kube-api-access-pdrvl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:08:43 crc kubenswrapper[4786]: I1209 09:08:43.251516 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d930edc-ed97-418e-a47a-60f38b734a50-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "8d930edc-ed97-418e-a47a-60f38b734a50" (UID: "8d930edc-ed97-418e-a47a-60f38b734a50"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:08:43 crc kubenswrapper[4786]: I1209 09:08:43.297719 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d930edc-ed97-418e-a47a-60f38b734a50-config-data" (OuterVolumeSpecName: "config-data") pod "8d930edc-ed97-418e-a47a-60f38b734a50" (UID: "8d930edc-ed97-418e-a47a-60f38b734a50"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:08:43 crc kubenswrapper[4786]: I1209 09:08:43.304648 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d930edc-ed97-418e-a47a-60f38b734a50-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d930edc-ed97-418e-a47a-60f38b734a50" (UID: "8d930edc-ed97-418e-a47a-60f38b734a50"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:08:43 crc kubenswrapper[4786]: I1209 09:08:43.312647 4786 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d930edc-ed97-418e-a47a-60f38b734a50-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 09:08:43 crc kubenswrapper[4786]: I1209 09:08:43.312709 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d930edc-ed97-418e-a47a-60f38b734a50-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 09:08:43 crc kubenswrapper[4786]: I1209 09:08:43.312723 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdrvl\" (UniqueName: \"kubernetes.io/projected/8d930edc-ed97-418e-a47a-60f38b734a50-kube-api-access-pdrvl\") on node \"crc\" DevicePath \"\"" Dec 09 09:08:43 crc kubenswrapper[4786]: I1209 09:08:43.312739 4786 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/8d930edc-ed97-418e-a47a-60f38b734a50-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Dec 09 09:08:43 crc kubenswrapper[4786]: I1209 09:08:43.312752 4786 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8d930edc-ed97-418e-a47a-60f38b734a50-logs\") on node \"crc\" DevicePath \"\"" Dec 09 09:08:43 crc kubenswrapper[4786]: I1209 09:08:43.413517 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/e8cb19cf-7a6e-4af6-9066-296ed58a7e1c-log-httpd\") pod \"e8cb19cf-7a6e-4af6-9066-296ed58a7e1c\" (UID: \"e8cb19cf-7a6e-4af6-9066-296ed58a7e1c\") " Dec 09 09:08:43 crc kubenswrapper[4786]: I1209 09:08:43.413576 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8cb19cf-7a6e-4af6-9066-296ed58a7e1c-config-data\") pod \"e8cb19cf-7a6e-4af6-9066-296ed58a7e1c\" (UID: \"e8cb19cf-7a6e-4af6-9066-296ed58a7e1c\") " Dec 09 09:08:43 crc kubenswrapper[4786]: I1209 09:08:43.413669 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8cb19cf-7a6e-4af6-9066-296ed58a7e1c-scripts\") pod \"e8cb19cf-7a6e-4af6-9066-296ed58a7e1c\" (UID: \"e8cb19cf-7a6e-4af6-9066-296ed58a7e1c\") " Dec 09 09:08:43 crc kubenswrapper[4786]: I1209 09:08:43.413830 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e8cb19cf-7a6e-4af6-9066-296ed58a7e1c-sg-core-conf-yaml\") pod \"e8cb19cf-7a6e-4af6-9066-296ed58a7e1c\" (UID: \"e8cb19cf-7a6e-4af6-9066-296ed58a7e1c\") " Dec 09 09:08:43 crc kubenswrapper[4786]: I1209 09:08:43.413971 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktr4d\" (UniqueName: \"kubernetes.io/projected/e8cb19cf-7a6e-4af6-9066-296ed58a7e1c-kube-api-access-ktr4d\") pod \"e8cb19cf-7a6e-4af6-9066-296ed58a7e1c\" (UID: \"e8cb19cf-7a6e-4af6-9066-296ed58a7e1c\") " Dec 09 09:08:43 crc kubenswrapper[4786]: I1209 09:08:43.414003 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8cb19cf-7a6e-4af6-9066-296ed58a7e1c-combined-ca-bundle\") pod \"e8cb19cf-7a6e-4af6-9066-296ed58a7e1c\" (UID: \"e8cb19cf-7a6e-4af6-9066-296ed58a7e1c\") " Dec 09 09:08:43 crc kubenswrapper[4786]: I1209 
09:08:43.414100 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8cb19cf-7a6e-4af6-9066-296ed58a7e1c-run-httpd\") pod \"e8cb19cf-7a6e-4af6-9066-296ed58a7e1c\" (UID: \"e8cb19cf-7a6e-4af6-9066-296ed58a7e1c\") " Dec 09 09:08:43 crc kubenswrapper[4786]: I1209 09:08:43.414473 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8cb19cf-7a6e-4af6-9066-296ed58a7e1c-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "e8cb19cf-7a6e-4af6-9066-296ed58a7e1c" (UID: "e8cb19cf-7a6e-4af6-9066-296ed58a7e1c"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:08:43 crc kubenswrapper[4786]: I1209 09:08:43.414822 4786 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8cb19cf-7a6e-4af6-9066-296ed58a7e1c-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 09:08:43 crc kubenswrapper[4786]: I1209 09:08:43.414921 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e8cb19cf-7a6e-4af6-9066-296ed58a7e1c-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "e8cb19cf-7a6e-4af6-9066-296ed58a7e1c" (UID: "e8cb19cf-7a6e-4af6-9066-296ed58a7e1c"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:08:43 crc kubenswrapper[4786]: I1209 09:08:43.432699 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8cb19cf-7a6e-4af6-9066-296ed58a7e1c-kube-api-access-ktr4d" (OuterVolumeSpecName: "kube-api-access-ktr4d") pod "e8cb19cf-7a6e-4af6-9066-296ed58a7e1c" (UID: "e8cb19cf-7a6e-4af6-9066-296ed58a7e1c"). InnerVolumeSpecName "kube-api-access-ktr4d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:08:43 crc kubenswrapper[4786]: I1209 09:08:43.432793 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8cb19cf-7a6e-4af6-9066-296ed58a7e1c-scripts" (OuterVolumeSpecName: "scripts") pod "e8cb19cf-7a6e-4af6-9066-296ed58a7e1c" (UID: "e8cb19cf-7a6e-4af6-9066-296ed58a7e1c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:08:43 crc kubenswrapper[4786]: I1209 09:08:43.474607 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8cb19cf-7a6e-4af6-9066-296ed58a7e1c-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "e8cb19cf-7a6e-4af6-9066-296ed58a7e1c" (UID: "e8cb19cf-7a6e-4af6-9066-296ed58a7e1c"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:08:43 crc kubenswrapper[4786]: I1209 09:08:43.517587 4786 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/e8cb19cf-7a6e-4af6-9066-296ed58a7e1c-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 09:08:43 crc kubenswrapper[4786]: I1209 09:08:43.517633 4786 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e8cb19cf-7a6e-4af6-9066-296ed58a7e1c-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 09:08:43 crc kubenswrapper[4786]: I1209 09:08:43.517643 4786 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/e8cb19cf-7a6e-4af6-9066-296ed58a7e1c-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 09 09:08:43 crc kubenswrapper[4786]: I1209 09:08:43.517654 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktr4d\" (UniqueName: \"kubernetes.io/projected/e8cb19cf-7a6e-4af6-9066-296ed58a7e1c-kube-api-access-ktr4d\") on node \"crc\" DevicePath \"\"" Dec 09 09:08:43 crc 
kubenswrapper[4786]: I1209 09:08:43.548295 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8cb19cf-7a6e-4af6-9066-296ed58a7e1c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e8cb19cf-7a6e-4af6-9066-296ed58a7e1c" (UID: "e8cb19cf-7a6e-4af6-9066-296ed58a7e1c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:08:43 crc kubenswrapper[4786]: I1209 09:08:43.570588 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8cb19cf-7a6e-4af6-9066-296ed58a7e1c-config-data" (OuterVolumeSpecName: "config-data") pod "e8cb19cf-7a6e-4af6-9066-296ed58a7e1c" (UID: "e8cb19cf-7a6e-4af6-9066-296ed58a7e1c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:08:43 crc kubenswrapper[4786]: I1209 09:08:43.619976 4786 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8cb19cf-7a6e-4af6-9066-296ed58a7e1c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 09:08:43 crc kubenswrapper[4786]: I1209 09:08:43.620043 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e8cb19cf-7a6e-4af6-9066-296ed58a7e1c-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 09:08:43 crc kubenswrapper[4786]: I1209 09:08:43.958160 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"e8cb19cf-7a6e-4af6-9066-296ed58a7e1c","Type":"ContainerDied","Data":"b355b6181ff30369c3cddac90e3f0ab4962c3ed3254f9c56bd6480b7d933bdca"} Dec 09 09:08:43 crc kubenswrapper[4786]: I1209 09:08:43.958235 4786 scope.go:117] "RemoveContainer" containerID="6983231d2f312d0bd8fa9d560b5b2f4ac17c19e0798a3ba4128861853cd6d3a6" Dec 09 09:08:43 crc kubenswrapper[4786]: I1209 09:08:43.958275 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 09:08:43 crc kubenswrapper[4786]: I1209 09:08:43.962726 4786 generic.go:334] "Generic (PLEG): container finished" podID="f5beb8b6-6d1a-45cd-94cf-81754c4db040" containerID="d0e5af582f2527fa44cbc360beb6e1ed261a97e02adde2d5bb32a39990a0d1af" exitCode=0 Dec 09 09:08:43 crc kubenswrapper[4786]: I1209 09:08:43.962812 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-q4p7n" event={"ID":"f5beb8b6-6d1a-45cd-94cf-81754c4db040","Type":"ContainerDied","Data":"d0e5af582f2527fa44cbc360beb6e1ed261a97e02adde2d5bb32a39990a0d1af"} Dec 09 09:08:43 crc kubenswrapper[4786]: I1209 09:08:43.968291 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"8d930edc-ed97-418e-a47a-60f38b734a50","Type":"ContainerDied","Data":"3ee8f6a61ebe0cca99ca095208c9fe87eddcbbb6c4f62bbd6d4df863e6252c86"} Dec 09 09:08:43 crc kubenswrapper[4786]: I1209 09:08:43.968589 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Dec 09 09:08:43 crc kubenswrapper[4786]: I1209 09:08:43.998996 4786 scope.go:117] "RemoveContainer" containerID="0af9fb7f4861e81c08836c1182d09f580658438fbfe713da04456fd6dc223978" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.018002 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.033317 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.050480 4786 scope.go:117] "RemoveContainer" containerID="b59821dc7253de6fc73974d2d70e1f7def808ce609e0d6f6e2db48bc977c73b1" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.065415 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.081078 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.094151 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 09 09:08:44 crc kubenswrapper[4786]: E1209 09:08:44.094910 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1bf94f1-1c69-4999-bd4f-202175e2df7c" containerName="neutron-api" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.094935 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1bf94f1-1c69-4999-bd4f-202175e2df7c" containerName="neutron-api" Dec 09 09:08:44 crc kubenswrapper[4786]: E1209 09:08:44.094948 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1bf94f1-1c69-4999-bd4f-202175e2df7c" containerName="neutron-httpd" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.094957 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1bf94f1-1c69-4999-bd4f-202175e2df7c" containerName="neutron-httpd" Dec 09 09:08:44 crc 
kubenswrapper[4786]: E1209 09:08:44.094972 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d930edc-ed97-418e-a47a-60f38b734a50" containerName="watcher-decision-engine" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.094983 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d930edc-ed97-418e-a47a-60f38b734a50" containerName="watcher-decision-engine" Dec 09 09:08:44 crc kubenswrapper[4786]: E1209 09:08:44.095002 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8cb19cf-7a6e-4af6-9066-296ed58a7e1c" containerName="proxy-httpd" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.095014 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8cb19cf-7a6e-4af6-9066-296ed58a7e1c" containerName="proxy-httpd" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.095006 4786 scope.go:117] "RemoveContainer" containerID="9ed159b69fc88b7f1cff985ea110a6e5c6609918c273987d1f171c8fb2a31af0" Dec 09 09:08:44 crc kubenswrapper[4786]: E1209 09:08:44.095028 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d930edc-ed97-418e-a47a-60f38b734a50" containerName="watcher-decision-engine" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.095301 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d930edc-ed97-418e-a47a-60f38b734a50" containerName="watcher-decision-engine" Dec 09 09:08:44 crc kubenswrapper[4786]: E1209 09:08:44.095349 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8cb19cf-7a6e-4af6-9066-296ed58a7e1c" containerName="ceilometer-notification-agent" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.095359 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8cb19cf-7a6e-4af6-9066-296ed58a7e1c" containerName="ceilometer-notification-agent" Dec 09 09:08:44 crc kubenswrapper[4786]: E1209 09:08:44.095412 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8cb19cf-7a6e-4af6-9066-296ed58a7e1c" containerName="sg-core" Dec 
09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.095440 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8cb19cf-7a6e-4af6-9066-296ed58a7e1c" containerName="sg-core" Dec 09 09:08:44 crc kubenswrapper[4786]: E1209 09:08:44.095470 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8cb19cf-7a6e-4af6-9066-296ed58a7e1c" containerName="ceilometer-central-agent" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.095481 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8cb19cf-7a6e-4af6-9066-296ed58a7e1c" containerName="ceilometer-central-agent" Dec 09 09:08:44 crc kubenswrapper[4786]: E1209 09:08:44.095495 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d930edc-ed97-418e-a47a-60f38b734a50" containerName="watcher-decision-engine" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.095504 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d930edc-ed97-418e-a47a-60f38b734a50" containerName="watcher-decision-engine" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.096773 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1bf94f1-1c69-4999-bd4f-202175e2df7c" containerName="neutron-httpd" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.096814 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d930edc-ed97-418e-a47a-60f38b734a50" containerName="watcher-decision-engine" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.096828 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8cb19cf-7a6e-4af6-9066-296ed58a7e1c" containerName="ceilometer-central-agent" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.096852 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d930edc-ed97-418e-a47a-60f38b734a50" containerName="watcher-decision-engine" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.096861 4786 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="e8cb19cf-7a6e-4af6-9066-296ed58a7e1c" containerName="proxy-httpd" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.096876 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d930edc-ed97-418e-a47a-60f38b734a50" containerName="watcher-decision-engine" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.096893 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8cb19cf-7a6e-4af6-9066-296ed58a7e1c" containerName="sg-core" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.096906 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8cb19cf-7a6e-4af6-9066-296ed58a7e1c" containerName="ceilometer-notification-agent" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.096925 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1bf94f1-1c69-4999-bd4f-202175e2df7c" containerName="neutron-api" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.123218 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.124074 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.124596 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.127010 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.127326 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.127566 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.138995 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.144273 4786 scope.go:117] "RemoveContainer" containerID="72bf17d8a5e951a264fb49cd4989b4c17935f80254bfac85b2b6f5cfd404a42c" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.157863 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.236036 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c99dffbb-825d-4a73-b225-a4660ef65ff9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c99dffbb-825d-4a73-b225-a4660ef65ff9\") " pod="openstack/ceilometer-0" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.236092 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61ffd12d-d64c-461b-a3c1-271d523a8de6-config-data\") pod \"watcher-decision-engine-0\" (UID: \"61ffd12d-d64c-461b-a3c1-271d523a8de6\") " pod="openstack/watcher-decision-engine-0" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.236125 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c99dffbb-825d-4a73-b225-a4660ef65ff9-run-httpd\") pod \"ceilometer-0\" (UID: \"c99dffbb-825d-4a73-b225-a4660ef65ff9\") " pod="openstack/ceilometer-0" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.236682 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c99dffbb-825d-4a73-b225-a4660ef65ff9-log-httpd\") pod \"ceilometer-0\" (UID: \"c99dffbb-825d-4a73-b225-a4660ef65ff9\") " pod="openstack/ceilometer-0" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.236813 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c99dffbb-825d-4a73-b225-a4660ef65ff9-config-data\") pod \"ceilometer-0\" (UID: \"c99dffbb-825d-4a73-b225-a4660ef65ff9\") " pod="openstack/ceilometer-0" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.236961 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61ffd12d-d64c-461b-a3c1-271d523a8de6-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"61ffd12d-d64c-461b-a3c1-271d523a8de6\") " pod="openstack/watcher-decision-engine-0" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.236995 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61ffd12d-d64c-461b-a3c1-271d523a8de6-logs\") pod \"watcher-decision-engine-0\" (UID: \"61ffd12d-d64c-461b-a3c1-271d523a8de6\") " pod="openstack/watcher-decision-engine-0" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.237013 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czvpr\" (UniqueName: 
\"kubernetes.io/projected/61ffd12d-d64c-461b-a3c1-271d523a8de6-kube-api-access-czvpr\") pod \"watcher-decision-engine-0\" (UID: \"61ffd12d-d64c-461b-a3c1-271d523a8de6\") " pod="openstack/watcher-decision-engine-0" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.237134 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c99dffbb-825d-4a73-b225-a4660ef65ff9-scripts\") pod \"ceilometer-0\" (UID: \"c99dffbb-825d-4a73-b225-a4660ef65ff9\") " pod="openstack/ceilometer-0" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.237192 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c99dffbb-825d-4a73-b225-a4660ef65ff9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c99dffbb-825d-4a73-b225-a4660ef65ff9\") " pod="openstack/ceilometer-0" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.237251 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7cqr\" (UniqueName: \"kubernetes.io/projected/c99dffbb-825d-4a73-b225-a4660ef65ff9-kube-api-access-l7cqr\") pod \"ceilometer-0\" (UID: \"c99dffbb-825d-4a73-b225-a4660ef65ff9\") " pod="openstack/ceilometer-0" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.237285 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/61ffd12d-d64c-461b-a3c1-271d523a8de6-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"61ffd12d-d64c-461b-a3c1-271d523a8de6\") " pod="openstack/watcher-decision-engine-0" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.338632 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c99dffbb-825d-4a73-b225-a4660ef65ff9-scripts\") pod 
\"ceilometer-0\" (UID: \"c99dffbb-825d-4a73-b225-a4660ef65ff9\") " pod="openstack/ceilometer-0" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.338714 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c99dffbb-825d-4a73-b225-a4660ef65ff9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c99dffbb-825d-4a73-b225-a4660ef65ff9\") " pod="openstack/ceilometer-0" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.338747 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7cqr\" (UniqueName: \"kubernetes.io/projected/c99dffbb-825d-4a73-b225-a4660ef65ff9-kube-api-access-l7cqr\") pod \"ceilometer-0\" (UID: \"c99dffbb-825d-4a73-b225-a4660ef65ff9\") " pod="openstack/ceilometer-0" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.338769 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/61ffd12d-d64c-461b-a3c1-271d523a8de6-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"61ffd12d-d64c-461b-a3c1-271d523a8de6\") " pod="openstack/watcher-decision-engine-0" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.338866 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c99dffbb-825d-4a73-b225-a4660ef65ff9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c99dffbb-825d-4a73-b225-a4660ef65ff9\") " pod="openstack/ceilometer-0" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.338904 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61ffd12d-d64c-461b-a3c1-271d523a8de6-config-data\") pod \"watcher-decision-engine-0\" (UID: \"61ffd12d-d64c-461b-a3c1-271d523a8de6\") " pod="openstack/watcher-decision-engine-0" Dec 09 09:08:44 crc kubenswrapper[4786]: 
I1209 09:08:44.338941 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c99dffbb-825d-4a73-b225-a4660ef65ff9-run-httpd\") pod \"ceilometer-0\" (UID: \"c99dffbb-825d-4a73-b225-a4660ef65ff9\") " pod="openstack/ceilometer-0" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.339899 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c99dffbb-825d-4a73-b225-a4660ef65ff9-run-httpd\") pod \"ceilometer-0\" (UID: \"c99dffbb-825d-4a73-b225-a4660ef65ff9\") " pod="openstack/ceilometer-0" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.340172 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c99dffbb-825d-4a73-b225-a4660ef65ff9-log-httpd\") pod \"ceilometer-0\" (UID: \"c99dffbb-825d-4a73-b225-a4660ef65ff9\") " pod="openstack/ceilometer-0" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.340204 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c99dffbb-825d-4a73-b225-a4660ef65ff9-config-data\") pod \"ceilometer-0\" (UID: \"c99dffbb-825d-4a73-b225-a4660ef65ff9\") " pod="openstack/ceilometer-0" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.340242 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61ffd12d-d64c-461b-a3c1-271d523a8de6-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"61ffd12d-d64c-461b-a3c1-271d523a8de6\") " pod="openstack/watcher-decision-engine-0" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.340264 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czvpr\" (UniqueName: \"kubernetes.io/projected/61ffd12d-d64c-461b-a3c1-271d523a8de6-kube-api-access-czvpr\") 
pod \"watcher-decision-engine-0\" (UID: \"61ffd12d-d64c-461b-a3c1-271d523a8de6\") " pod="openstack/watcher-decision-engine-0" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.340287 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61ffd12d-d64c-461b-a3c1-271d523a8de6-logs\") pod \"watcher-decision-engine-0\" (UID: \"61ffd12d-d64c-461b-a3c1-271d523a8de6\") " pod="openstack/watcher-decision-engine-0" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.340640 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/61ffd12d-d64c-461b-a3c1-271d523a8de6-logs\") pod \"watcher-decision-engine-0\" (UID: \"61ffd12d-d64c-461b-a3c1-271d523a8de6\") " pod="openstack/watcher-decision-engine-0" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.341186 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c99dffbb-825d-4a73-b225-a4660ef65ff9-log-httpd\") pod \"ceilometer-0\" (UID: \"c99dffbb-825d-4a73-b225-a4660ef65ff9\") " pod="openstack/ceilometer-0" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.345698 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c99dffbb-825d-4a73-b225-a4660ef65ff9-scripts\") pod \"ceilometer-0\" (UID: \"c99dffbb-825d-4a73-b225-a4660ef65ff9\") " pod="openstack/ceilometer-0" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.345920 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c99dffbb-825d-4a73-b225-a4660ef65ff9-config-data\") pod \"ceilometer-0\" (UID: \"c99dffbb-825d-4a73-b225-a4660ef65ff9\") " pod="openstack/ceilometer-0" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.346350 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c99dffbb-825d-4a73-b225-a4660ef65ff9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c99dffbb-825d-4a73-b225-a4660ef65ff9\") " pod="openstack/ceilometer-0" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.346537 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61ffd12d-d64c-461b-a3c1-271d523a8de6-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"61ffd12d-d64c-461b-a3c1-271d523a8de6\") " pod="openstack/watcher-decision-engine-0" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.346874 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/61ffd12d-d64c-461b-a3c1-271d523a8de6-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"61ffd12d-d64c-461b-a3c1-271d523a8de6\") " pod="openstack/watcher-decision-engine-0" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.351037 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c99dffbb-825d-4a73-b225-a4660ef65ff9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c99dffbb-825d-4a73-b225-a4660ef65ff9\") " pod="openstack/ceilometer-0" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.352591 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61ffd12d-d64c-461b-a3c1-271d523a8de6-config-data\") pod \"watcher-decision-engine-0\" (UID: \"61ffd12d-d64c-461b-a3c1-271d523a8de6\") " pod="openstack/watcher-decision-engine-0" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.408147 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7cqr\" (UniqueName: \"kubernetes.io/projected/c99dffbb-825d-4a73-b225-a4660ef65ff9-kube-api-access-l7cqr\") pod \"ceilometer-0\" (UID: 
\"c99dffbb-825d-4a73-b225-a4660ef65ff9\") " pod="openstack/ceilometer-0" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.412355 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czvpr\" (UniqueName: \"kubernetes.io/projected/61ffd12d-d64c-461b-a3c1-271d523a8de6-kube-api-access-czvpr\") pod \"watcher-decision-engine-0\" (UID: \"61ffd12d-d64c-461b-a3c1-271d523a8de6\") " pod="openstack/watcher-decision-engine-0" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.461158 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.473384 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Dec 09 09:08:44 crc kubenswrapper[4786]: I1209 09:08:44.995547 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 09:08:45 crc kubenswrapper[4786]: I1209 09:08:45.128831 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Dec 09 09:08:45 crc kubenswrapper[4786]: I1209 09:08:45.226375 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d930edc-ed97-418e-a47a-60f38b734a50" path="/var/lib/kubelet/pods/8d930edc-ed97-418e-a47a-60f38b734a50/volumes" Dec 09 09:08:45 crc kubenswrapper[4786]: I1209 09:08:45.232912 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8cb19cf-7a6e-4af6-9066-296ed58a7e1c" path="/var/lib/kubelet/pods/e8cb19cf-7a6e-4af6-9066-296ed58a7e1c/volumes" Dec 09 09:08:45 crc kubenswrapper[4786]: I1209 09:08:45.291312 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-q4p7n" Dec 09 09:08:45 crc kubenswrapper[4786]: I1209 09:08:45.363212 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5beb8b6-6d1a-45cd-94cf-81754c4db040-config-data\") pod \"f5beb8b6-6d1a-45cd-94cf-81754c4db040\" (UID: \"f5beb8b6-6d1a-45cd-94cf-81754c4db040\") " Dec 09 09:08:45 crc kubenswrapper[4786]: I1209 09:08:45.363407 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zncdk\" (UniqueName: \"kubernetes.io/projected/f5beb8b6-6d1a-45cd-94cf-81754c4db040-kube-api-access-zncdk\") pod \"f5beb8b6-6d1a-45cd-94cf-81754c4db040\" (UID: \"f5beb8b6-6d1a-45cd-94cf-81754c4db040\") " Dec 09 09:08:45 crc kubenswrapper[4786]: I1209 09:08:45.363559 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5beb8b6-6d1a-45cd-94cf-81754c4db040-scripts\") pod \"f5beb8b6-6d1a-45cd-94cf-81754c4db040\" (UID: \"f5beb8b6-6d1a-45cd-94cf-81754c4db040\") " Dec 09 09:08:45 crc kubenswrapper[4786]: I1209 09:08:45.363654 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5beb8b6-6d1a-45cd-94cf-81754c4db040-combined-ca-bundle\") pod \"f5beb8b6-6d1a-45cd-94cf-81754c4db040\" (UID: \"f5beb8b6-6d1a-45cd-94cf-81754c4db040\") " Dec 09 09:08:45 crc kubenswrapper[4786]: I1209 09:08:45.370697 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5beb8b6-6d1a-45cd-94cf-81754c4db040-kube-api-access-zncdk" (OuterVolumeSpecName: "kube-api-access-zncdk") pod "f5beb8b6-6d1a-45cd-94cf-81754c4db040" (UID: "f5beb8b6-6d1a-45cd-94cf-81754c4db040"). InnerVolumeSpecName "kube-api-access-zncdk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:08:45 crc kubenswrapper[4786]: I1209 09:08:45.370939 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5beb8b6-6d1a-45cd-94cf-81754c4db040-scripts" (OuterVolumeSpecName: "scripts") pod "f5beb8b6-6d1a-45cd-94cf-81754c4db040" (UID: "f5beb8b6-6d1a-45cd-94cf-81754c4db040"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:08:45 crc kubenswrapper[4786]: I1209 09:08:45.422943 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5beb8b6-6d1a-45cd-94cf-81754c4db040-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f5beb8b6-6d1a-45cd-94cf-81754c4db040" (UID: "f5beb8b6-6d1a-45cd-94cf-81754c4db040"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:08:45 crc kubenswrapper[4786]: I1209 09:08:45.423124 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5beb8b6-6d1a-45cd-94cf-81754c4db040-config-data" (OuterVolumeSpecName: "config-data") pod "f5beb8b6-6d1a-45cd-94cf-81754c4db040" (UID: "f5beb8b6-6d1a-45cd-94cf-81754c4db040"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:08:45 crc kubenswrapper[4786]: I1209 09:08:45.466837 4786 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5beb8b6-6d1a-45cd-94cf-81754c4db040-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 09:08:45 crc kubenswrapper[4786]: I1209 09:08:45.466895 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5beb8b6-6d1a-45cd-94cf-81754c4db040-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 09:08:45 crc kubenswrapper[4786]: I1209 09:08:45.466911 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zncdk\" (UniqueName: \"kubernetes.io/projected/f5beb8b6-6d1a-45cd-94cf-81754c4db040-kube-api-access-zncdk\") on node \"crc\" DevicePath \"\"" Dec 09 09:08:45 crc kubenswrapper[4786]: I1209 09:08:45.466968 4786 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f5beb8b6-6d1a-45cd-94cf-81754c4db040-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 09:08:46 crc kubenswrapper[4786]: I1209 09:08:46.011748 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-q4p7n" event={"ID":"f5beb8b6-6d1a-45cd-94cf-81754c4db040","Type":"ContainerDied","Data":"e02eb34bcd202092eecc058d5da78df27d9e2a6519dfbf91e0b0ebe37c2c8e5d"} Dec 09 09:08:46 crc kubenswrapper[4786]: I1209 09:08:46.014095 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e02eb34bcd202092eecc058d5da78df27d9e2a6519dfbf91e0b0ebe37c2c8e5d" Dec 09 09:08:46 crc kubenswrapper[4786]: I1209 09:08:46.013028 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-q4p7n" Dec 09 09:08:46 crc kubenswrapper[4786]: I1209 09:08:46.018908 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c99dffbb-825d-4a73-b225-a4660ef65ff9","Type":"ContainerStarted","Data":"983ed1d12a1eb1b6afeab3acb16d7cae29778bdc595b3c7fef0c56c384cbce53"} Dec 09 09:08:46 crc kubenswrapper[4786]: I1209 09:08:46.018978 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c99dffbb-825d-4a73-b225-a4660ef65ff9","Type":"ContainerStarted","Data":"fbb173b6cc82623f6bd3e8dd8bda07bed0293f209be3fddd69c5573110726be2"} Dec 09 09:08:46 crc kubenswrapper[4786]: I1209 09:08:46.018999 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c99dffbb-825d-4a73-b225-a4660ef65ff9","Type":"ContainerStarted","Data":"54a09141972cafbe43e89f5b685845c4e33fbc44f6f45d63196a6f84303fbd1b"} Dec 09 09:08:46 crc kubenswrapper[4786]: I1209 09:08:46.020696 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"61ffd12d-d64c-461b-a3c1-271d523a8de6","Type":"ContainerStarted","Data":"5f124cb2a474280e0e6edf6c1d5d6c0e212632d45ec7ffb11b83aea67304f997"} Dec 09 09:08:46 crc kubenswrapper[4786]: I1209 09:08:46.020756 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"61ffd12d-d64c-461b-a3c1-271d523a8de6","Type":"ContainerStarted","Data":"919f94c7d49c0e98429751315029c5bc449cc2d3cd5ea44548dcac0bc63cc7da"} Dec 09 09:08:46 crc kubenswrapper[4786]: I1209 09:08:46.058367 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=2.058344183 podStartE2EDuration="2.058344183s" podCreationTimestamp="2025-12-09 09:08:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-09 09:08:46.046201542 +0000 UTC m=+1491.929822788" watchObservedRunningTime="2025-12-09 09:08:46.058344183 +0000 UTC m=+1491.941965429" Dec 09 09:08:46 crc kubenswrapper[4786]: I1209 09:08:46.151825 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 09 09:08:46 crc kubenswrapper[4786]: E1209 09:08:46.152554 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f5beb8b6-6d1a-45cd-94cf-81754c4db040" containerName="nova-cell0-conductor-db-sync" Dec 09 09:08:46 crc kubenswrapper[4786]: I1209 09:08:46.152642 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5beb8b6-6d1a-45cd-94cf-81754c4db040" containerName="nova-cell0-conductor-db-sync" Dec 09 09:08:46 crc kubenswrapper[4786]: I1209 09:08:46.152958 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="f5beb8b6-6d1a-45cd-94cf-81754c4db040" containerName="nova-cell0-conductor-db-sync" Dec 09 09:08:46 crc kubenswrapper[4786]: I1209 09:08:46.153725 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 09 09:08:46 crc kubenswrapper[4786]: I1209 09:08:46.158146 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-dcl2b" Dec 09 09:08:46 crc kubenswrapper[4786]: I1209 09:08:46.158569 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 09 09:08:46 crc kubenswrapper[4786]: I1209 09:08:46.183136 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l466k\" (UniqueName: \"kubernetes.io/projected/93a4b0d1-a277-448d-bab3-03c7131f23bf-kube-api-access-l466k\") pod \"nova-cell0-conductor-0\" (UID: \"93a4b0d1-a277-448d-bab3-03c7131f23bf\") " pod="openstack/nova-cell0-conductor-0" Dec 09 09:08:46 crc kubenswrapper[4786]: I1209 09:08:46.183982 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93a4b0d1-a277-448d-bab3-03c7131f23bf-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"93a4b0d1-a277-448d-bab3-03c7131f23bf\") " pod="openstack/nova-cell0-conductor-0" Dec 09 09:08:46 crc kubenswrapper[4786]: I1209 09:08:46.184153 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93a4b0d1-a277-448d-bab3-03c7131f23bf-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"93a4b0d1-a277-448d-bab3-03c7131f23bf\") " pod="openstack/nova-cell0-conductor-0" Dec 09 09:08:46 crc kubenswrapper[4786]: I1209 09:08:46.189755 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 09 09:08:46 crc kubenswrapper[4786]: I1209 09:08:46.286928 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/93a4b0d1-a277-448d-bab3-03c7131f23bf-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"93a4b0d1-a277-448d-bab3-03c7131f23bf\") " pod="openstack/nova-cell0-conductor-0" Dec 09 09:08:46 crc kubenswrapper[4786]: I1209 09:08:46.287003 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93a4b0d1-a277-448d-bab3-03c7131f23bf-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"93a4b0d1-a277-448d-bab3-03c7131f23bf\") " pod="openstack/nova-cell0-conductor-0" Dec 09 09:08:46 crc kubenswrapper[4786]: I1209 09:08:46.287205 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l466k\" (UniqueName: \"kubernetes.io/projected/93a4b0d1-a277-448d-bab3-03c7131f23bf-kube-api-access-l466k\") pod \"nova-cell0-conductor-0\" (UID: \"93a4b0d1-a277-448d-bab3-03c7131f23bf\") " pod="openstack/nova-cell0-conductor-0" Dec 09 09:08:46 crc kubenswrapper[4786]: I1209 09:08:46.292688 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93a4b0d1-a277-448d-bab3-03c7131f23bf-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"93a4b0d1-a277-448d-bab3-03c7131f23bf\") " pod="openstack/nova-cell0-conductor-0" Dec 09 09:08:46 crc kubenswrapper[4786]: I1209 09:08:46.294825 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93a4b0d1-a277-448d-bab3-03c7131f23bf-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"93a4b0d1-a277-448d-bab3-03c7131f23bf\") " pod="openstack/nova-cell0-conductor-0" Dec 09 09:08:46 crc kubenswrapper[4786]: I1209 09:08:46.309261 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l466k\" (UniqueName: \"kubernetes.io/projected/93a4b0d1-a277-448d-bab3-03c7131f23bf-kube-api-access-l466k\") pod \"nova-cell0-conductor-0\" (UID: 
\"93a4b0d1-a277-448d-bab3-03c7131f23bf\") " pod="openstack/nova-cell0-conductor-0" Dec 09 09:08:46 crc kubenswrapper[4786]: I1209 09:08:46.533609 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 09 09:08:47 crc kubenswrapper[4786]: I1209 09:08:47.028592 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 09 09:08:47 crc kubenswrapper[4786]: I1209 09:08:47.037416 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"93a4b0d1-a277-448d-bab3-03c7131f23bf","Type":"ContainerStarted","Data":"c1d3e06caf3a3f39f699780ce2995a863101a8295cbab85bef0bf68c3090fa45"} Dec 09 09:08:47 crc kubenswrapper[4786]: I1209 09:08:47.044737 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c99dffbb-825d-4a73-b225-a4660ef65ff9","Type":"ContainerStarted","Data":"c40b3fbbcaa3b18b2a95536a8193356ac2279c03e9828d353c148308ba74184f"} Dec 09 09:08:48 crc kubenswrapper[4786]: I1209 09:08:48.059965 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c99dffbb-825d-4a73-b225-a4660ef65ff9","Type":"ContainerStarted","Data":"b654d1ea251f96df7b7de68733505bea974447584231a6cec29933287c83d896"} Dec 09 09:08:48 crc kubenswrapper[4786]: I1209 09:08:48.062353 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 09 09:08:48 crc kubenswrapper[4786]: I1209 09:08:48.064357 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"93a4b0d1-a277-448d-bab3-03c7131f23bf","Type":"ContainerStarted","Data":"13511e1ae8c833779fc22300a9e01690f1383952ed3423587b51d5ccf3e40b6a"} Dec 09 09:08:48 crc kubenswrapper[4786]: I1209 09:08:48.065049 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Dec 09 09:08:48 crc 
kubenswrapper[4786]: I1209 09:08:48.091819 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.741242373 podStartE2EDuration="4.091786858s" podCreationTimestamp="2025-12-09 09:08:44 +0000 UTC" firstStartedPulling="2025-12-09 09:08:45.00356315 +0000 UTC m=+1490.887184376" lastFinishedPulling="2025-12-09 09:08:47.354107635 +0000 UTC m=+1493.237728861" observedRunningTime="2025-12-09 09:08:48.082071146 +0000 UTC m=+1493.965692382" watchObservedRunningTime="2025-12-09 09:08:48.091786858 +0000 UTC m=+1493.975408084" Dec 09 09:08:48 crc kubenswrapper[4786]: I1209 09:08:48.111875 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.111842926 podStartE2EDuration="2.111842926s" podCreationTimestamp="2025-12-09 09:08:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:08:48.10316167 +0000 UTC m=+1493.986782926" watchObservedRunningTime="2025-12-09 09:08:48.111842926 +0000 UTC m=+1493.995464152" Dec 09 09:08:54 crc kubenswrapper[4786]: I1209 09:08:54.474094 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Dec 09 09:08:54 crc kubenswrapper[4786]: I1209 09:08:54.521358 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Dec 09 09:08:55 crc kubenswrapper[4786]: I1209 09:08:55.154418 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Dec 09 09:08:55 crc kubenswrapper[4786]: I1209 09:08:55.223786 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Dec 09 09:08:56 crc kubenswrapper[4786]: I1209 09:08:56.580994 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/nova-cell0-conductor-0" Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.172034 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-jbbsf"] Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.173966 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-jbbsf" Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.176564 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.180126 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.208675 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-jbbsf"] Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.365171 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb69b433-50fe-4d72-8cba-96a73e6cc10d-config-data\") pod \"nova-cell0-cell-mapping-jbbsf\" (UID: \"eb69b433-50fe-4d72-8cba-96a73e6cc10d\") " pod="openstack/nova-cell0-cell-mapping-jbbsf" Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.365270 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb69b433-50fe-4d72-8cba-96a73e6cc10d-scripts\") pod \"nova-cell0-cell-mapping-jbbsf\" (UID: \"eb69b433-50fe-4d72-8cba-96a73e6cc10d\") " pod="openstack/nova-cell0-cell-mapping-jbbsf" Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.365455 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb69b433-50fe-4d72-8cba-96a73e6cc10d-combined-ca-bundle\") pod 
\"nova-cell0-cell-mapping-jbbsf\" (UID: \"eb69b433-50fe-4d72-8cba-96a73e6cc10d\") " pod="openstack/nova-cell0-cell-mapping-jbbsf" Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.365708 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psh7d\" (UniqueName: \"kubernetes.io/projected/eb69b433-50fe-4d72-8cba-96a73e6cc10d-kube-api-access-psh7d\") pod \"nova-cell0-cell-mapping-jbbsf\" (UID: \"eb69b433-50fe-4d72-8cba-96a73e6cc10d\") " pod="openstack/nova-cell0-cell-mapping-jbbsf" Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.400855 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.402722 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.417702 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.425465 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.471247 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb69b433-50fe-4d72-8cba-96a73e6cc10d-scripts\") pod \"nova-cell0-cell-mapping-jbbsf\" (UID: \"eb69b433-50fe-4d72-8cba-96a73e6cc10d\") " pod="openstack/nova-cell0-cell-mapping-jbbsf" Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.471371 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb69b433-50fe-4d72-8cba-96a73e6cc10d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-jbbsf\" (UID: \"eb69b433-50fe-4d72-8cba-96a73e6cc10d\") " pod="openstack/nova-cell0-cell-mapping-jbbsf" Dec 09 09:08:57 crc 
kubenswrapper[4786]: I1209 09:08:57.471413 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee683733-fa41-4b9a-a2f8-0a96d13d489a-config-data\") pod \"nova-scheduler-0\" (UID: \"ee683733-fa41-4b9a-a2f8-0a96d13d489a\") " pod="openstack/nova-scheduler-0" Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.471494 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psh7d\" (UniqueName: \"kubernetes.io/projected/eb69b433-50fe-4d72-8cba-96a73e6cc10d-kube-api-access-psh7d\") pod \"nova-cell0-cell-mapping-jbbsf\" (UID: \"eb69b433-50fe-4d72-8cba-96a73e6cc10d\") " pod="openstack/nova-cell0-cell-mapping-jbbsf" Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.471620 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb69b433-50fe-4d72-8cba-96a73e6cc10d-config-data\") pod \"nova-cell0-cell-mapping-jbbsf\" (UID: \"eb69b433-50fe-4d72-8cba-96a73e6cc10d\") " pod="openstack/nova-cell0-cell-mapping-jbbsf" Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.471643 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx58z\" (UniqueName: \"kubernetes.io/projected/ee683733-fa41-4b9a-a2f8-0a96d13d489a-kube-api-access-wx58z\") pod \"nova-scheduler-0\" (UID: \"ee683733-fa41-4b9a-a2f8-0a96d13d489a\") " pod="openstack/nova-scheduler-0" Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.471709 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee683733-fa41-4b9a-a2f8-0a96d13d489a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ee683733-fa41-4b9a-a2f8-0a96d13d489a\") " pod="openstack/nova-scheduler-0" Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.503336 4786 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb69b433-50fe-4d72-8cba-96a73e6cc10d-config-data\") pod \"nova-cell0-cell-mapping-jbbsf\" (UID: \"eb69b433-50fe-4d72-8cba-96a73e6cc10d\") " pod="openstack/nova-cell0-cell-mapping-jbbsf" Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.507505 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb69b433-50fe-4d72-8cba-96a73e6cc10d-scripts\") pod \"nova-cell0-cell-mapping-jbbsf\" (UID: \"eb69b433-50fe-4d72-8cba-96a73e6cc10d\") " pod="openstack/nova-cell0-cell-mapping-jbbsf" Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.508116 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb69b433-50fe-4d72-8cba-96a73e6cc10d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-jbbsf\" (UID: \"eb69b433-50fe-4d72-8cba-96a73e6cc10d\") " pod="openstack/nova-cell0-cell-mapping-jbbsf" Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.515149 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psh7d\" (UniqueName: \"kubernetes.io/projected/eb69b433-50fe-4d72-8cba-96a73e6cc10d-kube-api-access-psh7d\") pod \"nova-cell0-cell-mapping-jbbsf\" (UID: \"eb69b433-50fe-4d72-8cba-96a73e6cc10d\") " pod="openstack/nova-cell0-cell-mapping-jbbsf" Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.539268 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.541264 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.550989 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.571501 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.573096 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee683733-fa41-4b9a-a2f8-0a96d13d489a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ee683733-fa41-4b9a-a2f8-0a96d13d489a\") " pod="openstack/nova-scheduler-0" Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.573224 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee683733-fa41-4b9a-a2f8-0a96d13d489a-config-data\") pod \"nova-scheduler-0\" (UID: \"ee683733-fa41-4b9a-a2f8-0a96d13d489a\") " pod="openstack/nova-scheduler-0" Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.573361 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wx58z\" (UniqueName: \"kubernetes.io/projected/ee683733-fa41-4b9a-a2f8-0a96d13d489a-kube-api-access-wx58z\") pod \"nova-scheduler-0\" (UID: \"ee683733-fa41-4b9a-a2f8-0a96d13d489a\") " pod="openstack/nova-scheduler-0" Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.587741 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee683733-fa41-4b9a-a2f8-0a96d13d489a-config-data\") pod \"nova-scheduler-0\" (UID: \"ee683733-fa41-4b9a-a2f8-0a96d13d489a\") " pod="openstack/nova-scheduler-0" Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.593390 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee683733-fa41-4b9a-a2f8-0a96d13d489a-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ee683733-fa41-4b9a-a2f8-0a96d13d489a\") " pod="openstack/nova-scheduler-0" Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.593586 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.596197 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.601875 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.617507 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.635971 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lsvsn"] Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.657933 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wx58z\" (UniqueName: \"kubernetes.io/projected/ee683733-fa41-4b9a-a2f8-0a96d13d489a-kube-api-access-wx58z\") pod \"nova-scheduler-0\" (UID: \"ee683733-fa41-4b9a-a2f8-0a96d13d489a\") " pod="openstack/nova-scheduler-0" Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.659527 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lsvsn" Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.737254 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lsvsn"] Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.742143 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.755380 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55dc8742-0786-4103-9136-b8406e4fd914-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"55dc8742-0786-4103-9136-b8406e4fd914\") " pod="openstack/nova-api-0" Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.755522 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4xrv\" (UniqueName: \"kubernetes.io/projected/55dc8742-0786-4103-9136-b8406e4fd914-kube-api-access-r4xrv\") pod \"nova-api-0\" (UID: \"55dc8742-0786-4103-9136-b8406e4fd914\") " pod="openstack/nova-api-0" Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.755628 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbcc743e-9b17-423c-988b-da68abf2f608-utilities\") pod \"community-operators-lsvsn\" (UID: \"bbcc743e-9b17-423c-988b-da68abf2f608\") " pod="openshift-marketplace/community-operators-lsvsn" Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.755669 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55dc8742-0786-4103-9136-b8406e4fd914-logs\") pod \"nova-api-0\" (UID: \"55dc8742-0786-4103-9136-b8406e4fd914\") " pod="openstack/nova-api-0" Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.755722 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55dc8742-0786-4103-9136-b8406e4fd914-config-data\") pod \"nova-api-0\" (UID: \"55dc8742-0786-4103-9136-b8406e4fd914\") " pod="openstack/nova-api-0" Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 
09:08:57.755747 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v77cn\" (UniqueName: \"kubernetes.io/projected/bbcc743e-9b17-423c-988b-da68abf2f608-kube-api-access-v77cn\") pod \"community-operators-lsvsn\" (UID: \"bbcc743e-9b17-423c-988b-da68abf2f608\") " pod="openshift-marketplace/community-operators-lsvsn" Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.755872 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a05d765-9884-42f9-a0b1-2f475a88fd46-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3a05d765-9884-42f9-a0b1-2f475a88fd46\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.755933 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbcc743e-9b17-423c-988b-da68abf2f608-catalog-content\") pod \"community-operators-lsvsn\" (UID: \"bbcc743e-9b17-423c-988b-da68abf2f608\") " pod="openshift-marketplace/community-operators-lsvsn" Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.755972 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a05d765-9884-42f9-a0b1-2f475a88fd46-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3a05d765-9884-42f9-a0b1-2f475a88fd46\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.756016 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4wzq\" (UniqueName: \"kubernetes.io/projected/3a05d765-9884-42f9-a0b1-2f475a88fd46-kube-api-access-d4wzq\") pod \"nova-cell1-novncproxy-0\" (UID: \"3a05d765-9884-42f9-a0b1-2f475a88fd46\") " 
pod="openstack/nova-cell1-novncproxy-0" Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.798411 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-jbbsf" Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.875974 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbcc743e-9b17-423c-988b-da68abf2f608-utilities\") pod \"community-operators-lsvsn\" (UID: \"bbcc743e-9b17-423c-988b-da68abf2f608\") " pod="openshift-marketplace/community-operators-lsvsn" Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.876019 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55dc8742-0786-4103-9136-b8406e4fd914-logs\") pod \"nova-api-0\" (UID: \"55dc8742-0786-4103-9136-b8406e4fd914\") " pod="openstack/nova-api-0" Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.876058 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v77cn\" (UniqueName: \"kubernetes.io/projected/bbcc743e-9b17-423c-988b-da68abf2f608-kube-api-access-v77cn\") pod \"community-operators-lsvsn\" (UID: \"bbcc743e-9b17-423c-988b-da68abf2f608\") " pod="openshift-marketplace/community-operators-lsvsn" Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.876079 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55dc8742-0786-4103-9136-b8406e4fd914-config-data\") pod \"nova-api-0\" (UID: \"55dc8742-0786-4103-9136-b8406e4fd914\") " pod="openstack/nova-api-0" Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.876145 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a05d765-9884-42f9-a0b1-2f475a88fd46-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"3a05d765-9884-42f9-a0b1-2f475a88fd46\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.876172 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbcc743e-9b17-423c-988b-da68abf2f608-catalog-content\") pod \"community-operators-lsvsn\" (UID: \"bbcc743e-9b17-423c-988b-da68abf2f608\") " pod="openshift-marketplace/community-operators-lsvsn" Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.876190 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a05d765-9884-42f9-a0b1-2f475a88fd46-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3a05d765-9884-42f9-a0b1-2f475a88fd46\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.876209 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4wzq\" (UniqueName: \"kubernetes.io/projected/3a05d765-9884-42f9-a0b1-2f475a88fd46-kube-api-access-d4wzq\") pod \"nova-cell1-novncproxy-0\" (UID: \"3a05d765-9884-42f9-a0b1-2f475a88fd46\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.876303 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55dc8742-0786-4103-9136-b8406e4fd914-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"55dc8742-0786-4103-9136-b8406e4fd914\") " pod="openstack/nova-api-0" Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.876360 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4xrv\" (UniqueName: \"kubernetes.io/projected/55dc8742-0786-4103-9136-b8406e4fd914-kube-api-access-r4xrv\") pod \"nova-api-0\" (UID: \"55dc8742-0786-4103-9136-b8406e4fd914\") " pod="openstack/nova-api-0" Dec 09 09:08:57 
crc kubenswrapper[4786]: I1209 09:08:57.893531 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55dc8742-0786-4103-9136-b8406e4fd914-logs\") pod \"nova-api-0\" (UID: \"55dc8742-0786-4103-9136-b8406e4fd914\") " pod="openstack/nova-api-0" Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.902347 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55dc8742-0786-4103-9136-b8406e4fd914-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"55dc8742-0786-4103-9136-b8406e4fd914\") " pod="openstack/nova-api-0" Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.904295 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbcc743e-9b17-423c-988b-da68abf2f608-utilities\") pod \"community-operators-lsvsn\" (UID: \"bbcc743e-9b17-423c-988b-da68abf2f608\") " pod="openshift-marketplace/community-operators-lsvsn" Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.904903 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbcc743e-9b17-423c-988b-da68abf2f608-catalog-content\") pod \"community-operators-lsvsn\" (UID: \"bbcc743e-9b17-423c-988b-da68abf2f608\") " pod="openshift-marketplace/community-operators-lsvsn" Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.909928 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4xrv\" (UniqueName: \"kubernetes.io/projected/55dc8742-0786-4103-9136-b8406e4fd914-kube-api-access-r4xrv\") pod \"nova-api-0\" (UID: \"55dc8742-0786-4103-9136-b8406e4fd914\") " pod="openstack/nova-api-0" Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.919664 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.923313 4786 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.923314 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v77cn\" (UniqueName: \"kubernetes.io/projected/bbcc743e-9b17-423c-988b-da68abf2f608-kube-api-access-v77cn\") pod \"community-operators-lsvsn\" (UID: \"bbcc743e-9b17-423c-988b-da68abf2f608\") " pod="openshift-marketplace/community-operators-lsvsn"
Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.926328 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.927027 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a05d765-9884-42f9-a0b1-2f475a88fd46-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"3a05d765-9884-42f9-a0b1-2f475a88fd46\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.927551 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a05d765-9884-42f9-a0b1-2f475a88fd46-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"3a05d765-9884-42f9-a0b1-2f475a88fd46\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.928214 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4wzq\" (UniqueName: \"kubernetes.io/projected/3a05d765-9884-42f9-a0b1-2f475a88fd46-kube-api-access-d4wzq\") pod \"nova-cell1-novncproxy-0\" (UID: \"3a05d765-9884-42f9-a0b1-2f475a88fd46\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.931660 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55dc8742-0786-4103-9136-b8406e4fd914-config-data\") pod \"nova-api-0\" (UID: \"55dc8742-0786-4103-9136-b8406e4fd914\") " pod="openstack/nova-api-0"
Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.982772 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.984709 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f1a0aba-cb9d-4116-b10f-8799027f32cc-config-data\") pod \"nova-metadata-0\" (UID: \"2f1a0aba-cb9d-4116-b10f-8799027f32cc\") " pod="openstack/nova-metadata-0"
Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.984782 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f1a0aba-cb9d-4116-b10f-8799027f32cc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2f1a0aba-cb9d-4116-b10f-8799027f32cc\") " pod="openstack/nova-metadata-0"
Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.984956 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f1a0aba-cb9d-4116-b10f-8799027f32cc-logs\") pod \"nova-metadata-0\" (UID: \"2f1a0aba-cb9d-4116-b10f-8799027f32cc\") " pod="openstack/nova-metadata-0"
Dec 09 09:08:57 crc kubenswrapper[4786]: I1209 09:08:57.984983 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28c9c\" (UniqueName: \"kubernetes.io/projected/2f1a0aba-cb9d-4116-b10f-8799027f32cc-kube-api-access-28c9c\") pod \"nova-metadata-0\" (UID: \"2f1a0aba-cb9d-4116-b10f-8799027f32cc\") " pod="openstack/nova-metadata-0"
Dec 09 09:08:58 crc kubenswrapper[4786]: I1209 09:08:58.004230 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6845884987-mlcxv"]
Dec 09 09:08:58 crc kubenswrapper[4786]: I1209 09:08:58.026081 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6845884987-mlcxv"]
Dec 09 09:08:58 crc kubenswrapper[4786]: I1209 09:08:58.029958 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6845884987-mlcxv"
Dec 09 09:08:58 crc kubenswrapper[4786]: I1209 09:08:58.074164 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Dec 09 09:08:58 crc kubenswrapper[4786]: I1209 09:08:58.090711 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2-dns-svc\") pod \"dnsmasq-dns-6845884987-mlcxv\" (UID: \"9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2\") " pod="openstack/dnsmasq-dns-6845884987-mlcxv"
Dec 09 09:08:58 crc kubenswrapper[4786]: I1209 09:08:58.091164 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2-dns-swift-storage-0\") pod \"dnsmasq-dns-6845884987-mlcxv\" (UID: \"9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2\") " pod="openstack/dnsmasq-dns-6845884987-mlcxv"
Dec 09 09:08:58 crc kubenswrapper[4786]: I1209 09:08:58.091191 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2-ovsdbserver-sb\") pod \"dnsmasq-dns-6845884987-mlcxv\" (UID: \"9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2\") " pod="openstack/dnsmasq-dns-6845884987-mlcxv"
Dec 09 09:08:58 crc kubenswrapper[4786]: I1209 09:08:58.091221 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vglrg\" (UniqueName: \"kubernetes.io/projected/9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2-kube-api-access-vglrg\") pod \"dnsmasq-dns-6845884987-mlcxv\" (UID: \"9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2\") " pod="openstack/dnsmasq-dns-6845884987-mlcxv"
Dec 09 09:08:58 crc kubenswrapper[4786]: I1209 09:08:58.091290 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2-ovsdbserver-nb\") pod \"dnsmasq-dns-6845884987-mlcxv\" (UID: \"9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2\") " pod="openstack/dnsmasq-dns-6845884987-mlcxv"
Dec 09 09:08:58 crc kubenswrapper[4786]: I1209 09:08:58.091324 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2-config\") pod \"dnsmasq-dns-6845884987-mlcxv\" (UID: \"9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2\") " pod="openstack/dnsmasq-dns-6845884987-mlcxv"
Dec 09 09:08:58 crc kubenswrapper[4786]: I1209 09:08:58.091366 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f1a0aba-cb9d-4116-b10f-8799027f32cc-logs\") pod \"nova-metadata-0\" (UID: \"2f1a0aba-cb9d-4116-b10f-8799027f32cc\") " pod="openstack/nova-metadata-0"
Dec 09 09:08:58 crc kubenswrapper[4786]: I1209 09:08:58.091394 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28c9c\" (UniqueName: \"kubernetes.io/projected/2f1a0aba-cb9d-4116-b10f-8799027f32cc-kube-api-access-28c9c\") pod \"nova-metadata-0\" (UID: \"2f1a0aba-cb9d-4116-b10f-8799027f32cc\") " pod="openstack/nova-metadata-0"
Dec 09 09:08:58 crc kubenswrapper[4786]: I1209 09:08:58.091563 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f1a0aba-cb9d-4116-b10f-8799027f32cc-config-data\") pod \"nova-metadata-0\" (UID: \"2f1a0aba-cb9d-4116-b10f-8799027f32cc\") " pod="openstack/nova-metadata-0"
Dec 09 09:08:58 crc kubenswrapper[4786]: I1209 09:08:58.092248 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f1a0aba-cb9d-4116-b10f-8799027f32cc-logs\") pod \"nova-metadata-0\" (UID: \"2f1a0aba-cb9d-4116-b10f-8799027f32cc\") " pod="openstack/nova-metadata-0"
Dec 09 09:08:58 crc kubenswrapper[4786]: I1209 09:08:58.092849 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f1a0aba-cb9d-4116-b10f-8799027f32cc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2f1a0aba-cb9d-4116-b10f-8799027f32cc\") " pod="openstack/nova-metadata-0"
Dec 09 09:08:58 crc kubenswrapper[4786]: I1209 09:08:58.098817 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 09 09:08:58 crc kubenswrapper[4786]: I1209 09:08:58.122941 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f1a0aba-cb9d-4116-b10f-8799027f32cc-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"2f1a0aba-cb9d-4116-b10f-8799027f32cc\") " pod="openstack/nova-metadata-0"
Dec 09 09:08:58 crc kubenswrapper[4786]: I1209 09:08:58.129450 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f1a0aba-cb9d-4116-b10f-8799027f32cc-config-data\") pod \"nova-metadata-0\" (UID: \"2f1a0aba-cb9d-4116-b10f-8799027f32cc\") " pod="openstack/nova-metadata-0"
Dec 09 09:08:58 crc kubenswrapper[4786]: I1209 09:08:58.129989 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28c9c\" (UniqueName: \"kubernetes.io/projected/2f1a0aba-cb9d-4116-b10f-8799027f32cc-kube-api-access-28c9c\") pod \"nova-metadata-0\" (UID: \"2f1a0aba-cb9d-4116-b10f-8799027f32cc\") " pod="openstack/nova-metadata-0"
Dec 09 09:08:58 crc kubenswrapper[4786]: I1209 09:08:58.137052 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lsvsn"
Dec 09 09:08:58 crc kubenswrapper[4786]: I1209 09:08:58.200816 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2-dns-svc\") pod \"dnsmasq-dns-6845884987-mlcxv\" (UID: \"9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2\") " pod="openstack/dnsmasq-dns-6845884987-mlcxv"
Dec 09 09:08:58 crc kubenswrapper[4786]: I1209 09:08:58.200896 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2-dns-swift-storage-0\") pod \"dnsmasq-dns-6845884987-mlcxv\" (UID: \"9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2\") " pod="openstack/dnsmasq-dns-6845884987-mlcxv"
Dec 09 09:08:58 crc kubenswrapper[4786]: I1209 09:08:58.200922 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2-ovsdbserver-sb\") pod \"dnsmasq-dns-6845884987-mlcxv\" (UID: \"9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2\") " pod="openstack/dnsmasq-dns-6845884987-mlcxv"
Dec 09 09:08:58 crc kubenswrapper[4786]: I1209 09:08:58.200952 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vglrg\" (UniqueName: \"kubernetes.io/projected/9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2-kube-api-access-vglrg\") pod \"dnsmasq-dns-6845884987-mlcxv\" (UID: \"9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2\") " pod="openstack/dnsmasq-dns-6845884987-mlcxv"
Dec 09 09:08:58 crc kubenswrapper[4786]: I1209 09:08:58.201114 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2-ovsdbserver-nb\") pod \"dnsmasq-dns-6845884987-mlcxv\" (UID: \"9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2\") " pod="openstack/dnsmasq-dns-6845884987-mlcxv"
Dec 09 09:08:58 crc kubenswrapper[4786]: I1209 09:08:58.201189 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2-config\") pod \"dnsmasq-dns-6845884987-mlcxv\" (UID: \"9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2\") " pod="openstack/dnsmasq-dns-6845884987-mlcxv"
Dec 09 09:08:58 crc kubenswrapper[4786]: I1209 09:08:58.201354 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2-dns-svc\") pod \"dnsmasq-dns-6845884987-mlcxv\" (UID: \"9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2\") " pod="openstack/dnsmasq-dns-6845884987-mlcxv"
Dec 09 09:08:58 crc kubenswrapper[4786]: I1209 09:08:58.204202 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2-dns-swift-storage-0\") pod \"dnsmasq-dns-6845884987-mlcxv\" (UID: \"9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2\") " pod="openstack/dnsmasq-dns-6845884987-mlcxv"
Dec 09 09:08:58 crc kubenswrapper[4786]: I1209 09:08:58.205167 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2-config\") pod \"dnsmasq-dns-6845884987-mlcxv\" (UID: \"9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2\") " pod="openstack/dnsmasq-dns-6845884987-mlcxv"
Dec 09 09:08:58 crc kubenswrapper[4786]: I1209 09:08:58.206665 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2-ovsdbserver-sb\") pod \"dnsmasq-dns-6845884987-mlcxv\" (UID: \"9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2\") " pod="openstack/dnsmasq-dns-6845884987-mlcxv"
Dec 09 09:08:58 crc kubenswrapper[4786]: I1209 09:08:58.206775 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2-ovsdbserver-nb\") pod \"dnsmasq-dns-6845884987-mlcxv\" (UID: \"9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2\") " pod="openstack/dnsmasq-dns-6845884987-mlcxv"
Dec 09 09:08:58 crc kubenswrapper[4786]: I1209 09:08:58.219456 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vglrg\" (UniqueName: \"kubernetes.io/projected/9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2-kube-api-access-vglrg\") pod \"dnsmasq-dns-6845884987-mlcxv\" (UID: \"9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2\") " pod="openstack/dnsmasq-dns-6845884987-mlcxv"
Dec 09 09:08:58 crc kubenswrapper[4786]: I1209 09:08:58.278322 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 09 09:08:58 crc kubenswrapper[4786]: I1209 09:08:58.430788 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6845884987-mlcxv"
Dec 09 09:08:58 crc kubenswrapper[4786]: I1209 09:08:58.692714 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-jbbsf"]
Dec 09 09:08:58 crc kubenswrapper[4786]: I1209 09:08:58.797281 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Dec 09 09:08:59 crc kubenswrapper[4786]: I1209 09:08:59.269218 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-jbbsf" event={"ID":"eb69b433-50fe-4d72-8cba-96a73e6cc10d","Type":"ContainerStarted","Data":"7714154a3fb2a807049428956ce3fd87ab1589b13a9ca42d7b93c2253bddef68"}
Dec 09 09:08:59 crc kubenswrapper[4786]: I1209 09:08:59.272550 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ee683733-fa41-4b9a-a2f8-0a96d13d489a","Type":"ContainerStarted","Data":"733eb3aaed93c36bfaff9c9d55aaca08a97c8ac5de89d06004f6f5682f9f5670"}
Dec 09 09:08:59 crc kubenswrapper[4786]: I1209 09:08:59.369043 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Dec 09 09:08:59 crc kubenswrapper[4786]: I1209 09:08:59.391192 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 09 09:08:59 crc kubenswrapper[4786]: W1209 09:08:59.410755 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3a05d765_9884_42f9_a0b1_2f475a88fd46.slice/crio-d7aec3dc4c5599bd9441ca0eaa5eccd56a80d89ca1a8eeaa2a1cacca36c5511c WatchSource:0}: Error finding container d7aec3dc4c5599bd9441ca0eaa5eccd56a80d89ca1a8eeaa2a1cacca36c5511c: Status 404 returned error can't find the container with id d7aec3dc4c5599bd9441ca0eaa5eccd56a80d89ca1a8eeaa2a1cacca36c5511c
Dec 09 09:08:59 crc kubenswrapper[4786]: I1209 09:08:59.561360 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lsvsn"]
Dec 09 09:08:59 crc kubenswrapper[4786]: I1209 09:08:59.579540 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Dec 09 09:08:59 crc kubenswrapper[4786]: I1209 09:08:59.675974 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-glsgk"]
Dec 09 09:08:59 crc kubenswrapper[4786]: I1209 09:08:59.678400 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-glsgk"
Dec 09 09:08:59 crc kubenswrapper[4786]: I1209 09:08:59.688681 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Dec 09 09:08:59 crc kubenswrapper[4786]: I1209 09:08:59.689623 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts"
Dec 09 09:08:59 crc kubenswrapper[4786]: I1209 09:08:59.690573 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-glsgk"]
Dec 09 09:08:59 crc kubenswrapper[4786]: I1209 09:08:59.767929 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b2cc8bb-e766-4324-afcb-8ad0655bd96f-config-data\") pod \"nova-cell1-conductor-db-sync-glsgk\" (UID: \"8b2cc8bb-e766-4324-afcb-8ad0655bd96f\") " pod="openstack/nova-cell1-conductor-db-sync-glsgk"
Dec 09 09:08:59 crc kubenswrapper[4786]: I1209 09:08:59.768183 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-585r4\" (UniqueName: \"kubernetes.io/projected/8b2cc8bb-e766-4324-afcb-8ad0655bd96f-kube-api-access-585r4\") pod \"nova-cell1-conductor-db-sync-glsgk\" (UID: \"8b2cc8bb-e766-4324-afcb-8ad0655bd96f\") " pod="openstack/nova-cell1-conductor-db-sync-glsgk"
Dec 09 09:08:59 crc kubenswrapper[4786]: I1209 09:08:59.768325 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b2cc8bb-e766-4324-afcb-8ad0655bd96f-scripts\") pod \"nova-cell1-conductor-db-sync-glsgk\" (UID: \"8b2cc8bb-e766-4324-afcb-8ad0655bd96f\") " pod="openstack/nova-cell1-conductor-db-sync-glsgk"
Dec 09 09:08:59 crc kubenswrapper[4786]: I1209 09:08:59.768386 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b2cc8bb-e766-4324-afcb-8ad0655bd96f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-glsgk\" (UID: \"8b2cc8bb-e766-4324-afcb-8ad0655bd96f\") " pod="openstack/nova-cell1-conductor-db-sync-glsgk"
Dec 09 09:08:59 crc kubenswrapper[4786]: I1209 09:08:59.772603 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6845884987-mlcxv"]
Dec 09 09:08:59 crc kubenswrapper[4786]: I1209 09:08:59.870666 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b2cc8bb-e766-4324-afcb-8ad0655bd96f-config-data\") pod \"nova-cell1-conductor-db-sync-glsgk\" (UID: \"8b2cc8bb-e766-4324-afcb-8ad0655bd96f\") " pod="openstack/nova-cell1-conductor-db-sync-glsgk"
Dec 09 09:08:59 crc kubenswrapper[4786]: I1209 09:08:59.871273 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-585r4\" (UniqueName: \"kubernetes.io/projected/8b2cc8bb-e766-4324-afcb-8ad0655bd96f-kube-api-access-585r4\") pod \"nova-cell1-conductor-db-sync-glsgk\" (UID: \"8b2cc8bb-e766-4324-afcb-8ad0655bd96f\") " pod="openstack/nova-cell1-conductor-db-sync-glsgk"
Dec 09 09:08:59 crc kubenswrapper[4786]: I1209 09:08:59.871335 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b2cc8bb-e766-4324-afcb-8ad0655bd96f-scripts\") pod \"nova-cell1-conductor-db-sync-glsgk\" (UID: \"8b2cc8bb-e766-4324-afcb-8ad0655bd96f\") " pod="openstack/nova-cell1-conductor-db-sync-glsgk"
Dec 09 09:08:59 crc kubenswrapper[4786]: I1209 09:08:59.871361 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b2cc8bb-e766-4324-afcb-8ad0655bd96f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-glsgk\" (UID: \"8b2cc8bb-e766-4324-afcb-8ad0655bd96f\") " pod="openstack/nova-cell1-conductor-db-sync-glsgk"
Dec 09 09:08:59 crc kubenswrapper[4786]: I1209 09:08:59.885900 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b2cc8bb-e766-4324-afcb-8ad0655bd96f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-glsgk\" (UID: \"8b2cc8bb-e766-4324-afcb-8ad0655bd96f\") " pod="openstack/nova-cell1-conductor-db-sync-glsgk"
Dec 09 09:08:59 crc kubenswrapper[4786]: I1209 09:08:59.888088 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b2cc8bb-e766-4324-afcb-8ad0655bd96f-scripts\") pod \"nova-cell1-conductor-db-sync-glsgk\" (UID: \"8b2cc8bb-e766-4324-afcb-8ad0655bd96f\") " pod="openstack/nova-cell1-conductor-db-sync-glsgk"
Dec 09 09:08:59 crc kubenswrapper[4786]: I1209 09:08:59.898300 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-585r4\" (UniqueName: \"kubernetes.io/projected/8b2cc8bb-e766-4324-afcb-8ad0655bd96f-kube-api-access-585r4\") pod \"nova-cell1-conductor-db-sync-glsgk\" (UID: \"8b2cc8bb-e766-4324-afcb-8ad0655bd96f\") " pod="openstack/nova-cell1-conductor-db-sync-glsgk"
Dec 09 09:08:59 crc kubenswrapper[4786]: I1209 09:08:59.900740 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b2cc8bb-e766-4324-afcb-8ad0655bd96f-config-data\") pod \"nova-cell1-conductor-db-sync-glsgk\" (UID: \"8b2cc8bb-e766-4324-afcb-8ad0655bd96f\") " pod="openstack/nova-cell1-conductor-db-sync-glsgk"
Dec 09 09:09:00 crc kubenswrapper[4786]: I1209 09:09:00.042086 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-glsgk"
Dec 09 09:09:00 crc kubenswrapper[4786]: I1209 09:09:00.333225 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6845884987-mlcxv" event={"ID":"9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2","Type":"ContainerStarted","Data":"2bf114f8dc49580e118c7e370a5dbaeccf0d31900ad012029380e9a6a21431cf"}
Dec 09 09:09:00 crc kubenswrapper[4786]: I1209 09:09:00.353856 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2f1a0aba-cb9d-4116-b10f-8799027f32cc","Type":"ContainerStarted","Data":"ea09d082e7b2f4ff0cd70466928e7dde698ccc4787957d9e099df6d5df2ec049"}
Dec 09 09:09:00 crc kubenswrapper[4786]: I1209 09:09:00.363888 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"55dc8742-0786-4103-9136-b8406e4fd914","Type":"ContainerStarted","Data":"2d26c4b8cab31e4616c12308d5ffa732dd5c66db96b340097395f860761651d9"}
Dec 09 09:09:00 crc kubenswrapper[4786]: I1209 09:09:00.390396 4786 generic.go:334] "Generic (PLEG): container finished" podID="bbcc743e-9b17-423c-988b-da68abf2f608" containerID="4afac15a4afba1353177ee3f532caf85a0b740bf32c41d6b09917847ad79e3a7" exitCode=0
Dec 09 09:09:00 crc kubenswrapper[4786]: I1209 09:09:00.390516 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lsvsn" event={"ID":"bbcc743e-9b17-423c-988b-da68abf2f608","Type":"ContainerDied","Data":"4afac15a4afba1353177ee3f532caf85a0b740bf32c41d6b09917847ad79e3a7"}
Dec 09 09:09:00 crc kubenswrapper[4786]: I1209 09:09:00.390558 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lsvsn" event={"ID":"bbcc743e-9b17-423c-988b-da68abf2f608","Type":"ContainerStarted","Data":"d174b5db1927ec4e2a9ab90486131f375a2229e140a309d553ebaa9262ec76f1"}
Dec 09 09:09:00 crc kubenswrapper[4786]: I1209 09:09:00.417098 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-jbbsf" event={"ID":"eb69b433-50fe-4d72-8cba-96a73e6cc10d","Type":"ContainerStarted","Data":"253a504921a036b1977aafd6d2220bfa1d435624b1eb322b488499922e89a3d3"}
Dec 09 09:09:00 crc kubenswrapper[4786]: I1209 09:09:00.422021 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3a05d765-9884-42f9-a0b1-2f475a88fd46","Type":"ContainerStarted","Data":"d7aec3dc4c5599bd9441ca0eaa5eccd56a80d89ca1a8eeaa2a1cacca36c5511c"}
Dec 09 09:09:00 crc kubenswrapper[4786]: I1209 09:09:00.468021 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-jbbsf" podStartSLOduration=3.467997036 podStartE2EDuration="3.467997036s" podCreationTimestamp="2025-12-09 09:08:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:09:00.449387494 +0000 UTC m=+1506.333008720" watchObservedRunningTime="2025-12-09 09:09:00.467997036 +0000 UTC m=+1506.351618252"
Dec 09 09:09:00 crc kubenswrapper[4786]: I1209 09:09:00.729628 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-glsgk"]
Dec 09 09:09:00 crc kubenswrapper[4786]: W1209 09:09:00.757204 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b2cc8bb_e766_4324_afcb_8ad0655bd96f.slice/crio-5a49ca59cb8f4f755de44f9e5693bd65d3e2cc26ee2afb8127213f0440512f31 WatchSource:0}: Error finding container 5a49ca59cb8f4f755de44f9e5693bd65d3e2cc26ee2afb8127213f0440512f31: Status 404 returned error can't find the container with id 5a49ca59cb8f4f755de44f9e5693bd65d3e2cc26ee2afb8127213f0440512f31
Dec 09 09:09:01 crc kubenswrapper[4786]: I1209 09:09:01.443904 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-glsgk" event={"ID":"8b2cc8bb-e766-4324-afcb-8ad0655bd96f","Type":"ContainerStarted","Data":"5a49ca59cb8f4f755de44f9e5693bd65d3e2cc26ee2afb8127213f0440512f31"}
Dec 09 09:09:01 crc kubenswrapper[4786]: I1209 09:09:01.449434 4786 generic.go:334] "Generic (PLEG): container finished" podID="9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2" containerID="36405e28d9b62a117cc8500e243471c46a678993c121dfc18f83a260483cf7ab" exitCode=0
Dec 09 09:09:01 crc kubenswrapper[4786]: I1209 09:09:01.449659 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6845884987-mlcxv" event={"ID":"9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2","Type":"ContainerDied","Data":"36405e28d9b62a117cc8500e243471c46a678993c121dfc18f83a260483cf7ab"}
Dec 09 09:09:02 crc kubenswrapper[4786]: I1209 09:09:02.119862 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 09 09:09:02 crc kubenswrapper[4786]: I1209 09:09:02.136563 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Dec 09 09:09:03 crc kubenswrapper[4786]: I1209 09:09:03.509702 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-glsgk" event={"ID":"8b2cc8bb-e766-4324-afcb-8ad0655bd96f","Type":"ContainerStarted","Data":"2f48a1c3f00dd62cee43968e44376a46b5b8dd07d15d425d49393b298cff42ab"}
Dec 09 09:09:03 crc kubenswrapper[4786]: I1209 09:09:03.541564 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-glsgk" podStartSLOduration=4.541531998 podStartE2EDuration="4.541531998s" podCreationTimestamp="2025-12-09 09:08:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:09:03.532394942 +0000 UTC m=+1509.416016168" watchObservedRunningTime="2025-12-09 09:09:03.541531998 +0000 UTC m=+1509.425153244"
Dec 09 09:09:04 crc kubenswrapper[4786]: I1209 09:09:04.522484 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6845884987-mlcxv" event={"ID":"9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2","Type":"ContainerStarted","Data":"1769624d2080f734f667a82b17f5eac0b2a0d4914db55de2ac29b958ead564a6"}
Dec 09 09:09:04 crc kubenswrapper[4786]: I1209 09:09:04.523885 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6845884987-mlcxv"
Dec 09 09:09:04 crc kubenswrapper[4786]: I1209 09:09:04.525192 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2f1a0aba-cb9d-4116-b10f-8799027f32cc","Type":"ContainerStarted","Data":"bfb3345340985bad4cd21601504da4ecc36e4b42456e364b55a49e090974c054"}
Dec 09 09:09:04 crc kubenswrapper[4786]: I1209 09:09:04.525291 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2f1a0aba-cb9d-4116-b10f-8799027f32cc","Type":"ContainerStarted","Data":"083c099d68fba337a44dc846cdf6a85580c67ca3c80de122c691b280d43764d5"}
Dec 09 09:09:04 crc kubenswrapper[4786]: I1209 09:09:04.525500 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2f1a0aba-cb9d-4116-b10f-8799027f32cc" containerName="nova-metadata-log" containerID="cri-o://083c099d68fba337a44dc846cdf6a85580c67ca3c80de122c691b280d43764d5" gracePeriod=30
Dec 09 09:09:04 crc kubenswrapper[4786]: I1209 09:09:04.525872 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="2f1a0aba-cb9d-4116-b10f-8799027f32cc" containerName="nova-metadata-metadata" containerID="cri-o://bfb3345340985bad4cd21601504da4ecc36e4b42456e364b55a49e090974c054" gracePeriod=30
Dec 09 09:09:04 crc kubenswrapper[4786]: I1209 09:09:04.528749 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ee683733-fa41-4b9a-a2f8-0a96d13d489a","Type":"ContainerStarted","Data":"d507a7a638c40d693356afeb806ccce120ff2e55879f65315a33817ad9961038"}
Dec 09 09:09:04 crc kubenswrapper[4786]: I1209 09:09:04.534250 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"55dc8742-0786-4103-9136-b8406e4fd914","Type":"ContainerStarted","Data":"e767e272560c7d7c0fc9a31a3a7c53bdd06baafa8f71b8567b50ef96858e88be"}
Dec 09 09:09:04 crc kubenswrapper[4786]: I1209 09:09:04.534305 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"55dc8742-0786-4103-9136-b8406e4fd914","Type":"ContainerStarted","Data":"e1f58d67dcf1419749984992168e6da5f2260481c9fa9236e2fc0d82fdf2c9c7"}
Dec 09 09:09:04 crc kubenswrapper[4786]: I1209 09:09:04.544542 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lsvsn" event={"ID":"bbcc743e-9b17-423c-988b-da68abf2f608","Type":"ContainerStarted","Data":"7eaa00221cac2b412406f1f9241e83ba017b25b7d2011623795817db4389a895"}
Dec 09 09:09:04 crc kubenswrapper[4786]: I1209 09:09:04.553976 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6845884987-mlcxv" podStartSLOduration=7.553954508 podStartE2EDuration="7.553954508s" podCreationTimestamp="2025-12-09 09:08:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:09:04.549342394 +0000 UTC m=+1510.432963640" watchObservedRunningTime="2025-12-09 09:09:04.553954508 +0000 UTC m=+1510.437575744"
Dec 09 09:09:04 crc kubenswrapper[4786]: I1209 09:09:04.570199 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3a05d765-9884-42f9-a0b1-2f475a88fd46","Type":"ContainerStarted","Data":"263c082478e16cc2453028fa3b35d5ec281963c366f5a14a628c3c0a933f878c"}
Dec 09 09:09:04 crc kubenswrapper[4786]: I1209 09:09:04.570525 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="3a05d765-9884-42f9-a0b1-2f475a88fd46" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://263c082478e16cc2453028fa3b35d5ec281963c366f5a14a628c3c0a933f878c" gracePeriod=30
Dec 09 09:09:04 crc kubenswrapper[4786]: I1209 09:09:04.581632 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.7680751040000002 podStartE2EDuration="7.581608116s" podCreationTimestamp="2025-12-09 09:08:57 +0000 UTC" firstStartedPulling="2025-12-09 09:08:59.392761225 +0000 UTC m=+1505.276382441" lastFinishedPulling="2025-12-09 09:09:03.206294227 +0000 UTC m=+1509.089915453" observedRunningTime="2025-12-09 09:09:04.578746895 +0000 UTC m=+1510.462368121" watchObservedRunningTime="2025-12-09 09:09:04.581608116 +0000 UTC m=+1510.465229352"
Dec 09 09:09:04 crc kubenswrapper[4786]: I1209 09:09:04.632245 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.349978623 podStartE2EDuration="7.632219963s" podCreationTimestamp="2025-12-09 09:08:57 +0000 UTC" firstStartedPulling="2025-12-09 09:08:58.85335255 +0000 UTC m=+1504.736973776" lastFinishedPulling="2025-12-09 09:09:03.13559389 +0000 UTC m=+1509.019215116" observedRunningTime="2025-12-09 09:09:04.620497982 +0000 UTC m=+1510.504119218" watchObservedRunningTime="2025-12-09 09:09:04.632219963 +0000 UTC m=+1510.515841189"
Dec 09 09:09:04 crc kubenswrapper[4786]: I1209 09:09:04.715380 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.983168649 podStartE2EDuration="7.71535897s" podCreationTimestamp="2025-12-09 09:08:57 +0000 UTC" firstStartedPulling="2025-12-09 09:08:59.433768724 +0000 UTC m=+1505.317389950" lastFinishedPulling="2025-12-09 09:09:03.165959045 +0000 UTC m=+1509.049580271" observedRunningTime="2025-12-09 09:09:04.713207716 +0000 UTC m=+1510.596828942" watchObservedRunningTime="2025-12-09 09:09:04.71535897 +0000 UTC m=+1510.598980196"
Dec 09 09:09:04 crc kubenswrapper[4786]: I1209 09:09:04.728937 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.122698286 podStartE2EDuration="7.728907616s" podCreationTimestamp="2025-12-09 09:08:57 +0000 UTC" firstStartedPulling="2025-12-09 09:08:59.590352885 +0000 UTC m=+1505.473974111" lastFinishedPulling="2025-12-09 09:09:03.196562215 +0000 UTC m=+1509.080183441" observedRunningTime="2025-12-09 09:09:04.686938003 +0000 UTC m=+1510.570559229" watchObservedRunningTime="2025-12-09 09:09:04.728907616 +0000 UTC m=+1510.612528842"
Dec 09 09:09:05 crc kubenswrapper[4786]: I1209 09:09:05.586252 4786 generic.go:334] "Generic (PLEG): container finished" podID="2f1a0aba-cb9d-4116-b10f-8799027f32cc" containerID="083c099d68fba337a44dc846cdf6a85580c67ca3c80de122c691b280d43764d5" exitCode=143
Dec 09 09:09:05 crc kubenswrapper[4786]: I1209 09:09:05.586311 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2f1a0aba-cb9d-4116-b10f-8799027f32cc","Type":"ContainerDied","Data":"083c099d68fba337a44dc846cdf6a85580c67ca3c80de122c691b280d43764d5"}
Dec 09 09:09:06 crc kubenswrapper[4786]: I1209 09:09:06.599322 4786 generic.go:334] "Generic (PLEG): container finished" podID="bbcc743e-9b17-423c-988b-da68abf2f608" containerID="7eaa00221cac2b412406f1f9241e83ba017b25b7d2011623795817db4389a895" exitCode=0
Dec 09 09:09:06 crc kubenswrapper[4786]: I1209 09:09:06.599461 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lsvsn" event={"ID":"bbcc743e-9b17-423c-988b-da68abf2f608","Type":"ContainerDied","Data":"7eaa00221cac2b412406f1f9241e83ba017b25b7d2011623795817db4389a895"}
Dec 09 09:09:06 crc kubenswrapper[4786]: I1209 09:09:06.603030 4786 generic.go:334] "Generic (PLEG): container finished" podID="2f1a0aba-cb9d-4116-b10f-8799027f32cc" containerID="bfb3345340985bad4cd21601504da4ecc36e4b42456e364b55a49e090974c054" exitCode=0
Dec 09 09:09:06 crc kubenswrapper[4786]: I1209 09:09:06.603082 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2f1a0aba-cb9d-4116-b10f-8799027f32cc","Type":"ContainerDied","Data":"bfb3345340985bad4cd21601504da4ecc36e4b42456e364b55a49e090974c054"}
Dec 09 09:09:07 crc kubenswrapper[4786]: I1209 09:09:07.121961 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 09 09:09:07 crc kubenswrapper[4786]: I1209 09:09:07.191608 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28c9c\" (UniqueName: \"kubernetes.io/projected/2f1a0aba-cb9d-4116-b10f-8799027f32cc-kube-api-access-28c9c\") pod \"2f1a0aba-cb9d-4116-b10f-8799027f32cc\" (UID: \"2f1a0aba-cb9d-4116-b10f-8799027f32cc\") "
Dec 09 09:09:07 crc kubenswrapper[4786]: I1209 09:09:07.191981 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f1a0aba-cb9d-4116-b10f-8799027f32cc-config-data\") pod \"2f1a0aba-cb9d-4116-b10f-8799027f32cc\" (UID: \"2f1a0aba-cb9d-4116-b10f-8799027f32cc\") "
Dec 09 09:09:07 crc kubenswrapper[4786]: I1209 09:09:07.192186 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f1a0aba-cb9d-4116-b10f-8799027f32cc-combined-ca-bundle\") pod \"2f1a0aba-cb9d-4116-b10f-8799027f32cc\" (UID: \"2f1a0aba-cb9d-4116-b10f-8799027f32cc\") "
Dec 09 09:09:07 crc kubenswrapper[4786]: I1209 
09:09:07.192404 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f1a0aba-cb9d-4116-b10f-8799027f32cc-logs\") pod \"2f1a0aba-cb9d-4116-b10f-8799027f32cc\" (UID: \"2f1a0aba-cb9d-4116-b10f-8799027f32cc\") " Dec 09 09:09:07 crc kubenswrapper[4786]: I1209 09:09:07.193989 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f1a0aba-cb9d-4116-b10f-8799027f32cc-logs" (OuterVolumeSpecName: "logs") pod "2f1a0aba-cb9d-4116-b10f-8799027f32cc" (UID: "2f1a0aba-cb9d-4116-b10f-8799027f32cc"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:09:07 crc kubenswrapper[4786]: I1209 09:09:07.200402 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f1a0aba-cb9d-4116-b10f-8799027f32cc-kube-api-access-28c9c" (OuterVolumeSpecName: "kube-api-access-28c9c") pod "2f1a0aba-cb9d-4116-b10f-8799027f32cc" (UID: "2f1a0aba-cb9d-4116-b10f-8799027f32cc"). InnerVolumeSpecName "kube-api-access-28c9c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:09:07 crc kubenswrapper[4786]: I1209 09:09:07.227581 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f1a0aba-cb9d-4116-b10f-8799027f32cc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2f1a0aba-cb9d-4116-b10f-8799027f32cc" (UID: "2f1a0aba-cb9d-4116-b10f-8799027f32cc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:09:07 crc kubenswrapper[4786]: I1209 09:09:07.230124 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f1a0aba-cb9d-4116-b10f-8799027f32cc-config-data" (OuterVolumeSpecName: "config-data") pod "2f1a0aba-cb9d-4116-b10f-8799027f32cc" (UID: "2f1a0aba-cb9d-4116-b10f-8799027f32cc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:09:07 crc kubenswrapper[4786]: I1209 09:09:07.297192 4786 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2f1a0aba-cb9d-4116-b10f-8799027f32cc-logs\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:07 crc kubenswrapper[4786]: I1209 09:09:07.297528 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28c9c\" (UniqueName: \"kubernetes.io/projected/2f1a0aba-cb9d-4116-b10f-8799027f32cc-kube-api-access-28c9c\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:07 crc kubenswrapper[4786]: I1209 09:09:07.297547 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2f1a0aba-cb9d-4116-b10f-8799027f32cc-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:07 crc kubenswrapper[4786]: I1209 09:09:07.297562 4786 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2f1a0aba-cb9d-4116-b10f-8799027f32cc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:07 crc kubenswrapper[4786]: I1209 09:09:07.615663 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"2f1a0aba-cb9d-4116-b10f-8799027f32cc","Type":"ContainerDied","Data":"ea09d082e7b2f4ff0cd70466928e7dde698ccc4787957d9e099df6d5df2ec049"} Dec 09 09:09:07 crc kubenswrapper[4786]: I1209 09:09:07.615730 4786 scope.go:117] "RemoveContainer" containerID="bfb3345340985bad4cd21601504da4ecc36e4b42456e364b55a49e090974c054" Dec 09 09:09:07 crc kubenswrapper[4786]: I1209 09:09:07.617036 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 09:09:07 crc kubenswrapper[4786]: I1209 09:09:07.621630 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lsvsn" event={"ID":"bbcc743e-9b17-423c-988b-da68abf2f608","Type":"ContainerStarted","Data":"20009cd281e98d10b3a4e41b845072954a768cb887745988c1e5cf4216f6b9a0"} Dec 09 09:09:07 crc kubenswrapper[4786]: I1209 09:09:07.647569 4786 scope.go:117] "RemoveContainer" containerID="083c099d68fba337a44dc846cdf6a85580c67ca3c80de122c691b280d43764d5" Dec 09 09:09:07 crc kubenswrapper[4786]: I1209 09:09:07.659121 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lsvsn" podStartSLOduration=3.91642718 podStartE2EDuration="10.659092616s" podCreationTimestamp="2025-12-09 09:08:57 +0000 UTC" firstStartedPulling="2025-12-09 09:09:00.431601152 +0000 UTC m=+1506.315222368" lastFinishedPulling="2025-12-09 09:09:07.174266568 +0000 UTC m=+1513.057887804" observedRunningTime="2025-12-09 09:09:07.640983116 +0000 UTC m=+1513.524604362" watchObservedRunningTime="2025-12-09 09:09:07.659092616 +0000 UTC m=+1513.542713862" Dec 09 09:09:07 crc kubenswrapper[4786]: I1209 09:09:07.691632 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 09:09:07 crc kubenswrapper[4786]: I1209 09:09:07.703364 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 09:09:07 crc kubenswrapper[4786]: I1209 09:09:07.714495 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 09 09:09:07 crc kubenswrapper[4786]: E1209 09:09:07.715294 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f1a0aba-cb9d-4116-b10f-8799027f32cc" containerName="nova-metadata-metadata" Dec 09 09:09:07 crc kubenswrapper[4786]: I1209 09:09:07.715316 4786 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2f1a0aba-cb9d-4116-b10f-8799027f32cc" containerName="nova-metadata-metadata" Dec 09 09:09:07 crc kubenswrapper[4786]: E1209 09:09:07.715349 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f1a0aba-cb9d-4116-b10f-8799027f32cc" containerName="nova-metadata-log" Dec 09 09:09:07 crc kubenswrapper[4786]: I1209 09:09:07.715359 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f1a0aba-cb9d-4116-b10f-8799027f32cc" containerName="nova-metadata-log" Dec 09 09:09:07 crc kubenswrapper[4786]: I1209 09:09:07.715765 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f1a0aba-cb9d-4116-b10f-8799027f32cc" containerName="nova-metadata-metadata" Dec 09 09:09:07 crc kubenswrapper[4786]: I1209 09:09:07.715795 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f1a0aba-cb9d-4116-b10f-8799027f32cc" containerName="nova-metadata-log" Dec 09 09:09:07 crc kubenswrapper[4786]: I1209 09:09:07.718155 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 09:09:07 crc kubenswrapper[4786]: I1209 09:09:07.721441 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 09 09:09:07 crc kubenswrapper[4786]: I1209 09:09:07.724404 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 09:09:07 crc kubenswrapper[4786]: I1209 09:09:07.725664 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 09 09:09:07 crc kubenswrapper[4786]: I1209 09:09:07.744440 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 09 09:09:07 crc kubenswrapper[4786]: I1209 09:09:07.744545 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 09 09:09:07 crc kubenswrapper[4786]: I1209 09:09:07.782882 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 09 09:09:07 crc kubenswrapper[4786]: I1209 09:09:07.808397 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1989ece6-0e63-40dd-9673-61d2ad836afe-config-data\") pod \"nova-metadata-0\" (UID: \"1989ece6-0e63-40dd-9673-61d2ad836afe\") " pod="openstack/nova-metadata-0" Dec 09 09:09:07 crc kubenswrapper[4786]: I1209 09:09:07.808517 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1989ece6-0e63-40dd-9673-61d2ad836afe-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1989ece6-0e63-40dd-9673-61d2ad836afe\") " pod="openstack/nova-metadata-0" Dec 09 09:09:07 crc kubenswrapper[4786]: I1209 09:09:07.808657 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1989ece6-0e63-40dd-9673-61d2ad836afe-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1989ece6-0e63-40dd-9673-61d2ad836afe\") " pod="openstack/nova-metadata-0" Dec 09 09:09:07 crc kubenswrapper[4786]: I1209 09:09:07.808678 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1989ece6-0e63-40dd-9673-61d2ad836afe-logs\") pod \"nova-metadata-0\" (UID: \"1989ece6-0e63-40dd-9673-61d2ad836afe\") " pod="openstack/nova-metadata-0" Dec 09 09:09:07 crc kubenswrapper[4786]: I1209 09:09:07.808698 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsv7g\" (UniqueName: \"kubernetes.io/projected/1989ece6-0e63-40dd-9673-61d2ad836afe-kube-api-access-gsv7g\") pod \"nova-metadata-0\" (UID: \"1989ece6-0e63-40dd-9673-61d2ad836afe\") " pod="openstack/nova-metadata-0" Dec 09 09:09:07 crc kubenswrapper[4786]: I1209 09:09:07.911853 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1989ece6-0e63-40dd-9673-61d2ad836afe-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1989ece6-0e63-40dd-9673-61d2ad836afe\") " pod="openstack/nova-metadata-0" Dec 09 09:09:07 crc kubenswrapper[4786]: I1209 09:09:07.911912 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1989ece6-0e63-40dd-9673-61d2ad836afe-logs\") pod \"nova-metadata-0\" (UID: \"1989ece6-0e63-40dd-9673-61d2ad836afe\") " pod="openstack/nova-metadata-0" Dec 09 09:09:07 crc kubenswrapper[4786]: I1209 09:09:07.911948 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gsv7g\" (UniqueName: \"kubernetes.io/projected/1989ece6-0e63-40dd-9673-61d2ad836afe-kube-api-access-gsv7g\") pod \"nova-metadata-0\" (UID: 
\"1989ece6-0e63-40dd-9673-61d2ad836afe\") " pod="openstack/nova-metadata-0" Dec 09 09:09:07 crc kubenswrapper[4786]: I1209 09:09:07.912010 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1989ece6-0e63-40dd-9673-61d2ad836afe-config-data\") pod \"nova-metadata-0\" (UID: \"1989ece6-0e63-40dd-9673-61d2ad836afe\") " pod="openstack/nova-metadata-0" Dec 09 09:09:07 crc kubenswrapper[4786]: I1209 09:09:07.912065 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1989ece6-0e63-40dd-9673-61d2ad836afe-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1989ece6-0e63-40dd-9673-61d2ad836afe\") " pod="openstack/nova-metadata-0" Dec 09 09:09:07 crc kubenswrapper[4786]: I1209 09:09:07.912360 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1989ece6-0e63-40dd-9673-61d2ad836afe-logs\") pod \"nova-metadata-0\" (UID: \"1989ece6-0e63-40dd-9673-61d2ad836afe\") " pod="openstack/nova-metadata-0" Dec 09 09:09:07 crc kubenswrapper[4786]: I1209 09:09:07.917753 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1989ece6-0e63-40dd-9673-61d2ad836afe-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"1989ece6-0e63-40dd-9673-61d2ad836afe\") " pod="openstack/nova-metadata-0" Dec 09 09:09:07 crc kubenswrapper[4786]: I1209 09:09:07.918189 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1989ece6-0e63-40dd-9673-61d2ad836afe-config-data\") pod \"nova-metadata-0\" (UID: \"1989ece6-0e63-40dd-9673-61d2ad836afe\") " pod="openstack/nova-metadata-0" Dec 09 09:09:07 crc kubenswrapper[4786]: I1209 09:09:07.918492 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1989ece6-0e63-40dd-9673-61d2ad836afe-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"1989ece6-0e63-40dd-9673-61d2ad836afe\") " pod="openstack/nova-metadata-0" Dec 09 09:09:07 crc kubenswrapper[4786]: I1209 09:09:07.933024 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gsv7g\" (UniqueName: \"kubernetes.io/projected/1989ece6-0e63-40dd-9673-61d2ad836afe-kube-api-access-gsv7g\") pod \"nova-metadata-0\" (UID: \"1989ece6-0e63-40dd-9673-61d2ad836afe\") " pod="openstack/nova-metadata-0" Dec 09 09:09:08 crc kubenswrapper[4786]: I1209 09:09:08.058321 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 09:09:08 crc kubenswrapper[4786]: I1209 09:09:08.075230 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 09 09:09:08 crc kubenswrapper[4786]: I1209 09:09:08.093347 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 09 09:09:08 crc kubenswrapper[4786]: I1209 09:09:08.099575 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 09 09:09:08 crc kubenswrapper[4786]: I1209 09:09:08.137796 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lsvsn" Dec 09 09:09:08 crc kubenswrapper[4786]: I1209 09:09:08.138104 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lsvsn" Dec 09 09:09:08 crc kubenswrapper[4786]: I1209 09:09:08.433299 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6845884987-mlcxv" Dec 09 09:09:08 crc kubenswrapper[4786]: I1209 09:09:08.491260 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f47965bdf-2nd22"] Dec 09 
09:09:08 crc kubenswrapper[4786]: I1209 09:09:08.622027 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 09:09:08 crc kubenswrapper[4786]: I1209 09:09:08.667810 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f47965bdf-2nd22" podUID="954dc35f-7a55-408f-8d7c-2ad4420dc25e" containerName="dnsmasq-dns" containerID="cri-o://d83a175f4722f25f24ffb6a111c27d1a69c5788ba89b2c1fc8e9d4037ef46177" gracePeriod=10 Dec 09 09:09:08 crc kubenswrapper[4786]: I1209 09:09:08.715880 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 09 09:09:09 crc kubenswrapper[4786]: I1209 09:09:09.179627 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="55dc8742-0786-4103-9136-b8406e4fd914" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.206:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 09 09:09:09 crc kubenswrapper[4786]: I1209 09:09:09.180768 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="55dc8742-0786-4103-9136-b8406e4fd914" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.206:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 09 09:09:09 crc kubenswrapper[4786]: I1209 09:09:09.201749 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-lsvsn" podUID="bbcc743e-9b17-423c-988b-da68abf2f608" containerName="registry-server" probeResult="failure" output=< Dec 09 09:09:09 crc kubenswrapper[4786]: timeout: failed to connect service ":50051" within 1s Dec 09 09:09:09 crc kubenswrapper[4786]: > Dec 09 09:09:09 crc kubenswrapper[4786]: I1209 09:09:09.230192 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f1a0aba-cb9d-4116-b10f-8799027f32cc" 
path="/var/lib/kubelet/pods/2f1a0aba-cb9d-4116-b10f-8799027f32cc/volumes" Dec 09 09:09:09 crc kubenswrapper[4786]: I1209 09:09:09.367009 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f47965bdf-2nd22" Dec 09 09:09:09 crc kubenswrapper[4786]: I1209 09:09:09.467119 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hp62p\" (UniqueName: \"kubernetes.io/projected/954dc35f-7a55-408f-8d7c-2ad4420dc25e-kube-api-access-hp62p\") pod \"954dc35f-7a55-408f-8d7c-2ad4420dc25e\" (UID: \"954dc35f-7a55-408f-8d7c-2ad4420dc25e\") " Dec 09 09:09:09 crc kubenswrapper[4786]: I1209 09:09:09.467232 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/954dc35f-7a55-408f-8d7c-2ad4420dc25e-dns-svc\") pod \"954dc35f-7a55-408f-8d7c-2ad4420dc25e\" (UID: \"954dc35f-7a55-408f-8d7c-2ad4420dc25e\") " Dec 09 09:09:09 crc kubenswrapper[4786]: I1209 09:09:09.467406 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/954dc35f-7a55-408f-8d7c-2ad4420dc25e-ovsdbserver-nb\") pod \"954dc35f-7a55-408f-8d7c-2ad4420dc25e\" (UID: \"954dc35f-7a55-408f-8d7c-2ad4420dc25e\") " Dec 09 09:09:09 crc kubenswrapper[4786]: I1209 09:09:09.467491 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/954dc35f-7a55-408f-8d7c-2ad4420dc25e-dns-swift-storage-0\") pod \"954dc35f-7a55-408f-8d7c-2ad4420dc25e\" (UID: \"954dc35f-7a55-408f-8d7c-2ad4420dc25e\") " Dec 09 09:09:09 crc kubenswrapper[4786]: I1209 09:09:09.467538 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/954dc35f-7a55-408f-8d7c-2ad4420dc25e-config\") pod \"954dc35f-7a55-408f-8d7c-2ad4420dc25e\" (UID: 
\"954dc35f-7a55-408f-8d7c-2ad4420dc25e\") " Dec 09 09:09:09 crc kubenswrapper[4786]: I1209 09:09:09.467616 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/954dc35f-7a55-408f-8d7c-2ad4420dc25e-ovsdbserver-sb\") pod \"954dc35f-7a55-408f-8d7c-2ad4420dc25e\" (UID: \"954dc35f-7a55-408f-8d7c-2ad4420dc25e\") " Dec 09 09:09:09 crc kubenswrapper[4786]: I1209 09:09:09.476108 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/954dc35f-7a55-408f-8d7c-2ad4420dc25e-kube-api-access-hp62p" (OuterVolumeSpecName: "kube-api-access-hp62p") pod "954dc35f-7a55-408f-8d7c-2ad4420dc25e" (UID: "954dc35f-7a55-408f-8d7c-2ad4420dc25e"). InnerVolumeSpecName "kube-api-access-hp62p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:09:09 crc kubenswrapper[4786]: I1209 09:09:09.540500 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/954dc35f-7a55-408f-8d7c-2ad4420dc25e-config" (OuterVolumeSpecName: "config") pod "954dc35f-7a55-408f-8d7c-2ad4420dc25e" (UID: "954dc35f-7a55-408f-8d7c-2ad4420dc25e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:09:09 crc kubenswrapper[4786]: I1209 09:09:09.541749 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/954dc35f-7a55-408f-8d7c-2ad4420dc25e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "954dc35f-7a55-408f-8d7c-2ad4420dc25e" (UID: "954dc35f-7a55-408f-8d7c-2ad4420dc25e"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:09:09 crc kubenswrapper[4786]: I1209 09:09:09.549650 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/954dc35f-7a55-408f-8d7c-2ad4420dc25e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "954dc35f-7a55-408f-8d7c-2ad4420dc25e" (UID: "954dc35f-7a55-408f-8d7c-2ad4420dc25e"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:09:09 crc kubenswrapper[4786]: I1209 09:09:09.570938 4786 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/954dc35f-7a55-408f-8d7c-2ad4420dc25e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:09 crc kubenswrapper[4786]: I1209 09:09:09.570975 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/954dc35f-7a55-408f-8d7c-2ad4420dc25e-config\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:09 crc kubenswrapper[4786]: I1209 09:09:09.570988 4786 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/954dc35f-7a55-408f-8d7c-2ad4420dc25e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:09 crc kubenswrapper[4786]: I1209 09:09:09.571000 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hp62p\" (UniqueName: \"kubernetes.io/projected/954dc35f-7a55-408f-8d7c-2ad4420dc25e-kube-api-access-hp62p\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:09 crc kubenswrapper[4786]: I1209 09:09:09.575148 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/954dc35f-7a55-408f-8d7c-2ad4420dc25e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "954dc35f-7a55-408f-8d7c-2ad4420dc25e" (UID: "954dc35f-7a55-408f-8d7c-2ad4420dc25e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:09:09 crc kubenswrapper[4786]: I1209 09:09:09.576662 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/954dc35f-7a55-408f-8d7c-2ad4420dc25e-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "954dc35f-7a55-408f-8d7c-2ad4420dc25e" (UID: "954dc35f-7a55-408f-8d7c-2ad4420dc25e"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:09:09 crc kubenswrapper[4786]: I1209 09:09:09.673238 4786 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/954dc35f-7a55-408f-8d7c-2ad4420dc25e-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:09 crc kubenswrapper[4786]: I1209 09:09:09.673287 4786 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/954dc35f-7a55-408f-8d7c-2ad4420dc25e-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:09 crc kubenswrapper[4786]: I1209 09:09:09.691645 4786 generic.go:334] "Generic (PLEG): container finished" podID="eb69b433-50fe-4d72-8cba-96a73e6cc10d" containerID="253a504921a036b1977aafd6d2220bfa1d435624b1eb322b488499922e89a3d3" exitCode=0 Dec 09 09:09:09 crc kubenswrapper[4786]: I1209 09:09:09.691720 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-jbbsf" event={"ID":"eb69b433-50fe-4d72-8cba-96a73e6cc10d","Type":"ContainerDied","Data":"253a504921a036b1977aafd6d2220bfa1d435624b1eb322b488499922e89a3d3"} Dec 09 09:09:09 crc kubenswrapper[4786]: I1209 09:09:09.701403 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1989ece6-0e63-40dd-9673-61d2ad836afe","Type":"ContainerStarted","Data":"5ea7b4be9d87cfc068c130ca6199029a7ad13469c3ee3dfd9981896bcc16ec67"} Dec 09 09:09:09 crc kubenswrapper[4786]: I1209 09:09:09.701481 4786 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1989ece6-0e63-40dd-9673-61d2ad836afe","Type":"ContainerStarted","Data":"2aeb22ec04e4b57da3a9e76e251c0cf01ef3a535633d31780d902dd574675f98"} Dec 09 09:09:09 crc kubenswrapper[4786]: I1209 09:09:09.701495 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1989ece6-0e63-40dd-9673-61d2ad836afe","Type":"ContainerStarted","Data":"55b12b05036ba1c3096de67afdc1d7415dfd3a646cc5d776620df4489e5e6977"} Dec 09 09:09:09 crc kubenswrapper[4786]: I1209 09:09:09.705231 4786 generic.go:334] "Generic (PLEG): container finished" podID="954dc35f-7a55-408f-8d7c-2ad4420dc25e" containerID="d83a175f4722f25f24ffb6a111c27d1a69c5788ba89b2c1fc8e9d4037ef46177" exitCode=0 Dec 09 09:09:09 crc kubenswrapper[4786]: I1209 09:09:09.705944 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f47965bdf-2nd22" Dec 09 09:09:09 crc kubenswrapper[4786]: I1209 09:09:09.706354 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f47965bdf-2nd22" event={"ID":"954dc35f-7a55-408f-8d7c-2ad4420dc25e","Type":"ContainerDied","Data":"d83a175f4722f25f24ffb6a111c27d1a69c5788ba89b2c1fc8e9d4037ef46177"} Dec 09 09:09:09 crc kubenswrapper[4786]: I1209 09:09:09.706418 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f47965bdf-2nd22" event={"ID":"954dc35f-7a55-408f-8d7c-2ad4420dc25e","Type":"ContainerDied","Data":"9ce70ae7e098e2ad97669dd5ad4c52ef4b02a8fee9772680ea6c61836f2f60c9"} Dec 09 09:09:09 crc kubenswrapper[4786]: I1209 09:09:09.706463 4786 scope.go:117] "RemoveContainer" containerID="d83a175f4722f25f24ffb6a111c27d1a69c5788ba89b2c1fc8e9d4037ef46177" Dec 09 09:09:09 crc kubenswrapper[4786]: I1209 09:09:09.743093 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.743071686 
podStartE2EDuration="2.743071686s" podCreationTimestamp="2025-12-09 09:09:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:09:09.737490197 +0000 UTC m=+1515.621111433" watchObservedRunningTime="2025-12-09 09:09:09.743071686 +0000 UTC m=+1515.626692912" Dec 09 09:09:09 crc kubenswrapper[4786]: I1209 09:09:09.743921 4786 scope.go:117] "RemoveContainer" containerID="00a82cb8a36ac9c2e30f1d9061976704a3f694fde3862de83c5d46abc9f3d3b2" Dec 09 09:09:09 crc kubenswrapper[4786]: I1209 09:09:09.782871 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f47965bdf-2nd22"] Dec 09 09:09:09 crc kubenswrapper[4786]: I1209 09:09:09.792577 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f47965bdf-2nd22"] Dec 09 09:09:09 crc kubenswrapper[4786]: I1209 09:09:09.801329 4786 scope.go:117] "RemoveContainer" containerID="d83a175f4722f25f24ffb6a111c27d1a69c5788ba89b2c1fc8e9d4037ef46177" Dec 09 09:09:09 crc kubenswrapper[4786]: E1209 09:09:09.802233 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d83a175f4722f25f24ffb6a111c27d1a69c5788ba89b2c1fc8e9d4037ef46177\": container with ID starting with d83a175f4722f25f24ffb6a111c27d1a69c5788ba89b2c1fc8e9d4037ef46177 not found: ID does not exist" containerID="d83a175f4722f25f24ffb6a111c27d1a69c5788ba89b2c1fc8e9d4037ef46177" Dec 09 09:09:09 crc kubenswrapper[4786]: I1209 09:09:09.802281 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d83a175f4722f25f24ffb6a111c27d1a69c5788ba89b2c1fc8e9d4037ef46177"} err="failed to get container status \"d83a175f4722f25f24ffb6a111c27d1a69c5788ba89b2c1fc8e9d4037ef46177\": rpc error: code = NotFound desc = could not find container \"d83a175f4722f25f24ffb6a111c27d1a69c5788ba89b2c1fc8e9d4037ef46177\": container with ID 
starting with d83a175f4722f25f24ffb6a111c27d1a69c5788ba89b2c1fc8e9d4037ef46177 not found: ID does not exist" Dec 09 09:09:09 crc kubenswrapper[4786]: I1209 09:09:09.802312 4786 scope.go:117] "RemoveContainer" containerID="00a82cb8a36ac9c2e30f1d9061976704a3f694fde3862de83c5d46abc9f3d3b2" Dec 09 09:09:09 crc kubenswrapper[4786]: E1209 09:09:09.802918 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00a82cb8a36ac9c2e30f1d9061976704a3f694fde3862de83c5d46abc9f3d3b2\": container with ID starting with 00a82cb8a36ac9c2e30f1d9061976704a3f694fde3862de83c5d46abc9f3d3b2 not found: ID does not exist" containerID="00a82cb8a36ac9c2e30f1d9061976704a3f694fde3862de83c5d46abc9f3d3b2" Dec 09 09:09:09 crc kubenswrapper[4786]: I1209 09:09:09.802958 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00a82cb8a36ac9c2e30f1d9061976704a3f694fde3862de83c5d46abc9f3d3b2"} err="failed to get container status \"00a82cb8a36ac9c2e30f1d9061976704a3f694fde3862de83c5d46abc9f3d3b2\": rpc error: code = NotFound desc = could not find container \"00a82cb8a36ac9c2e30f1d9061976704a3f694fde3862de83c5d46abc9f3d3b2\": container with ID starting with 00a82cb8a36ac9c2e30f1d9061976704a3f694fde3862de83c5d46abc9f3d3b2 not found: ID does not exist" Dec 09 09:09:11 crc kubenswrapper[4786]: I1209 09:09:11.078021 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-jbbsf" Dec 09 09:09:11 crc kubenswrapper[4786]: I1209 09:09:11.112189 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb69b433-50fe-4d72-8cba-96a73e6cc10d-config-data\") pod \"eb69b433-50fe-4d72-8cba-96a73e6cc10d\" (UID: \"eb69b433-50fe-4d72-8cba-96a73e6cc10d\") " Dec 09 09:09:11 crc kubenswrapper[4786]: I1209 09:09:11.112339 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psh7d\" (UniqueName: \"kubernetes.io/projected/eb69b433-50fe-4d72-8cba-96a73e6cc10d-kube-api-access-psh7d\") pod \"eb69b433-50fe-4d72-8cba-96a73e6cc10d\" (UID: \"eb69b433-50fe-4d72-8cba-96a73e6cc10d\") " Dec 09 09:09:11 crc kubenswrapper[4786]: I1209 09:09:11.112366 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb69b433-50fe-4d72-8cba-96a73e6cc10d-scripts\") pod \"eb69b433-50fe-4d72-8cba-96a73e6cc10d\" (UID: \"eb69b433-50fe-4d72-8cba-96a73e6cc10d\") " Dec 09 09:09:11 crc kubenswrapper[4786]: I1209 09:09:11.112580 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb69b433-50fe-4d72-8cba-96a73e6cc10d-combined-ca-bundle\") pod \"eb69b433-50fe-4d72-8cba-96a73e6cc10d\" (UID: \"eb69b433-50fe-4d72-8cba-96a73e6cc10d\") " Dec 09 09:09:11 crc kubenswrapper[4786]: I1209 09:09:11.120575 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb69b433-50fe-4d72-8cba-96a73e6cc10d-kube-api-access-psh7d" (OuterVolumeSpecName: "kube-api-access-psh7d") pod "eb69b433-50fe-4d72-8cba-96a73e6cc10d" (UID: "eb69b433-50fe-4d72-8cba-96a73e6cc10d"). InnerVolumeSpecName "kube-api-access-psh7d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:09:11 crc kubenswrapper[4786]: I1209 09:09:11.159612 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb69b433-50fe-4d72-8cba-96a73e6cc10d-config-data" (OuterVolumeSpecName: "config-data") pod "eb69b433-50fe-4d72-8cba-96a73e6cc10d" (UID: "eb69b433-50fe-4d72-8cba-96a73e6cc10d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:09:11 crc kubenswrapper[4786]: I1209 09:09:11.160690 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb69b433-50fe-4d72-8cba-96a73e6cc10d-scripts" (OuterVolumeSpecName: "scripts") pod "eb69b433-50fe-4d72-8cba-96a73e6cc10d" (UID: "eb69b433-50fe-4d72-8cba-96a73e6cc10d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:09:11 crc kubenswrapper[4786]: I1209 09:09:11.168875 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb69b433-50fe-4d72-8cba-96a73e6cc10d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eb69b433-50fe-4d72-8cba-96a73e6cc10d" (UID: "eb69b433-50fe-4d72-8cba-96a73e6cc10d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:09:11 crc kubenswrapper[4786]: I1209 09:09:11.204889 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="954dc35f-7a55-408f-8d7c-2ad4420dc25e" path="/var/lib/kubelet/pods/954dc35f-7a55-408f-8d7c-2ad4420dc25e/volumes" Dec 09 09:09:11 crc kubenswrapper[4786]: I1209 09:09:11.215579 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/eb69b433-50fe-4d72-8cba-96a73e6cc10d-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:11 crc kubenswrapper[4786]: I1209 09:09:11.215631 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-psh7d\" (UniqueName: \"kubernetes.io/projected/eb69b433-50fe-4d72-8cba-96a73e6cc10d-kube-api-access-psh7d\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:11 crc kubenswrapper[4786]: I1209 09:09:11.215643 4786 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/eb69b433-50fe-4d72-8cba-96a73e6cc10d-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:11 crc kubenswrapper[4786]: I1209 09:09:11.215652 4786 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eb69b433-50fe-4d72-8cba-96a73e6cc10d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:11 crc kubenswrapper[4786]: I1209 09:09:11.725916 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-jbbsf" event={"ID":"eb69b433-50fe-4d72-8cba-96a73e6cc10d","Type":"ContainerDied","Data":"7714154a3fb2a807049428956ce3fd87ab1589b13a9ca42d7b93c2253bddef68"} Dec 09 09:09:11 crc kubenswrapper[4786]: I1209 09:09:11.725974 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7714154a3fb2a807049428956ce3fd87ab1589b13a9ca42d7b93c2253bddef68" Dec 09 09:09:11 crc kubenswrapper[4786]: I1209 09:09:11.725970 4786 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-jbbsf" Dec 09 09:09:11 crc kubenswrapper[4786]: I1209 09:09:11.916927 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 09 09:09:11 crc kubenswrapper[4786]: I1209 09:09:11.917899 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="55dc8742-0786-4103-9136-b8406e4fd914" containerName="nova-api-log" containerID="cri-o://e1f58d67dcf1419749984992168e6da5f2260481c9fa9236e2fc0d82fdf2c9c7" gracePeriod=30 Dec 09 09:09:11 crc kubenswrapper[4786]: I1209 09:09:11.918174 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="55dc8742-0786-4103-9136-b8406e4fd914" containerName="nova-api-api" containerID="cri-o://e767e272560c7d7c0fc9a31a3a7c53bdd06baafa8f71b8567b50ef96858e88be" gracePeriod=30 Dec 09 09:09:11 crc kubenswrapper[4786]: I1209 09:09:11.937344 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 09:09:11 crc kubenswrapper[4786]: I1209 09:09:11.937699 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="ee683733-fa41-4b9a-a2f8-0a96d13d489a" containerName="nova-scheduler-scheduler" containerID="cri-o://d507a7a638c40d693356afeb806ccce120ff2e55879f65315a33817ad9961038" gracePeriod=30 Dec 09 09:09:11 crc kubenswrapper[4786]: I1209 09:09:11.983188 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 09:09:11 crc kubenswrapper[4786]: I1209 09:09:11.984544 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1989ece6-0e63-40dd-9673-61d2ad836afe" containerName="nova-metadata-metadata" containerID="cri-o://5ea7b4be9d87cfc068c130ca6199029a7ad13469c3ee3dfd9981896bcc16ec67" gracePeriod=30 Dec 09 09:09:11 crc kubenswrapper[4786]: I1209 
09:09:11.984786 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="1989ece6-0e63-40dd-9673-61d2ad836afe" containerName="nova-metadata-log" containerID="cri-o://2aeb22ec04e4b57da3a9e76e251c0cf01ef3a535633d31780d902dd574675f98" gracePeriod=30 Dec 09 09:09:12 crc kubenswrapper[4786]: E1209 09:09:12.747695 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d507a7a638c40d693356afeb806ccce120ff2e55879f65315a33817ad9961038" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 09 09:09:12 crc kubenswrapper[4786]: I1209 09:09:12.748164 4786 generic.go:334] "Generic (PLEG): container finished" podID="55dc8742-0786-4103-9136-b8406e4fd914" containerID="e1f58d67dcf1419749984992168e6da5f2260481c9fa9236e2fc0d82fdf2c9c7" exitCode=143 Dec 09 09:09:12 crc kubenswrapper[4786]: I1209 09:09:12.748227 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"55dc8742-0786-4103-9136-b8406e4fd914","Type":"ContainerDied","Data":"e1f58d67dcf1419749984992168e6da5f2260481c9fa9236e2fc0d82fdf2c9c7"} Dec 09 09:09:12 crc kubenswrapper[4786]: E1209 09:09:12.750024 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d507a7a638c40d693356afeb806ccce120ff2e55879f65315a33817ad9961038" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 09 09:09:12 crc kubenswrapper[4786]: E1209 09:09:12.751658 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d507a7a638c40d693356afeb806ccce120ff2e55879f65315a33817ad9961038" 
cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 09 09:09:12 crc kubenswrapper[4786]: E1209 09:09:12.751696 4786 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="ee683733-fa41-4b9a-a2f8-0a96d13d489a" containerName="nova-scheduler-scheduler" Dec 09 09:09:12 crc kubenswrapper[4786]: I1209 09:09:12.752469 4786 generic.go:334] "Generic (PLEG): container finished" podID="1989ece6-0e63-40dd-9673-61d2ad836afe" containerID="5ea7b4be9d87cfc068c130ca6199029a7ad13469c3ee3dfd9981896bcc16ec67" exitCode=0 Dec 09 09:09:12 crc kubenswrapper[4786]: I1209 09:09:12.752491 4786 generic.go:334] "Generic (PLEG): container finished" podID="1989ece6-0e63-40dd-9673-61d2ad836afe" containerID="2aeb22ec04e4b57da3a9e76e251c0cf01ef3a535633d31780d902dd574675f98" exitCode=143 Dec 09 09:09:12 crc kubenswrapper[4786]: I1209 09:09:12.752509 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1989ece6-0e63-40dd-9673-61d2ad836afe","Type":"ContainerDied","Data":"5ea7b4be9d87cfc068c130ca6199029a7ad13469c3ee3dfd9981896bcc16ec67"} Dec 09 09:09:12 crc kubenswrapper[4786]: I1209 09:09:12.752552 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1989ece6-0e63-40dd-9673-61d2ad836afe","Type":"ContainerDied","Data":"2aeb22ec04e4b57da3a9e76e251c0cf01ef3a535633d31780d902dd574675f98"} Dec 09 09:09:13 crc kubenswrapper[4786]: I1209 09:09:13.059690 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 09 09:09:13 crc kubenswrapper[4786]: I1209 09:09:13.059777 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 09 09:09:13 crc kubenswrapper[4786]: I1209 09:09:13.169097 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 09:09:13 crc kubenswrapper[4786]: I1209 09:09:13.285044 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gsv7g\" (UniqueName: \"kubernetes.io/projected/1989ece6-0e63-40dd-9673-61d2ad836afe-kube-api-access-gsv7g\") pod \"1989ece6-0e63-40dd-9673-61d2ad836afe\" (UID: \"1989ece6-0e63-40dd-9673-61d2ad836afe\") " Dec 09 09:09:13 crc kubenswrapper[4786]: I1209 09:09:13.285169 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1989ece6-0e63-40dd-9673-61d2ad836afe-config-data\") pod \"1989ece6-0e63-40dd-9673-61d2ad836afe\" (UID: \"1989ece6-0e63-40dd-9673-61d2ad836afe\") " Dec 09 09:09:13 crc kubenswrapper[4786]: I1209 09:09:13.285231 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1989ece6-0e63-40dd-9673-61d2ad836afe-nova-metadata-tls-certs\") pod \"1989ece6-0e63-40dd-9673-61d2ad836afe\" (UID: \"1989ece6-0e63-40dd-9673-61d2ad836afe\") " Dec 09 09:09:13 crc kubenswrapper[4786]: I1209 09:09:13.285264 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1989ece6-0e63-40dd-9673-61d2ad836afe-combined-ca-bundle\") pod \"1989ece6-0e63-40dd-9673-61d2ad836afe\" (UID: \"1989ece6-0e63-40dd-9673-61d2ad836afe\") " Dec 09 09:09:13 crc kubenswrapper[4786]: I1209 09:09:13.286509 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1989ece6-0e63-40dd-9673-61d2ad836afe-logs\") pod \"1989ece6-0e63-40dd-9673-61d2ad836afe\" (UID: \"1989ece6-0e63-40dd-9673-61d2ad836afe\") " Dec 09 09:09:13 crc kubenswrapper[4786]: I1209 09:09:13.287291 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/1989ece6-0e63-40dd-9673-61d2ad836afe-logs" (OuterVolumeSpecName: "logs") pod "1989ece6-0e63-40dd-9673-61d2ad836afe" (UID: "1989ece6-0e63-40dd-9673-61d2ad836afe"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:09:13 crc kubenswrapper[4786]: I1209 09:09:13.292200 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1989ece6-0e63-40dd-9673-61d2ad836afe-kube-api-access-gsv7g" (OuterVolumeSpecName: "kube-api-access-gsv7g") pod "1989ece6-0e63-40dd-9673-61d2ad836afe" (UID: "1989ece6-0e63-40dd-9673-61d2ad836afe"). InnerVolumeSpecName "kube-api-access-gsv7g". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:09:13 crc kubenswrapper[4786]: I1209 09:09:13.317129 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1989ece6-0e63-40dd-9673-61d2ad836afe-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1989ece6-0e63-40dd-9673-61d2ad836afe" (UID: "1989ece6-0e63-40dd-9673-61d2ad836afe"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:09:13 crc kubenswrapper[4786]: I1209 09:09:13.324517 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1989ece6-0e63-40dd-9673-61d2ad836afe-config-data" (OuterVolumeSpecName: "config-data") pod "1989ece6-0e63-40dd-9673-61d2ad836afe" (UID: "1989ece6-0e63-40dd-9673-61d2ad836afe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:09:13 crc kubenswrapper[4786]: I1209 09:09:13.349116 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1989ece6-0e63-40dd-9673-61d2ad836afe-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "1989ece6-0e63-40dd-9673-61d2ad836afe" (UID: "1989ece6-0e63-40dd-9673-61d2ad836afe"). 
InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:09:13 crc kubenswrapper[4786]: I1209 09:09:13.390816 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gsv7g\" (UniqueName: \"kubernetes.io/projected/1989ece6-0e63-40dd-9673-61d2ad836afe-kube-api-access-gsv7g\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:13 crc kubenswrapper[4786]: I1209 09:09:13.390974 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1989ece6-0e63-40dd-9673-61d2ad836afe-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:13 crc kubenswrapper[4786]: I1209 09:09:13.390990 4786 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/1989ece6-0e63-40dd-9673-61d2ad836afe-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:13 crc kubenswrapper[4786]: I1209 09:09:13.391008 4786 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1989ece6-0e63-40dd-9673-61d2ad836afe-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:13 crc kubenswrapper[4786]: I1209 09:09:13.391022 4786 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1989ece6-0e63-40dd-9673-61d2ad836afe-logs\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:13 crc kubenswrapper[4786]: I1209 09:09:13.766064 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"1989ece6-0e63-40dd-9673-61d2ad836afe","Type":"ContainerDied","Data":"55b12b05036ba1c3096de67afdc1d7415dfd3a646cc5d776620df4489e5e6977"} Dec 09 09:09:13 crc kubenswrapper[4786]: I1209 09:09:13.766136 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 09:09:13 crc kubenswrapper[4786]: I1209 09:09:13.766504 4786 scope.go:117] "RemoveContainer" containerID="5ea7b4be9d87cfc068c130ca6199029a7ad13469c3ee3dfd9981896bcc16ec67" Dec 09 09:09:13 crc kubenswrapper[4786]: I1209 09:09:13.815522 4786 scope.go:117] "RemoveContainer" containerID="2aeb22ec04e4b57da3a9e76e251c0cf01ef3a535633d31780d902dd574675f98" Dec 09 09:09:13 crc kubenswrapper[4786]: I1209 09:09:13.826554 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 09:09:13 crc kubenswrapper[4786]: I1209 09:09:13.844225 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 09:09:13 crc kubenswrapper[4786]: I1209 09:09:13.869623 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 09 09:09:13 crc kubenswrapper[4786]: E1209 09:09:13.871044 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1989ece6-0e63-40dd-9673-61d2ad836afe" containerName="nova-metadata-metadata" Dec 09 09:09:13 crc kubenswrapper[4786]: I1209 09:09:13.871071 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="1989ece6-0e63-40dd-9673-61d2ad836afe" containerName="nova-metadata-metadata" Dec 09 09:09:13 crc kubenswrapper[4786]: E1209 09:09:13.871166 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="954dc35f-7a55-408f-8d7c-2ad4420dc25e" containerName="dnsmasq-dns" Dec 09 09:09:13 crc kubenswrapper[4786]: I1209 09:09:13.871177 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="954dc35f-7a55-408f-8d7c-2ad4420dc25e" containerName="dnsmasq-dns" Dec 09 09:09:13 crc kubenswrapper[4786]: E1209 09:09:13.871189 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1989ece6-0e63-40dd-9673-61d2ad836afe" containerName="nova-metadata-log" Dec 09 09:09:13 crc kubenswrapper[4786]: I1209 09:09:13.871202 4786 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1989ece6-0e63-40dd-9673-61d2ad836afe" containerName="nova-metadata-log" Dec 09 09:09:13 crc kubenswrapper[4786]: E1209 09:09:13.871247 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb69b433-50fe-4d72-8cba-96a73e6cc10d" containerName="nova-manage" Dec 09 09:09:13 crc kubenswrapper[4786]: I1209 09:09:13.871254 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb69b433-50fe-4d72-8cba-96a73e6cc10d" containerName="nova-manage" Dec 09 09:09:13 crc kubenswrapper[4786]: E1209 09:09:13.871268 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="954dc35f-7a55-408f-8d7c-2ad4420dc25e" containerName="init" Dec 09 09:09:13 crc kubenswrapper[4786]: I1209 09:09:13.871275 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="954dc35f-7a55-408f-8d7c-2ad4420dc25e" containerName="init" Dec 09 09:09:13 crc kubenswrapper[4786]: I1209 09:09:13.872045 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="954dc35f-7a55-408f-8d7c-2ad4420dc25e" containerName="dnsmasq-dns" Dec 09 09:09:13 crc kubenswrapper[4786]: I1209 09:09:13.872097 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="1989ece6-0e63-40dd-9673-61d2ad836afe" containerName="nova-metadata-metadata" Dec 09 09:09:13 crc kubenswrapper[4786]: I1209 09:09:13.872126 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb69b433-50fe-4d72-8cba-96a73e6cc10d" containerName="nova-manage" Dec 09 09:09:13 crc kubenswrapper[4786]: I1209 09:09:13.872162 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="1989ece6-0e63-40dd-9673-61d2ad836afe" containerName="nova-metadata-log" Dec 09 09:09:13 crc kubenswrapper[4786]: I1209 09:09:13.876266 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 09:09:13 crc kubenswrapper[4786]: I1209 09:09:13.882587 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 09 09:09:13 crc kubenswrapper[4786]: I1209 09:09:13.885325 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 09 09:09:13 crc kubenswrapper[4786]: I1209 09:09:13.888735 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 09:09:13 crc kubenswrapper[4786]: I1209 09:09:13.907110 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dd88\" (UniqueName: \"kubernetes.io/projected/b1cc7239-c264-4766-a55d-51e50cea46ba-kube-api-access-5dd88\") pod \"nova-metadata-0\" (UID: \"b1cc7239-c264-4766-a55d-51e50cea46ba\") " pod="openstack/nova-metadata-0" Dec 09 09:09:13 crc kubenswrapper[4786]: I1209 09:09:13.907165 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1cc7239-c264-4766-a55d-51e50cea46ba-logs\") pod \"nova-metadata-0\" (UID: \"b1cc7239-c264-4766-a55d-51e50cea46ba\") " pod="openstack/nova-metadata-0" Dec 09 09:09:13 crc kubenswrapper[4786]: I1209 09:09:13.907228 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1cc7239-c264-4766-a55d-51e50cea46ba-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b1cc7239-c264-4766-a55d-51e50cea46ba\") " pod="openstack/nova-metadata-0" Dec 09 09:09:13 crc kubenswrapper[4786]: I1209 09:09:13.907374 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1cc7239-c264-4766-a55d-51e50cea46ba-config-data\") pod \"nova-metadata-0\" (UID: 
\"b1cc7239-c264-4766-a55d-51e50cea46ba\") " pod="openstack/nova-metadata-0" Dec 09 09:09:13 crc kubenswrapper[4786]: I1209 09:09:13.907402 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1cc7239-c264-4766-a55d-51e50cea46ba-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b1cc7239-c264-4766-a55d-51e50cea46ba\") " pod="openstack/nova-metadata-0" Dec 09 09:09:14 crc kubenswrapper[4786]: I1209 09:09:14.010813 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dd88\" (UniqueName: \"kubernetes.io/projected/b1cc7239-c264-4766-a55d-51e50cea46ba-kube-api-access-5dd88\") pod \"nova-metadata-0\" (UID: \"b1cc7239-c264-4766-a55d-51e50cea46ba\") " pod="openstack/nova-metadata-0" Dec 09 09:09:14 crc kubenswrapper[4786]: I1209 09:09:14.010858 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1cc7239-c264-4766-a55d-51e50cea46ba-logs\") pod \"nova-metadata-0\" (UID: \"b1cc7239-c264-4766-a55d-51e50cea46ba\") " pod="openstack/nova-metadata-0" Dec 09 09:09:14 crc kubenswrapper[4786]: I1209 09:09:14.010911 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1cc7239-c264-4766-a55d-51e50cea46ba-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b1cc7239-c264-4766-a55d-51e50cea46ba\") " pod="openstack/nova-metadata-0" Dec 09 09:09:14 crc kubenswrapper[4786]: I1209 09:09:14.010967 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1cc7239-c264-4766-a55d-51e50cea46ba-config-data\") pod \"nova-metadata-0\" (UID: \"b1cc7239-c264-4766-a55d-51e50cea46ba\") " pod="openstack/nova-metadata-0" Dec 09 09:09:14 crc kubenswrapper[4786]: I1209 09:09:14.010993 4786 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1cc7239-c264-4766-a55d-51e50cea46ba-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b1cc7239-c264-4766-a55d-51e50cea46ba\") " pod="openstack/nova-metadata-0" Dec 09 09:09:14 crc kubenswrapper[4786]: I1209 09:09:14.011487 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1cc7239-c264-4766-a55d-51e50cea46ba-logs\") pod \"nova-metadata-0\" (UID: \"b1cc7239-c264-4766-a55d-51e50cea46ba\") " pod="openstack/nova-metadata-0" Dec 09 09:09:14 crc kubenswrapper[4786]: I1209 09:09:14.017552 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1cc7239-c264-4766-a55d-51e50cea46ba-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b1cc7239-c264-4766-a55d-51e50cea46ba\") " pod="openstack/nova-metadata-0" Dec 09 09:09:14 crc kubenswrapper[4786]: I1209 09:09:14.018563 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1cc7239-c264-4766-a55d-51e50cea46ba-config-data\") pod \"nova-metadata-0\" (UID: \"b1cc7239-c264-4766-a55d-51e50cea46ba\") " pod="openstack/nova-metadata-0" Dec 09 09:09:14 crc kubenswrapper[4786]: I1209 09:09:14.019002 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1cc7239-c264-4766-a55d-51e50cea46ba-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b1cc7239-c264-4766-a55d-51e50cea46ba\") " pod="openstack/nova-metadata-0" Dec 09 09:09:14 crc kubenswrapper[4786]: I1209 09:09:14.029666 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dd88\" (UniqueName: \"kubernetes.io/projected/b1cc7239-c264-4766-a55d-51e50cea46ba-kube-api-access-5dd88\") pod 
\"nova-metadata-0\" (UID: \"b1cc7239-c264-4766-a55d-51e50cea46ba\") " pod="openstack/nova-metadata-0" Dec 09 09:09:14 crc kubenswrapper[4786]: I1209 09:09:14.258403 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 09:09:14 crc kubenswrapper[4786]: I1209 09:09:14.431214 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 09 09:09:14 crc kubenswrapper[4786]: I1209 09:09:14.468494 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 09 09:09:14 crc kubenswrapper[4786]: I1209 09:09:14.522019 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4xrv\" (UniqueName: \"kubernetes.io/projected/55dc8742-0786-4103-9136-b8406e4fd914-kube-api-access-r4xrv\") pod \"55dc8742-0786-4103-9136-b8406e4fd914\" (UID: \"55dc8742-0786-4103-9136-b8406e4fd914\") " Dec 09 09:09:14 crc kubenswrapper[4786]: I1209 09:09:14.522106 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55dc8742-0786-4103-9136-b8406e4fd914-logs\") pod \"55dc8742-0786-4103-9136-b8406e4fd914\" (UID: \"55dc8742-0786-4103-9136-b8406e4fd914\") " Dec 09 09:09:14 crc kubenswrapper[4786]: I1209 09:09:14.522239 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55dc8742-0786-4103-9136-b8406e4fd914-config-data\") pod \"55dc8742-0786-4103-9136-b8406e4fd914\" (UID: \"55dc8742-0786-4103-9136-b8406e4fd914\") " Dec 09 09:09:14 crc kubenswrapper[4786]: I1209 09:09:14.522380 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55dc8742-0786-4103-9136-b8406e4fd914-combined-ca-bundle\") pod \"55dc8742-0786-4103-9136-b8406e4fd914\" (UID: 
\"55dc8742-0786-4103-9136-b8406e4fd914\") " Dec 09 09:09:14 crc kubenswrapper[4786]: I1209 09:09:14.523587 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55dc8742-0786-4103-9136-b8406e4fd914-logs" (OuterVolumeSpecName: "logs") pod "55dc8742-0786-4103-9136-b8406e4fd914" (UID: "55dc8742-0786-4103-9136-b8406e4fd914"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:09:14 crc kubenswrapper[4786]: I1209 09:09:14.531047 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55dc8742-0786-4103-9136-b8406e4fd914-kube-api-access-r4xrv" (OuterVolumeSpecName: "kube-api-access-r4xrv") pod "55dc8742-0786-4103-9136-b8406e4fd914" (UID: "55dc8742-0786-4103-9136-b8406e4fd914"). InnerVolumeSpecName "kube-api-access-r4xrv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:09:14 crc kubenswrapper[4786]: I1209 09:09:14.560647 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55dc8742-0786-4103-9136-b8406e4fd914-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "55dc8742-0786-4103-9136-b8406e4fd914" (UID: "55dc8742-0786-4103-9136-b8406e4fd914"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:09:14 crc kubenswrapper[4786]: I1209 09:09:14.562388 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55dc8742-0786-4103-9136-b8406e4fd914-config-data" (OuterVolumeSpecName: "config-data") pod "55dc8742-0786-4103-9136-b8406e4fd914" (UID: "55dc8742-0786-4103-9136-b8406e4fd914"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:09:14 crc kubenswrapper[4786]: I1209 09:09:14.625806 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55dc8742-0786-4103-9136-b8406e4fd914-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:14 crc kubenswrapper[4786]: I1209 09:09:14.625875 4786 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55dc8742-0786-4103-9136-b8406e4fd914-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:14 crc kubenswrapper[4786]: I1209 09:09:14.625897 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4xrv\" (UniqueName: \"kubernetes.io/projected/55dc8742-0786-4103-9136-b8406e4fd914-kube-api-access-r4xrv\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:14 crc kubenswrapper[4786]: I1209 09:09:14.625914 4786 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55dc8742-0786-4103-9136-b8406e4fd914-logs\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:14 crc kubenswrapper[4786]: I1209 09:09:14.785579 4786 generic.go:334] "Generic (PLEG): container finished" podID="55dc8742-0786-4103-9136-b8406e4fd914" containerID="e767e272560c7d7c0fc9a31a3a7c53bdd06baafa8f71b8567b50ef96858e88be" exitCode=0 Dec 09 09:09:14 crc kubenswrapper[4786]: I1209 09:09:14.785660 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"55dc8742-0786-4103-9136-b8406e4fd914","Type":"ContainerDied","Data":"e767e272560c7d7c0fc9a31a3a7c53bdd06baafa8f71b8567b50ef96858e88be"} Dec 09 09:09:14 crc kubenswrapper[4786]: I1209 09:09:14.785746 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"55dc8742-0786-4103-9136-b8406e4fd914","Type":"ContainerDied","Data":"2d26c4b8cab31e4616c12308d5ffa732dd5c66db96b340097395f860761651d9"} Dec 09 09:09:14 crc kubenswrapper[4786]: 
I1209 09:09:14.785752 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 09 09:09:14 crc kubenswrapper[4786]: I1209 09:09:14.785778 4786 scope.go:117] "RemoveContainer" containerID="e767e272560c7d7c0fc9a31a3a7c53bdd06baafa8f71b8567b50ef96858e88be" Dec 09 09:09:14 crc kubenswrapper[4786]: I1209 09:09:14.813519 4786 scope.go:117] "RemoveContainer" containerID="e1f58d67dcf1419749984992168e6da5f2260481c9fa9236e2fc0d82fdf2c9c7" Dec 09 09:09:14 crc kubenswrapper[4786]: I1209 09:09:14.848441 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 09:09:14 crc kubenswrapper[4786]: I1209 09:09:14.849274 4786 scope.go:117] "RemoveContainer" containerID="e767e272560c7d7c0fc9a31a3a7c53bdd06baafa8f71b8567b50ef96858e88be" Dec 09 09:09:14 crc kubenswrapper[4786]: E1209 09:09:14.849825 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e767e272560c7d7c0fc9a31a3a7c53bdd06baafa8f71b8567b50ef96858e88be\": container with ID starting with e767e272560c7d7c0fc9a31a3a7c53bdd06baafa8f71b8567b50ef96858e88be not found: ID does not exist" containerID="e767e272560c7d7c0fc9a31a3a7c53bdd06baafa8f71b8567b50ef96858e88be" Dec 09 09:09:14 crc kubenswrapper[4786]: I1209 09:09:14.849872 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e767e272560c7d7c0fc9a31a3a7c53bdd06baafa8f71b8567b50ef96858e88be"} err="failed to get container status \"e767e272560c7d7c0fc9a31a3a7c53bdd06baafa8f71b8567b50ef96858e88be\": rpc error: code = NotFound desc = could not find container \"e767e272560c7d7c0fc9a31a3a7c53bdd06baafa8f71b8567b50ef96858e88be\": container with ID starting with e767e272560c7d7c0fc9a31a3a7c53bdd06baafa8f71b8567b50ef96858e88be not found: ID does not exist" Dec 09 09:09:14 crc kubenswrapper[4786]: I1209 09:09:14.849900 4786 scope.go:117] "RemoveContainer" 
containerID="e1f58d67dcf1419749984992168e6da5f2260481c9fa9236e2fc0d82fdf2c9c7" Dec 09 09:09:14 crc kubenswrapper[4786]: E1209 09:09:14.850356 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1f58d67dcf1419749984992168e6da5f2260481c9fa9236e2fc0d82fdf2c9c7\": container with ID starting with e1f58d67dcf1419749984992168e6da5f2260481c9fa9236e2fc0d82fdf2c9c7 not found: ID does not exist" containerID="e1f58d67dcf1419749984992168e6da5f2260481c9fa9236e2fc0d82fdf2c9c7" Dec 09 09:09:14 crc kubenswrapper[4786]: I1209 09:09:14.850385 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1f58d67dcf1419749984992168e6da5f2260481c9fa9236e2fc0d82fdf2c9c7"} err="failed to get container status \"e1f58d67dcf1419749984992168e6da5f2260481c9fa9236e2fc0d82fdf2c9c7\": rpc error: code = NotFound desc = could not find container \"e1f58d67dcf1419749984992168e6da5f2260481c9fa9236e2fc0d82fdf2c9c7\": container with ID starting with e1f58d67dcf1419749984992168e6da5f2260481c9fa9236e2fc0d82fdf2c9c7 not found: ID does not exist" Dec 09 09:09:14 crc kubenswrapper[4786]: I1209 09:09:14.869587 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 09 09:09:14 crc kubenswrapper[4786]: I1209 09:09:14.888106 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 09 09:09:14 crc kubenswrapper[4786]: I1209 09:09:14.903535 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 09 09:09:14 crc kubenswrapper[4786]: E1209 09:09:14.904127 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55dc8742-0786-4103-9136-b8406e4fd914" containerName="nova-api-log" Dec 09 09:09:14 crc kubenswrapper[4786]: I1209 09:09:14.904146 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="55dc8742-0786-4103-9136-b8406e4fd914" containerName="nova-api-log" Dec 09 09:09:14 crc 
kubenswrapper[4786]: E1209 09:09:14.904169 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55dc8742-0786-4103-9136-b8406e4fd914" containerName="nova-api-api" Dec 09 09:09:14 crc kubenswrapper[4786]: I1209 09:09:14.904176 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="55dc8742-0786-4103-9136-b8406e4fd914" containerName="nova-api-api" Dec 09 09:09:14 crc kubenswrapper[4786]: I1209 09:09:14.904349 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="55dc8742-0786-4103-9136-b8406e4fd914" containerName="nova-api-api" Dec 09 09:09:14 crc kubenswrapper[4786]: I1209 09:09:14.904378 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="55dc8742-0786-4103-9136-b8406e4fd914" containerName="nova-api-log" Dec 09 09:09:14 crc kubenswrapper[4786]: I1209 09:09:14.905558 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 09 09:09:14 crc kubenswrapper[4786]: I1209 09:09:14.908396 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 09 09:09:14 crc kubenswrapper[4786]: I1209 09:09:14.916945 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 09 09:09:14 crc kubenswrapper[4786]: I1209 09:09:14.943960 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12b8d711-d0e0-4a56-9363-42d5a5b4f4a1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"12b8d711-d0e0-4a56-9363-42d5a5b4f4a1\") " pod="openstack/nova-api-0" Dec 09 09:09:14 crc kubenswrapper[4786]: I1209 09:09:14.944408 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6kvv\" (UniqueName: \"kubernetes.io/projected/12b8d711-d0e0-4a56-9363-42d5a5b4f4a1-kube-api-access-g6kvv\") pod \"nova-api-0\" (UID: \"12b8d711-d0e0-4a56-9363-42d5a5b4f4a1\") " 
pod="openstack/nova-api-0" Dec 09 09:09:14 crc kubenswrapper[4786]: I1209 09:09:14.944762 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12b8d711-d0e0-4a56-9363-42d5a5b4f4a1-config-data\") pod \"nova-api-0\" (UID: \"12b8d711-d0e0-4a56-9363-42d5a5b4f4a1\") " pod="openstack/nova-api-0" Dec 09 09:09:14 crc kubenswrapper[4786]: I1209 09:09:14.944934 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12b8d711-d0e0-4a56-9363-42d5a5b4f4a1-logs\") pod \"nova-api-0\" (UID: \"12b8d711-d0e0-4a56-9363-42d5a5b4f4a1\") " pod="openstack/nova-api-0" Dec 09 09:09:15 crc kubenswrapper[4786]: I1209 09:09:15.047003 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12b8d711-d0e0-4a56-9363-42d5a5b4f4a1-config-data\") pod \"nova-api-0\" (UID: \"12b8d711-d0e0-4a56-9363-42d5a5b4f4a1\") " pod="openstack/nova-api-0" Dec 09 09:09:15 crc kubenswrapper[4786]: I1209 09:09:15.047871 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12b8d711-d0e0-4a56-9363-42d5a5b4f4a1-logs\") pod \"nova-api-0\" (UID: \"12b8d711-d0e0-4a56-9363-42d5a5b4f4a1\") " pod="openstack/nova-api-0" Dec 09 09:09:15 crc kubenswrapper[4786]: I1209 09:09:15.047966 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12b8d711-d0e0-4a56-9363-42d5a5b4f4a1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"12b8d711-d0e0-4a56-9363-42d5a5b4f4a1\") " pod="openstack/nova-api-0" Dec 09 09:09:15 crc kubenswrapper[4786]: I1209 09:09:15.048031 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6kvv\" (UniqueName: 
\"kubernetes.io/projected/12b8d711-d0e0-4a56-9363-42d5a5b4f4a1-kube-api-access-g6kvv\") pod \"nova-api-0\" (UID: \"12b8d711-d0e0-4a56-9363-42d5a5b4f4a1\") " pod="openstack/nova-api-0" Dec 09 09:09:15 crc kubenswrapper[4786]: I1209 09:09:15.048394 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12b8d711-d0e0-4a56-9363-42d5a5b4f4a1-logs\") pod \"nova-api-0\" (UID: \"12b8d711-d0e0-4a56-9363-42d5a5b4f4a1\") " pod="openstack/nova-api-0" Dec 09 09:09:15 crc kubenswrapper[4786]: I1209 09:09:15.051012 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12b8d711-d0e0-4a56-9363-42d5a5b4f4a1-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"12b8d711-d0e0-4a56-9363-42d5a5b4f4a1\") " pod="openstack/nova-api-0" Dec 09 09:09:15 crc kubenswrapper[4786]: I1209 09:09:15.051753 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12b8d711-d0e0-4a56-9363-42d5a5b4f4a1-config-data\") pod \"nova-api-0\" (UID: \"12b8d711-d0e0-4a56-9363-42d5a5b4f4a1\") " pod="openstack/nova-api-0" Dec 09 09:09:15 crc kubenswrapper[4786]: I1209 09:09:15.066010 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6kvv\" (UniqueName: \"kubernetes.io/projected/12b8d711-d0e0-4a56-9363-42d5a5b4f4a1-kube-api-access-g6kvv\") pod \"nova-api-0\" (UID: \"12b8d711-d0e0-4a56-9363-42d5a5b4f4a1\") " pod="openstack/nova-api-0" Dec 09 09:09:15 crc kubenswrapper[4786]: I1209 09:09:15.201117 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1989ece6-0e63-40dd-9673-61d2ad836afe" path="/var/lib/kubelet/pods/1989ece6-0e63-40dd-9673-61d2ad836afe/volumes" Dec 09 09:09:15 crc kubenswrapper[4786]: I1209 09:09:15.201960 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55dc8742-0786-4103-9136-b8406e4fd914" 
path="/var/lib/kubelet/pods/55dc8742-0786-4103-9136-b8406e4fd914/volumes" Dec 09 09:09:15 crc kubenswrapper[4786]: I1209 09:09:15.230268 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 09 09:09:15 crc kubenswrapper[4786]: I1209 09:09:15.715705 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 09 09:09:15 crc kubenswrapper[4786]: I1209 09:09:15.802469 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b1cc7239-c264-4766-a55d-51e50cea46ba","Type":"ContainerStarted","Data":"6d73d3d5b7e4ccde43a3bdd3f460ea7cae05ade6e54ab8588c0487b74889c300"} Dec 09 09:09:15 crc kubenswrapper[4786]: I1209 09:09:15.804057 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b1cc7239-c264-4766-a55d-51e50cea46ba","Type":"ContainerStarted","Data":"8b2395725eac3285a5052d01a06b59426e579c8b91565454ab7253183e6d507d"} Dec 09 09:09:15 crc kubenswrapper[4786]: I1209 09:09:15.804086 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b1cc7239-c264-4766-a55d-51e50cea46ba","Type":"ContainerStarted","Data":"78163936a276fe2d38c40552e53bc09fe83a821851b157d2ca90cd0242d6c45c"} Dec 09 09:09:15 crc kubenswrapper[4786]: I1209 09:09:15.806897 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"12b8d711-d0e0-4a56-9363-42d5a5b4f4a1","Type":"ContainerStarted","Data":"3b8803fae016b4bbb7fed61935ed193019c841396979209834f7b207e71c7ea8"} Dec 09 09:09:15 crc kubenswrapper[4786]: I1209 09:09:15.839970 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.839941623 podStartE2EDuration="2.839941623s" podCreationTimestamp="2025-12-09 09:09:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-09 09:09:15.823467404 +0000 UTC m=+1521.707088660" watchObservedRunningTime="2025-12-09 09:09:15.839941623 +0000 UTC m=+1521.723562859" Dec 09 09:09:16 crc kubenswrapper[4786]: I1209 09:09:16.409061 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 09 09:09:16 crc kubenswrapper[4786]: I1209 09:09:16.525246 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee683733-fa41-4b9a-a2f8-0a96d13d489a-combined-ca-bundle\") pod \"ee683733-fa41-4b9a-a2f8-0a96d13d489a\" (UID: \"ee683733-fa41-4b9a-a2f8-0a96d13d489a\") " Dec 09 09:09:16 crc kubenswrapper[4786]: I1209 09:09:16.525551 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wx58z\" (UniqueName: \"kubernetes.io/projected/ee683733-fa41-4b9a-a2f8-0a96d13d489a-kube-api-access-wx58z\") pod \"ee683733-fa41-4b9a-a2f8-0a96d13d489a\" (UID: \"ee683733-fa41-4b9a-a2f8-0a96d13d489a\") " Dec 09 09:09:16 crc kubenswrapper[4786]: I1209 09:09:16.525617 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee683733-fa41-4b9a-a2f8-0a96d13d489a-config-data\") pod \"ee683733-fa41-4b9a-a2f8-0a96d13d489a\" (UID: \"ee683733-fa41-4b9a-a2f8-0a96d13d489a\") " Dec 09 09:09:16 crc kubenswrapper[4786]: I1209 09:09:16.538700 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee683733-fa41-4b9a-a2f8-0a96d13d489a-kube-api-access-wx58z" (OuterVolumeSpecName: "kube-api-access-wx58z") pod "ee683733-fa41-4b9a-a2f8-0a96d13d489a" (UID: "ee683733-fa41-4b9a-a2f8-0a96d13d489a"). InnerVolumeSpecName "kube-api-access-wx58z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:09:16 crc kubenswrapper[4786]: I1209 09:09:16.561761 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee683733-fa41-4b9a-a2f8-0a96d13d489a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ee683733-fa41-4b9a-a2f8-0a96d13d489a" (UID: "ee683733-fa41-4b9a-a2f8-0a96d13d489a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:09:16 crc kubenswrapper[4786]: I1209 09:09:16.588599 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee683733-fa41-4b9a-a2f8-0a96d13d489a-config-data" (OuterVolumeSpecName: "config-data") pod "ee683733-fa41-4b9a-a2f8-0a96d13d489a" (UID: "ee683733-fa41-4b9a-a2f8-0a96d13d489a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:09:16 crc kubenswrapper[4786]: I1209 09:09:16.629600 4786 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ee683733-fa41-4b9a-a2f8-0a96d13d489a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:16 crc kubenswrapper[4786]: I1209 09:09:16.629641 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wx58z\" (UniqueName: \"kubernetes.io/projected/ee683733-fa41-4b9a-a2f8-0a96d13d489a-kube-api-access-wx58z\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:16 crc kubenswrapper[4786]: I1209 09:09:16.629655 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ee683733-fa41-4b9a-a2f8-0a96d13d489a-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:16 crc kubenswrapper[4786]: I1209 09:09:16.828265 4786 generic.go:334] "Generic (PLEG): container finished" podID="ee683733-fa41-4b9a-a2f8-0a96d13d489a" containerID="d507a7a638c40d693356afeb806ccce120ff2e55879f65315a33817ad9961038" 
exitCode=0 Dec 09 09:09:16 crc kubenswrapper[4786]: I1209 09:09:16.828372 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 09 09:09:16 crc kubenswrapper[4786]: I1209 09:09:16.828388 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ee683733-fa41-4b9a-a2f8-0a96d13d489a","Type":"ContainerDied","Data":"d507a7a638c40d693356afeb806ccce120ff2e55879f65315a33817ad9961038"} Dec 09 09:09:16 crc kubenswrapper[4786]: I1209 09:09:16.828465 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ee683733-fa41-4b9a-a2f8-0a96d13d489a","Type":"ContainerDied","Data":"733eb3aaed93c36bfaff9c9d55aaca08a97c8ac5de89d06004f6f5682f9f5670"} Dec 09 09:09:16 crc kubenswrapper[4786]: I1209 09:09:16.828486 4786 scope.go:117] "RemoveContainer" containerID="d507a7a638c40d693356afeb806ccce120ff2e55879f65315a33817ad9961038" Dec 09 09:09:16 crc kubenswrapper[4786]: I1209 09:09:16.835674 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"12b8d711-d0e0-4a56-9363-42d5a5b4f4a1","Type":"ContainerStarted","Data":"65f003c95ca1a3efcc4a111dc9b48e5aab15413babdd96072923fa5bff4a1223"} Dec 09 09:09:16 crc kubenswrapper[4786]: I1209 09:09:16.835731 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"12b8d711-d0e0-4a56-9363-42d5a5b4f4a1","Type":"ContainerStarted","Data":"b52bfdcc3c90c78d023f89a6d18a5969a59f8f5502b6ae47663b4aa39d336504"} Dec 09 09:09:16 crc kubenswrapper[4786]: I1209 09:09:16.855670 4786 scope.go:117] "RemoveContainer" containerID="d507a7a638c40d693356afeb806ccce120ff2e55879f65315a33817ad9961038" Dec 09 09:09:16 crc kubenswrapper[4786]: E1209 09:09:16.859198 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d507a7a638c40d693356afeb806ccce120ff2e55879f65315a33817ad9961038\": 
container with ID starting with d507a7a638c40d693356afeb806ccce120ff2e55879f65315a33817ad9961038 not found: ID does not exist" containerID="d507a7a638c40d693356afeb806ccce120ff2e55879f65315a33817ad9961038" Dec 09 09:09:16 crc kubenswrapper[4786]: I1209 09:09:16.859285 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d507a7a638c40d693356afeb806ccce120ff2e55879f65315a33817ad9961038"} err="failed to get container status \"d507a7a638c40d693356afeb806ccce120ff2e55879f65315a33817ad9961038\": rpc error: code = NotFound desc = could not find container \"d507a7a638c40d693356afeb806ccce120ff2e55879f65315a33817ad9961038\": container with ID starting with d507a7a638c40d693356afeb806ccce120ff2e55879f65315a33817ad9961038 not found: ID does not exist" Dec 09 09:09:16 crc kubenswrapper[4786]: I1209 09:09:16.862347 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.862322131 podStartE2EDuration="2.862322131s" podCreationTimestamp="2025-12-09 09:09:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:09:16.858847125 +0000 UTC m=+1522.742468351" watchObservedRunningTime="2025-12-09 09:09:16.862322131 +0000 UTC m=+1522.745943357" Dec 09 09:09:16 crc kubenswrapper[4786]: I1209 09:09:16.882712 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 09:09:16 crc kubenswrapper[4786]: I1209 09:09:16.892443 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 09:09:16 crc kubenswrapper[4786]: I1209 09:09:16.901852 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 09:09:16 crc kubenswrapper[4786]: E1209 09:09:16.902328 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee683733-fa41-4b9a-a2f8-0a96d13d489a" 
containerName="nova-scheduler-scheduler" Dec 09 09:09:16 crc kubenswrapper[4786]: I1209 09:09:16.902347 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee683733-fa41-4b9a-a2f8-0a96d13d489a" containerName="nova-scheduler-scheduler" Dec 09 09:09:16 crc kubenswrapper[4786]: I1209 09:09:16.902592 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee683733-fa41-4b9a-a2f8-0a96d13d489a" containerName="nova-scheduler-scheduler" Dec 09 09:09:16 crc kubenswrapper[4786]: I1209 09:09:16.903354 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 09 09:09:16 crc kubenswrapper[4786]: I1209 09:09:16.905730 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 09 09:09:16 crc kubenswrapper[4786]: I1209 09:09:16.915928 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 09:09:17 crc kubenswrapper[4786]: I1209 09:09:17.036534 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpldc\" (UniqueName: \"kubernetes.io/projected/96c51dc9-ba57-4fff-884d-05b63fb9028c-kube-api-access-lpldc\") pod \"nova-scheduler-0\" (UID: \"96c51dc9-ba57-4fff-884d-05b63fb9028c\") " pod="openstack/nova-scheduler-0" Dec 09 09:09:17 crc kubenswrapper[4786]: I1209 09:09:17.036593 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96c51dc9-ba57-4fff-884d-05b63fb9028c-config-data\") pod \"nova-scheduler-0\" (UID: \"96c51dc9-ba57-4fff-884d-05b63fb9028c\") " pod="openstack/nova-scheduler-0" Dec 09 09:09:17 crc kubenswrapper[4786]: I1209 09:09:17.036624 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/96c51dc9-ba57-4fff-884d-05b63fb9028c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"96c51dc9-ba57-4fff-884d-05b63fb9028c\") " pod="openstack/nova-scheduler-0" Dec 09 09:09:17 crc kubenswrapper[4786]: I1209 09:09:17.139391 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lpldc\" (UniqueName: \"kubernetes.io/projected/96c51dc9-ba57-4fff-884d-05b63fb9028c-kube-api-access-lpldc\") pod \"nova-scheduler-0\" (UID: \"96c51dc9-ba57-4fff-884d-05b63fb9028c\") " pod="openstack/nova-scheduler-0" Dec 09 09:09:17 crc kubenswrapper[4786]: I1209 09:09:17.139717 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96c51dc9-ba57-4fff-884d-05b63fb9028c-config-data\") pod \"nova-scheduler-0\" (UID: \"96c51dc9-ba57-4fff-884d-05b63fb9028c\") " pod="openstack/nova-scheduler-0" Dec 09 09:09:17 crc kubenswrapper[4786]: I1209 09:09:17.139754 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c51dc9-ba57-4fff-884d-05b63fb9028c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"96c51dc9-ba57-4fff-884d-05b63fb9028c\") " pod="openstack/nova-scheduler-0" Dec 09 09:09:17 crc kubenswrapper[4786]: I1209 09:09:17.146207 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96c51dc9-ba57-4fff-884d-05b63fb9028c-config-data\") pod \"nova-scheduler-0\" (UID: \"96c51dc9-ba57-4fff-884d-05b63fb9028c\") " pod="openstack/nova-scheduler-0" Dec 09 09:09:17 crc kubenswrapper[4786]: I1209 09:09:17.149215 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c51dc9-ba57-4fff-884d-05b63fb9028c-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"96c51dc9-ba57-4fff-884d-05b63fb9028c\") " 
pod="openstack/nova-scheduler-0" Dec 09 09:09:17 crc kubenswrapper[4786]: I1209 09:09:17.157667 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpldc\" (UniqueName: \"kubernetes.io/projected/96c51dc9-ba57-4fff-884d-05b63fb9028c-kube-api-access-lpldc\") pod \"nova-scheduler-0\" (UID: \"96c51dc9-ba57-4fff-884d-05b63fb9028c\") " pod="openstack/nova-scheduler-0" Dec 09 09:09:17 crc kubenswrapper[4786]: I1209 09:09:17.202374 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee683733-fa41-4b9a-a2f8-0a96d13d489a" path="/var/lib/kubelet/pods/ee683733-fa41-4b9a-a2f8-0a96d13d489a/volumes" Dec 09 09:09:17 crc kubenswrapper[4786]: I1209 09:09:17.225162 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 09 09:09:17 crc kubenswrapper[4786]: I1209 09:09:17.686185 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 09:09:17 crc kubenswrapper[4786]: W1209 09:09:17.695739 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96c51dc9_ba57_4fff_884d_05b63fb9028c.slice/crio-aaf7d6119e7e124c7fcb4b7c046e933137fe963f0a8f02f2f8b4479995408d10 WatchSource:0}: Error finding container aaf7d6119e7e124c7fcb4b7c046e933137fe963f0a8f02f2f8b4479995408d10: Status 404 returned error can't find the container with id aaf7d6119e7e124c7fcb4b7c046e933137fe963f0a8f02f2f8b4479995408d10 Dec 09 09:09:17 crc kubenswrapper[4786]: I1209 09:09:17.847379 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"96c51dc9-ba57-4fff-884d-05b63fb9028c","Type":"ContainerStarted","Data":"aaf7d6119e7e124c7fcb4b7c046e933137fe963f0a8f02f2f8b4479995408d10"} Dec 09 09:09:18 crc kubenswrapper[4786]: I1209 09:09:18.194777 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-lsvsn" Dec 09 09:09:18 crc kubenswrapper[4786]: I1209 09:09:18.249925 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lsvsn" Dec 09 09:09:18 crc kubenswrapper[4786]: I1209 09:09:18.440084 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lsvsn"] Dec 09 09:09:18 crc kubenswrapper[4786]: I1209 09:09:18.876344 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"96c51dc9-ba57-4fff-884d-05b63fb9028c","Type":"ContainerStarted","Data":"ddd3622ec7d0cf64be8a5c80dfc222a68daaf7a3d94cc3a392e260f186f0e14f"} Dec 09 09:09:18 crc kubenswrapper[4786]: I1209 09:09:18.907277 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.907250131 podStartE2EDuration="2.907250131s" podCreationTimestamp="2025-12-09 09:09:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:09:18.897813226 +0000 UTC m=+1524.781434462" watchObservedRunningTime="2025-12-09 09:09:18.907250131 +0000 UTC m=+1524.790871367" Dec 09 09:09:19 crc kubenswrapper[4786]: I1209 09:09:19.158384 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 09 09:09:19 crc kubenswrapper[4786]: I1209 09:09:19.158638 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="22607f78-204a-4fb8-82d6-53d3f878a984" containerName="kube-state-metrics" containerID="cri-o://e367c4470435f062f9220bd83bbf97be508c50fc872341875618a322a82e7af3" gracePeriod=30 Dec 09 09:09:19 crc kubenswrapper[4786]: I1209 09:09:19.259252 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 09 09:09:19 crc kubenswrapper[4786]: 
I1209 09:09:19.259307 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 09 09:09:19 crc kubenswrapper[4786]: I1209 09:09:19.847247 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 09 09:09:19 crc kubenswrapper[4786]: I1209 09:09:19.891954 4786 generic.go:334] "Generic (PLEG): container finished" podID="22607f78-204a-4fb8-82d6-53d3f878a984" containerID="e367c4470435f062f9220bd83bbf97be508c50fc872341875618a322a82e7af3" exitCode=2 Dec 09 09:09:19 crc kubenswrapper[4786]: I1209 09:09:19.893516 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 09 09:09:19 crc kubenswrapper[4786]: I1209 09:09:19.894226 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"22607f78-204a-4fb8-82d6-53d3f878a984","Type":"ContainerDied","Data":"e367c4470435f062f9220bd83bbf97be508c50fc872341875618a322a82e7af3"} Dec 09 09:09:19 crc kubenswrapper[4786]: I1209 09:09:19.894476 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"22607f78-204a-4fb8-82d6-53d3f878a984","Type":"ContainerDied","Data":"48ddc74ac903466f2fe9d9cf4806046d8832c0410ea0c389def7ea93699b2ba2"} Dec 09 09:09:19 crc kubenswrapper[4786]: I1209 09:09:19.894503 4786 scope.go:117] "RemoveContainer" containerID="e367c4470435f062f9220bd83bbf97be508c50fc872341875618a322a82e7af3" Dec 09 09:09:19 crc kubenswrapper[4786]: I1209 09:09:19.894794 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lsvsn" podUID="bbcc743e-9b17-423c-988b-da68abf2f608" containerName="registry-server" containerID="cri-o://20009cd281e98d10b3a4e41b845072954a768cb887745988c1e5cf4216f6b9a0" gracePeriod=2 Dec 09 09:09:19 crc kubenswrapper[4786]: I1209 09:09:19.900854 4786 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-xlxb5\" (UniqueName: \"kubernetes.io/projected/22607f78-204a-4fb8-82d6-53d3f878a984-kube-api-access-xlxb5\") pod \"22607f78-204a-4fb8-82d6-53d3f878a984\" (UID: \"22607f78-204a-4fb8-82d6-53d3f878a984\") " Dec 09 09:09:19 crc kubenswrapper[4786]: I1209 09:09:19.911959 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22607f78-204a-4fb8-82d6-53d3f878a984-kube-api-access-xlxb5" (OuterVolumeSpecName: "kube-api-access-xlxb5") pod "22607f78-204a-4fb8-82d6-53d3f878a984" (UID: "22607f78-204a-4fb8-82d6-53d3f878a984"). InnerVolumeSpecName "kube-api-access-xlxb5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:09:20 crc kubenswrapper[4786]: I1209 09:09:20.008370 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlxb5\" (UniqueName: \"kubernetes.io/projected/22607f78-204a-4fb8-82d6-53d3f878a984-kube-api-access-xlxb5\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:20 crc kubenswrapper[4786]: I1209 09:09:20.034869 4786 scope.go:117] "RemoveContainer" containerID="e367c4470435f062f9220bd83bbf97be508c50fc872341875618a322a82e7af3" Dec 09 09:09:20 crc kubenswrapper[4786]: E1209 09:09:20.040645 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e367c4470435f062f9220bd83bbf97be508c50fc872341875618a322a82e7af3\": container with ID starting with e367c4470435f062f9220bd83bbf97be508c50fc872341875618a322a82e7af3 not found: ID does not exist" containerID="e367c4470435f062f9220bd83bbf97be508c50fc872341875618a322a82e7af3" Dec 09 09:09:20 crc kubenswrapper[4786]: I1209 09:09:20.040713 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e367c4470435f062f9220bd83bbf97be508c50fc872341875618a322a82e7af3"} err="failed to get container status \"e367c4470435f062f9220bd83bbf97be508c50fc872341875618a322a82e7af3\": 
rpc error: code = NotFound desc = could not find container \"e367c4470435f062f9220bd83bbf97be508c50fc872341875618a322a82e7af3\": container with ID starting with e367c4470435f062f9220bd83bbf97be508c50fc872341875618a322a82e7af3 not found: ID does not exist" Dec 09 09:09:20 crc kubenswrapper[4786]: I1209 09:09:20.235676 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 09 09:09:20 crc kubenswrapper[4786]: I1209 09:09:20.251647 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 09 09:09:20 crc kubenswrapper[4786]: I1209 09:09:20.268324 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 09 09:09:20 crc kubenswrapper[4786]: E1209 09:09:20.268928 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22607f78-204a-4fb8-82d6-53d3f878a984" containerName="kube-state-metrics" Dec 09 09:09:20 crc kubenswrapper[4786]: I1209 09:09:20.268951 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="22607f78-204a-4fb8-82d6-53d3f878a984" containerName="kube-state-metrics" Dec 09 09:09:20 crc kubenswrapper[4786]: I1209 09:09:20.269195 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="22607f78-204a-4fb8-82d6-53d3f878a984" containerName="kube-state-metrics" Dec 09 09:09:20 crc kubenswrapper[4786]: I1209 09:09:20.270105 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 09 09:09:20 crc kubenswrapper[4786]: I1209 09:09:20.277842 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Dec 09 09:09:20 crc kubenswrapper[4786]: I1209 09:09:20.278211 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Dec 09 09:09:20 crc kubenswrapper[4786]: I1209 09:09:20.280495 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 09 09:09:20 crc kubenswrapper[4786]: I1209 09:09:20.316746 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/e176f4e3-203b-4784-be15-d5e306723d08-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"e176f4e3-203b-4784-be15-d5e306723d08\") " pod="openstack/kube-state-metrics-0" Dec 09 09:09:20 crc kubenswrapper[4786]: I1209 09:09:20.316938 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drdkf\" (UniqueName: \"kubernetes.io/projected/e176f4e3-203b-4784-be15-d5e306723d08-kube-api-access-drdkf\") pod \"kube-state-metrics-0\" (UID: \"e176f4e3-203b-4784-be15-d5e306723d08\") " pod="openstack/kube-state-metrics-0" Dec 09 09:09:20 crc kubenswrapper[4786]: I1209 09:09:20.317024 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/e176f4e3-203b-4784-be15-d5e306723d08-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"e176f4e3-203b-4784-be15-d5e306723d08\") " pod="openstack/kube-state-metrics-0" Dec 09 09:09:20 crc kubenswrapper[4786]: I1209 09:09:20.317085 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/e176f4e3-203b-4784-be15-d5e306723d08-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"e176f4e3-203b-4784-be15-d5e306723d08\") " pod="openstack/kube-state-metrics-0" Dec 09 09:09:20 crc kubenswrapper[4786]: I1209 09:09:20.425638 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e176f4e3-203b-4784-be15-d5e306723d08-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"e176f4e3-203b-4784-be15-d5e306723d08\") " pod="openstack/kube-state-metrics-0" Dec 09 09:09:20 crc kubenswrapper[4786]: I1209 09:09:20.425700 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/e176f4e3-203b-4784-be15-d5e306723d08-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"e176f4e3-203b-4784-be15-d5e306723d08\") " pod="openstack/kube-state-metrics-0" Dec 09 09:09:20 crc kubenswrapper[4786]: I1209 09:09:20.425863 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drdkf\" (UniqueName: \"kubernetes.io/projected/e176f4e3-203b-4784-be15-d5e306723d08-kube-api-access-drdkf\") pod \"kube-state-metrics-0\" (UID: \"e176f4e3-203b-4784-be15-d5e306723d08\") " pod="openstack/kube-state-metrics-0" Dec 09 09:09:20 crc kubenswrapper[4786]: I1209 09:09:20.425954 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/e176f4e3-203b-4784-be15-d5e306723d08-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"e176f4e3-203b-4784-be15-d5e306723d08\") " pod="openstack/kube-state-metrics-0" Dec 09 09:09:20 crc kubenswrapper[4786]: I1209 09:09:20.431086 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/e176f4e3-203b-4784-be15-d5e306723d08-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"e176f4e3-203b-4784-be15-d5e306723d08\") " pod="openstack/kube-state-metrics-0" Dec 09 09:09:20 crc kubenswrapper[4786]: I1209 09:09:20.432031 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/e176f4e3-203b-4784-be15-d5e306723d08-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"e176f4e3-203b-4784-be15-d5e306723d08\") " pod="openstack/kube-state-metrics-0" Dec 09 09:09:20 crc kubenswrapper[4786]: I1209 09:09:20.432720 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e176f4e3-203b-4784-be15-d5e306723d08-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"e176f4e3-203b-4784-be15-d5e306723d08\") " pod="openstack/kube-state-metrics-0" Dec 09 09:09:20 crc kubenswrapper[4786]: I1209 09:09:20.449016 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drdkf\" (UniqueName: \"kubernetes.io/projected/e176f4e3-203b-4784-be15-d5e306723d08-kube-api-access-drdkf\") pod \"kube-state-metrics-0\" (UID: \"e176f4e3-203b-4784-be15-d5e306723d08\") " pod="openstack/kube-state-metrics-0" Dec 09 09:09:20 crc kubenswrapper[4786]: I1209 09:09:20.535003 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lsvsn" Dec 09 09:09:20 crc kubenswrapper[4786]: I1209 09:09:20.621624 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 09 09:09:20 crc kubenswrapper[4786]: I1209 09:09:20.631033 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbcc743e-9b17-423c-988b-da68abf2f608-catalog-content\") pod \"bbcc743e-9b17-423c-988b-da68abf2f608\" (UID: \"bbcc743e-9b17-423c-988b-da68abf2f608\") " Dec 09 09:09:20 crc kubenswrapper[4786]: I1209 09:09:20.631144 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v77cn\" (UniqueName: \"kubernetes.io/projected/bbcc743e-9b17-423c-988b-da68abf2f608-kube-api-access-v77cn\") pod \"bbcc743e-9b17-423c-988b-da68abf2f608\" (UID: \"bbcc743e-9b17-423c-988b-da68abf2f608\") " Dec 09 09:09:20 crc kubenswrapper[4786]: I1209 09:09:20.631294 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbcc743e-9b17-423c-988b-da68abf2f608-utilities\") pod \"bbcc743e-9b17-423c-988b-da68abf2f608\" (UID: \"bbcc743e-9b17-423c-988b-da68abf2f608\") " Dec 09 09:09:20 crc kubenswrapper[4786]: I1209 09:09:20.643461 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbcc743e-9b17-423c-988b-da68abf2f608-utilities" (OuterVolumeSpecName: "utilities") pod "bbcc743e-9b17-423c-988b-da68abf2f608" (UID: "bbcc743e-9b17-423c-988b-da68abf2f608"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:09:20 crc kubenswrapper[4786]: I1209 09:09:20.646207 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbcc743e-9b17-423c-988b-da68abf2f608-kube-api-access-v77cn" (OuterVolumeSpecName: "kube-api-access-v77cn") pod "bbcc743e-9b17-423c-988b-da68abf2f608" (UID: "bbcc743e-9b17-423c-988b-da68abf2f608"). InnerVolumeSpecName "kube-api-access-v77cn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:09:20 crc kubenswrapper[4786]: I1209 09:09:20.686702 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbcc743e-9b17-423c-988b-da68abf2f608-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bbcc743e-9b17-423c-988b-da68abf2f608" (UID: "bbcc743e-9b17-423c-988b-da68abf2f608"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:09:20 crc kubenswrapper[4786]: I1209 09:09:20.735090 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bbcc743e-9b17-423c-988b-da68abf2f608-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:20 crc kubenswrapper[4786]: I1209 09:09:20.735394 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bbcc743e-9b17-423c-988b-da68abf2f608-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:20 crc kubenswrapper[4786]: I1209 09:09:20.735561 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v77cn\" (UniqueName: \"kubernetes.io/projected/bbcc743e-9b17-423c-988b-da68abf2f608-kube-api-access-v77cn\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:20 crc kubenswrapper[4786]: I1209 09:09:20.912926 4786 generic.go:334] "Generic (PLEG): container finished" podID="bbcc743e-9b17-423c-988b-da68abf2f608" containerID="20009cd281e98d10b3a4e41b845072954a768cb887745988c1e5cf4216f6b9a0" exitCode=0 Dec 09 09:09:20 crc kubenswrapper[4786]: I1209 09:09:20.913086 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lsvsn" event={"ID":"bbcc743e-9b17-423c-988b-da68abf2f608","Type":"ContainerDied","Data":"20009cd281e98d10b3a4e41b845072954a768cb887745988c1e5cf4216f6b9a0"} Dec 09 09:09:20 crc kubenswrapper[4786]: I1209 09:09:20.913316 4786 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-lsvsn" event={"ID":"bbcc743e-9b17-423c-988b-da68abf2f608","Type":"ContainerDied","Data":"d174b5db1927ec4e2a9ab90486131f375a2229e140a309d553ebaa9262ec76f1"} Dec 09 09:09:20 crc kubenswrapper[4786]: I1209 09:09:20.913347 4786 scope.go:117] "RemoveContainer" containerID="20009cd281e98d10b3a4e41b845072954a768cb887745988c1e5cf4216f6b9a0" Dec 09 09:09:20 crc kubenswrapper[4786]: I1209 09:09:20.913201 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lsvsn" Dec 09 09:09:20 crc kubenswrapper[4786]: I1209 09:09:20.966788 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lsvsn"] Dec 09 09:09:20 crc kubenswrapper[4786]: I1209 09:09:20.983995 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lsvsn"] Dec 09 09:09:20 crc kubenswrapper[4786]: I1209 09:09:20.986620 4786 scope.go:117] "RemoveContainer" containerID="7eaa00221cac2b412406f1f9241e83ba017b25b7d2011623795817db4389a895" Dec 09 09:09:21 crc kubenswrapper[4786]: I1209 09:09:21.013123 4786 scope.go:117] "RemoveContainer" containerID="4afac15a4afba1353177ee3f532caf85a0b740bf32c41d6b09917847ad79e3a7" Dec 09 09:09:21 crc kubenswrapper[4786]: I1209 09:09:21.035300 4786 scope.go:117] "RemoveContainer" containerID="20009cd281e98d10b3a4e41b845072954a768cb887745988c1e5cf4216f6b9a0" Dec 09 09:09:21 crc kubenswrapper[4786]: E1209 09:09:21.035919 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20009cd281e98d10b3a4e41b845072954a768cb887745988c1e5cf4216f6b9a0\": container with ID starting with 20009cd281e98d10b3a4e41b845072954a768cb887745988c1e5cf4216f6b9a0 not found: ID does not exist" containerID="20009cd281e98d10b3a4e41b845072954a768cb887745988c1e5cf4216f6b9a0" Dec 09 09:09:21 crc kubenswrapper[4786]: I1209 
09:09:21.035976 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20009cd281e98d10b3a4e41b845072954a768cb887745988c1e5cf4216f6b9a0"} err="failed to get container status \"20009cd281e98d10b3a4e41b845072954a768cb887745988c1e5cf4216f6b9a0\": rpc error: code = NotFound desc = could not find container \"20009cd281e98d10b3a4e41b845072954a768cb887745988c1e5cf4216f6b9a0\": container with ID starting with 20009cd281e98d10b3a4e41b845072954a768cb887745988c1e5cf4216f6b9a0 not found: ID does not exist" Dec 09 09:09:21 crc kubenswrapper[4786]: I1209 09:09:21.036013 4786 scope.go:117] "RemoveContainer" containerID="7eaa00221cac2b412406f1f9241e83ba017b25b7d2011623795817db4389a895" Dec 09 09:09:21 crc kubenswrapper[4786]: E1209 09:09:21.036539 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7eaa00221cac2b412406f1f9241e83ba017b25b7d2011623795817db4389a895\": container with ID starting with 7eaa00221cac2b412406f1f9241e83ba017b25b7d2011623795817db4389a895 not found: ID does not exist" containerID="7eaa00221cac2b412406f1f9241e83ba017b25b7d2011623795817db4389a895" Dec 09 09:09:21 crc kubenswrapper[4786]: I1209 09:09:21.036568 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7eaa00221cac2b412406f1f9241e83ba017b25b7d2011623795817db4389a895"} err="failed to get container status \"7eaa00221cac2b412406f1f9241e83ba017b25b7d2011623795817db4389a895\": rpc error: code = NotFound desc = could not find container \"7eaa00221cac2b412406f1f9241e83ba017b25b7d2011623795817db4389a895\": container with ID starting with 7eaa00221cac2b412406f1f9241e83ba017b25b7d2011623795817db4389a895 not found: ID does not exist" Dec 09 09:09:21 crc kubenswrapper[4786]: I1209 09:09:21.036614 4786 scope.go:117] "RemoveContainer" containerID="4afac15a4afba1353177ee3f532caf85a0b740bf32c41d6b09917847ad79e3a7" Dec 09 09:09:21 crc 
kubenswrapper[4786]: E1209 09:09:21.036943 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4afac15a4afba1353177ee3f532caf85a0b740bf32c41d6b09917847ad79e3a7\": container with ID starting with 4afac15a4afba1353177ee3f532caf85a0b740bf32c41d6b09917847ad79e3a7 not found: ID does not exist" containerID="4afac15a4afba1353177ee3f532caf85a0b740bf32c41d6b09917847ad79e3a7" Dec 09 09:09:21 crc kubenswrapper[4786]: I1209 09:09:21.036984 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4afac15a4afba1353177ee3f532caf85a0b740bf32c41d6b09917847ad79e3a7"} err="failed to get container status \"4afac15a4afba1353177ee3f532caf85a0b740bf32c41d6b09917847ad79e3a7\": rpc error: code = NotFound desc = could not find container \"4afac15a4afba1353177ee3f532caf85a0b740bf32c41d6b09917847ad79e3a7\": container with ID starting with 4afac15a4afba1353177ee3f532caf85a0b740bf32c41d6b09917847ad79e3a7 not found: ID does not exist" Dec 09 09:09:21 crc kubenswrapper[4786]: I1209 09:09:21.114987 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 09 09:09:21 crc kubenswrapper[4786]: W1209 09:09:21.127407 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode176f4e3_203b_4784_be15_d5e306723d08.slice/crio-d8f568099bad927149cfa425c68584cafe9d304e4113eaaba9bf043a180da4ba WatchSource:0}: Error finding container d8f568099bad927149cfa425c68584cafe9d304e4113eaaba9bf043a180da4ba: Status 404 returned error can't find the container with id d8f568099bad927149cfa425c68584cafe9d304e4113eaaba9bf043a180da4ba Dec 09 09:09:21 crc kubenswrapper[4786]: I1209 09:09:21.200741 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22607f78-204a-4fb8-82d6-53d3f878a984" path="/var/lib/kubelet/pods/22607f78-204a-4fb8-82d6-53d3f878a984/volumes" Dec 09 
09:09:21 crc kubenswrapper[4786]: I1209 09:09:21.201998 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbcc743e-9b17-423c-988b-da68abf2f608" path="/var/lib/kubelet/pods/bbcc743e-9b17-423c-988b-da68abf2f608/volumes" Dec 09 09:09:21 crc kubenswrapper[4786]: I1209 09:09:21.498100 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 09:09:21 crc kubenswrapper[4786]: I1209 09:09:21.498661 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c99dffbb-825d-4a73-b225-a4660ef65ff9" containerName="ceilometer-central-agent" containerID="cri-o://fbb173b6cc82623f6bd3e8dd8bda07bed0293f209be3fddd69c5573110726be2" gracePeriod=30 Dec 09 09:09:21 crc kubenswrapper[4786]: I1209 09:09:21.498810 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c99dffbb-825d-4a73-b225-a4660ef65ff9" containerName="proxy-httpd" containerID="cri-o://b654d1ea251f96df7b7de68733505bea974447584231a6cec29933287c83d896" gracePeriod=30 Dec 09 09:09:21 crc kubenswrapper[4786]: I1209 09:09:21.498854 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c99dffbb-825d-4a73-b225-a4660ef65ff9" containerName="sg-core" containerID="cri-o://c40b3fbbcaa3b18b2a95536a8193356ac2279c03e9828d353c148308ba74184f" gracePeriod=30 Dec 09 09:09:21 crc kubenswrapper[4786]: I1209 09:09:21.498883 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="c99dffbb-825d-4a73-b225-a4660ef65ff9" containerName="ceilometer-notification-agent" containerID="cri-o://983ed1d12a1eb1b6afeab3acb16d7cae29778bdc595b3c7fef0c56c384cbce53" gracePeriod=30 Dec 09 09:09:21 crc kubenswrapper[4786]: I1209 09:09:21.927568 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"e176f4e3-203b-4784-be15-d5e306723d08","Type":"ContainerStarted","Data":"a624e7e8a4fdc8ab7e8101a3722a8f9f8df9249a8de9d970641a9fc90ab9359f"} Dec 09 09:09:21 crc kubenswrapper[4786]: I1209 09:09:21.927965 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e176f4e3-203b-4784-be15-d5e306723d08","Type":"ContainerStarted","Data":"d8f568099bad927149cfa425c68584cafe9d304e4113eaaba9bf043a180da4ba"} Dec 09 09:09:21 crc kubenswrapper[4786]: I1209 09:09:21.929699 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 09 09:09:21 crc kubenswrapper[4786]: I1209 09:09:21.953462 4786 generic.go:334] "Generic (PLEG): container finished" podID="c99dffbb-825d-4a73-b225-a4660ef65ff9" containerID="b654d1ea251f96df7b7de68733505bea974447584231a6cec29933287c83d896" exitCode=0 Dec 09 09:09:21 crc kubenswrapper[4786]: I1209 09:09:21.953502 4786 generic.go:334] "Generic (PLEG): container finished" podID="c99dffbb-825d-4a73-b225-a4660ef65ff9" containerID="c40b3fbbcaa3b18b2a95536a8193356ac2279c03e9828d353c148308ba74184f" exitCode=2 Dec 09 09:09:21 crc kubenswrapper[4786]: I1209 09:09:21.953532 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c99dffbb-825d-4a73-b225-a4660ef65ff9","Type":"ContainerDied","Data":"b654d1ea251f96df7b7de68733505bea974447584231a6cec29933287c83d896"} Dec 09 09:09:21 crc kubenswrapper[4786]: I1209 09:09:21.953568 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c99dffbb-825d-4a73-b225-a4660ef65ff9","Type":"ContainerDied","Data":"c40b3fbbcaa3b18b2a95536a8193356ac2279c03e9828d353c148308ba74184f"} Dec 09 09:09:21 crc kubenswrapper[4786]: I1209 09:09:21.955930 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.572444884 podStartE2EDuration="1.955885184s" 
podCreationTimestamp="2025-12-09 09:09:20 +0000 UTC" firstStartedPulling="2025-12-09 09:09:21.13082644 +0000 UTC m=+1527.014447666" lastFinishedPulling="2025-12-09 09:09:21.51426674 +0000 UTC m=+1527.397887966" observedRunningTime="2025-12-09 09:09:21.94930115 +0000 UTC m=+1527.832922396" watchObservedRunningTime="2025-12-09 09:09:21.955885184 +0000 UTC m=+1527.839506410" Dec 09 09:09:22 crc kubenswrapper[4786]: E1209 09:09:22.075619 4786 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc99dffbb_825d_4a73_b225_a4660ef65ff9.slice/crio-fbb173b6cc82623f6bd3e8dd8bda07bed0293f209be3fddd69c5573110726be2.scope\": RecentStats: unable to find data in memory cache]" Dec 09 09:09:22 crc kubenswrapper[4786]: I1209 09:09:22.225506 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 09 09:09:22 crc kubenswrapper[4786]: I1209 09:09:22.972906 4786 generic.go:334] "Generic (PLEG): container finished" podID="c99dffbb-825d-4a73-b225-a4660ef65ff9" containerID="fbb173b6cc82623f6bd3e8dd8bda07bed0293f209be3fddd69c5573110726be2" exitCode=0 Dec 09 09:09:22 crc kubenswrapper[4786]: I1209 09:09:22.973182 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c99dffbb-825d-4a73-b225-a4660ef65ff9","Type":"ContainerDied","Data":"fbb173b6cc82623f6bd3e8dd8bda07bed0293f209be3fddd69c5573110726be2"} Dec 09 09:09:23 crc kubenswrapper[4786]: I1209 09:09:23.985805 4786 generic.go:334] "Generic (PLEG): container finished" podID="8b2cc8bb-e766-4324-afcb-8ad0655bd96f" containerID="2f48a1c3f00dd62cee43968e44376a46b5b8dd07d15d425d49393b298cff42ab" exitCode=0 Dec 09 09:09:23 crc kubenswrapper[4786]: I1209 09:09:23.986950 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-glsgk" 
event={"ID":"8b2cc8bb-e766-4324-afcb-8ad0655bd96f","Type":"ContainerDied","Data":"2f48a1c3f00dd62cee43968e44376a46b5b8dd07d15d425d49393b298cff42ab"} Dec 09 09:09:24 crc kubenswrapper[4786]: I1209 09:09:24.259020 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 09 09:09:24 crc kubenswrapper[4786]: I1209 09:09:24.259081 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 09 09:09:25 crc kubenswrapper[4786]: I1209 09:09:25.231516 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 09 09:09:25 crc kubenswrapper[4786]: I1209 09:09:25.231940 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 09 09:09:25 crc kubenswrapper[4786]: I1209 09:09:25.274540 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b1cc7239-c264-4766-a55d-51e50cea46ba" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.212:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 09 09:09:25 crc kubenswrapper[4786]: I1209 09:09:25.274513 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b1cc7239-c264-4766-a55d-51e50cea46ba" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.212:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 09 09:09:25 crc kubenswrapper[4786]: I1209 09:09:25.387907 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-glsgk" Dec 09 09:09:25 crc kubenswrapper[4786]: I1209 09:09:25.467172 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b2cc8bb-e766-4324-afcb-8ad0655bd96f-combined-ca-bundle\") pod \"8b2cc8bb-e766-4324-afcb-8ad0655bd96f\" (UID: \"8b2cc8bb-e766-4324-afcb-8ad0655bd96f\") " Dec 09 09:09:25 crc kubenswrapper[4786]: I1209 09:09:25.467456 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-585r4\" (UniqueName: \"kubernetes.io/projected/8b2cc8bb-e766-4324-afcb-8ad0655bd96f-kube-api-access-585r4\") pod \"8b2cc8bb-e766-4324-afcb-8ad0655bd96f\" (UID: \"8b2cc8bb-e766-4324-afcb-8ad0655bd96f\") " Dec 09 09:09:25 crc kubenswrapper[4786]: I1209 09:09:25.467556 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b2cc8bb-e766-4324-afcb-8ad0655bd96f-config-data\") pod \"8b2cc8bb-e766-4324-afcb-8ad0655bd96f\" (UID: \"8b2cc8bb-e766-4324-afcb-8ad0655bd96f\") " Dec 09 09:09:25 crc kubenswrapper[4786]: I1209 09:09:25.467607 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b2cc8bb-e766-4324-afcb-8ad0655bd96f-scripts\") pod \"8b2cc8bb-e766-4324-afcb-8ad0655bd96f\" (UID: \"8b2cc8bb-e766-4324-afcb-8ad0655bd96f\") " Dec 09 09:09:25 crc kubenswrapper[4786]: I1209 09:09:25.474362 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b2cc8bb-e766-4324-afcb-8ad0655bd96f-scripts" (OuterVolumeSpecName: "scripts") pod "8b2cc8bb-e766-4324-afcb-8ad0655bd96f" (UID: "8b2cc8bb-e766-4324-afcb-8ad0655bd96f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:09:25 crc kubenswrapper[4786]: I1209 09:09:25.504063 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b2cc8bb-e766-4324-afcb-8ad0655bd96f-kube-api-access-585r4" (OuterVolumeSpecName: "kube-api-access-585r4") pod "8b2cc8bb-e766-4324-afcb-8ad0655bd96f" (UID: "8b2cc8bb-e766-4324-afcb-8ad0655bd96f"). InnerVolumeSpecName "kube-api-access-585r4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:09:25 crc kubenswrapper[4786]: I1209 09:09:25.542637 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b2cc8bb-e766-4324-afcb-8ad0655bd96f-config-data" (OuterVolumeSpecName: "config-data") pod "8b2cc8bb-e766-4324-afcb-8ad0655bd96f" (UID: "8b2cc8bb-e766-4324-afcb-8ad0655bd96f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:09:25 crc kubenswrapper[4786]: I1209 09:09:25.571299 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-585r4\" (UniqueName: \"kubernetes.io/projected/8b2cc8bb-e766-4324-afcb-8ad0655bd96f-kube-api-access-585r4\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:25 crc kubenswrapper[4786]: I1209 09:09:25.571347 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b2cc8bb-e766-4324-afcb-8ad0655bd96f-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:25 crc kubenswrapper[4786]: I1209 09:09:25.571364 4786 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b2cc8bb-e766-4324-afcb-8ad0655bd96f-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:25 crc kubenswrapper[4786]: I1209 09:09:25.591080 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b2cc8bb-e766-4324-afcb-8ad0655bd96f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") 
pod "8b2cc8bb-e766-4324-afcb-8ad0655bd96f" (UID: "8b2cc8bb-e766-4324-afcb-8ad0655bd96f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:09:25 crc kubenswrapper[4786]: I1209 09:09:25.672938 4786 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b2cc8bb-e766-4324-afcb-8ad0655bd96f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:26 crc kubenswrapper[4786]: I1209 09:09:26.076070 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-glsgk" event={"ID":"8b2cc8bb-e766-4324-afcb-8ad0655bd96f","Type":"ContainerDied","Data":"5a49ca59cb8f4f755de44f9e5693bd65d3e2cc26ee2afb8127213f0440512f31"} Dec 09 09:09:26 crc kubenswrapper[4786]: I1209 09:09:26.076130 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a49ca59cb8f4f755de44f9e5693bd65d3e2cc26ee2afb8127213f0440512f31" Dec 09 09:09:26 crc kubenswrapper[4786]: I1209 09:09:26.076251 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-glsgk" Dec 09 09:09:26 crc kubenswrapper[4786]: I1209 09:09:26.138394 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 09 09:09:26 crc kubenswrapper[4786]: E1209 09:09:26.139465 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b2cc8bb-e766-4324-afcb-8ad0655bd96f" containerName="nova-cell1-conductor-db-sync" Dec 09 09:09:26 crc kubenswrapper[4786]: I1209 09:09:26.139491 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b2cc8bb-e766-4324-afcb-8ad0655bd96f" containerName="nova-cell1-conductor-db-sync" Dec 09 09:09:26 crc kubenswrapper[4786]: E1209 09:09:26.139519 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbcc743e-9b17-423c-988b-da68abf2f608" containerName="registry-server" Dec 09 09:09:26 crc kubenswrapper[4786]: I1209 09:09:26.139527 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbcc743e-9b17-423c-988b-da68abf2f608" containerName="registry-server" Dec 09 09:09:26 crc kubenswrapper[4786]: E1209 09:09:26.139559 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbcc743e-9b17-423c-988b-da68abf2f608" containerName="extract-utilities" Dec 09 09:09:26 crc kubenswrapper[4786]: I1209 09:09:26.139567 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbcc743e-9b17-423c-988b-da68abf2f608" containerName="extract-utilities" Dec 09 09:09:26 crc kubenswrapper[4786]: E1209 09:09:26.139592 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbcc743e-9b17-423c-988b-da68abf2f608" containerName="extract-content" Dec 09 09:09:26 crc kubenswrapper[4786]: I1209 09:09:26.139601 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbcc743e-9b17-423c-988b-da68abf2f608" containerName="extract-content" Dec 09 09:09:26 crc kubenswrapper[4786]: I1209 09:09:26.139886 4786 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="8b2cc8bb-e766-4324-afcb-8ad0655bd96f" containerName="nova-cell1-conductor-db-sync" Dec 09 09:09:26 crc kubenswrapper[4786]: I1209 09:09:26.139934 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbcc743e-9b17-423c-988b-da68abf2f608" containerName="registry-server" Dec 09 09:09:26 crc kubenswrapper[4786]: I1209 09:09:26.141134 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 09 09:09:26 crc kubenswrapper[4786]: I1209 09:09:26.151628 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 09 09:09:26 crc kubenswrapper[4786]: I1209 09:09:26.163293 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 09 09:09:26 crc kubenswrapper[4786]: I1209 09:09:26.186461 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e4f1885-8b7d-4c02-9a83-22eb4aa3c7f4-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8e4f1885-8b7d-4c02-9a83-22eb4aa3c7f4\") " pod="openstack/nova-cell1-conductor-0" Dec 09 09:09:26 crc kubenswrapper[4786]: I1209 09:09:26.186682 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e4f1885-8b7d-4c02-9a83-22eb4aa3c7f4-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8e4f1885-8b7d-4c02-9a83-22eb4aa3c7f4\") " pod="openstack/nova-cell1-conductor-0" Dec 09 09:09:26 crc kubenswrapper[4786]: I1209 09:09:26.186730 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzcq8\" (UniqueName: \"kubernetes.io/projected/8e4f1885-8b7d-4c02-9a83-22eb4aa3c7f4-kube-api-access-tzcq8\") pod \"nova-cell1-conductor-0\" (UID: \"8e4f1885-8b7d-4c02-9a83-22eb4aa3c7f4\") " 
pod="openstack/nova-cell1-conductor-0" Dec 09 09:09:26 crc kubenswrapper[4786]: I1209 09:09:26.289302 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e4f1885-8b7d-4c02-9a83-22eb4aa3c7f4-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8e4f1885-8b7d-4c02-9a83-22eb4aa3c7f4\") " pod="openstack/nova-cell1-conductor-0" Dec 09 09:09:26 crc kubenswrapper[4786]: I1209 09:09:26.289389 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzcq8\" (UniqueName: \"kubernetes.io/projected/8e4f1885-8b7d-4c02-9a83-22eb4aa3c7f4-kube-api-access-tzcq8\") pod \"nova-cell1-conductor-0\" (UID: \"8e4f1885-8b7d-4c02-9a83-22eb4aa3c7f4\") " pod="openstack/nova-cell1-conductor-0" Dec 09 09:09:26 crc kubenswrapper[4786]: I1209 09:09:26.289597 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e4f1885-8b7d-4c02-9a83-22eb4aa3c7f4-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8e4f1885-8b7d-4c02-9a83-22eb4aa3c7f4\") " pod="openstack/nova-cell1-conductor-0" Dec 09 09:09:26 crc kubenswrapper[4786]: I1209 09:09:26.297528 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e4f1885-8b7d-4c02-9a83-22eb4aa3c7f4-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8e4f1885-8b7d-4c02-9a83-22eb4aa3c7f4\") " pod="openstack/nova-cell1-conductor-0" Dec 09 09:09:26 crc kubenswrapper[4786]: I1209 09:09:26.298509 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e4f1885-8b7d-4c02-9a83-22eb4aa3c7f4-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8e4f1885-8b7d-4c02-9a83-22eb4aa3c7f4\") " pod="openstack/nova-cell1-conductor-0" Dec 09 09:09:26 crc kubenswrapper[4786]: I1209 09:09:26.313619 4786 prober.go:107] 
"Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="12b8d711-d0e0-4a56-9363-42d5a5b4f4a1" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.213:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 09 09:09:26 crc kubenswrapper[4786]: I1209 09:09:26.313997 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="12b8d711-d0e0-4a56-9363-42d5a5b4f4a1" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.213:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 09 09:09:26 crc kubenswrapper[4786]: I1209 09:09:26.318199 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzcq8\" (UniqueName: \"kubernetes.io/projected/8e4f1885-8b7d-4c02-9a83-22eb4aa3c7f4-kube-api-access-tzcq8\") pod \"nova-cell1-conductor-0\" (UID: \"8e4f1885-8b7d-4c02-9a83-22eb4aa3c7f4\") " pod="openstack/nova-cell1-conductor-0" Dec 09 09:09:26 crc kubenswrapper[4786]: I1209 09:09:26.470158 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 09 09:09:27 crc kubenswrapper[4786]: W1209 09:09:26.931626 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e4f1885_8b7d_4c02_9a83_22eb4aa3c7f4.slice/crio-2d71d9e8f24fe5a0371f1cb7a6a3e8af6684f8ee11f88c077d2b652fc4edd36e WatchSource:0}: Error finding container 2d71d9e8f24fe5a0371f1cb7a6a3e8af6684f8ee11f88c077d2b652fc4edd36e: Status 404 returned error can't find the container with id 2d71d9e8f24fe5a0371f1cb7a6a3e8af6684f8ee11f88c077d2b652fc4edd36e Dec 09 09:09:27 crc kubenswrapper[4786]: I1209 09:09:26.936224 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 09 09:09:27 crc kubenswrapper[4786]: I1209 09:09:27.089874 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"8e4f1885-8b7d-4c02-9a83-22eb4aa3c7f4","Type":"ContainerStarted","Data":"2d71d9e8f24fe5a0371f1cb7a6a3e8af6684f8ee11f88c077d2b652fc4edd36e"} Dec 09 09:09:27 crc kubenswrapper[4786]: I1209 09:09:27.226625 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 09 09:09:27 crc kubenswrapper[4786]: I1209 09:09:27.273824 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 09 09:09:28 crc kubenswrapper[4786]: I1209 09:09:28.106528 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"8e4f1885-8b7d-4c02-9a83-22eb4aa3c7f4","Type":"ContainerStarted","Data":"f58266c1430147c17071a08ecdfae8b525df861c0af61234d31d04442f39c71c"} Dec 09 09:09:28 crc kubenswrapper[4786]: I1209 09:09:28.134298 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.134276567 podStartE2EDuration="2.134276567s" podCreationTimestamp="2025-12-09 
09:09:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:09:28.124275298 +0000 UTC m=+1534.007896524" watchObservedRunningTime="2025-12-09 09:09:28.134276567 +0000 UTC m=+1534.017897793" Dec 09 09:09:28 crc kubenswrapper[4786]: I1209 09:09:28.150537 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 09 09:09:28 crc kubenswrapper[4786]: I1209 09:09:28.728904 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 09:09:28 crc kubenswrapper[4786]: I1209 09:09:28.866918 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c99dffbb-825d-4a73-b225-a4660ef65ff9-combined-ca-bundle\") pod \"c99dffbb-825d-4a73-b225-a4660ef65ff9\" (UID: \"c99dffbb-825d-4a73-b225-a4660ef65ff9\") " Dec 09 09:09:28 crc kubenswrapper[4786]: I1209 09:09:28.867051 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c99dffbb-825d-4a73-b225-a4660ef65ff9-config-data\") pod \"c99dffbb-825d-4a73-b225-a4660ef65ff9\" (UID: \"c99dffbb-825d-4a73-b225-a4660ef65ff9\") " Dec 09 09:09:28 crc kubenswrapper[4786]: I1209 09:09:28.867105 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c99dffbb-825d-4a73-b225-a4660ef65ff9-log-httpd\") pod \"c99dffbb-825d-4a73-b225-a4660ef65ff9\" (UID: \"c99dffbb-825d-4a73-b225-a4660ef65ff9\") " Dec 09 09:09:28 crc kubenswrapper[4786]: I1209 09:09:28.867151 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c99dffbb-825d-4a73-b225-a4660ef65ff9-run-httpd\") pod \"c99dffbb-825d-4a73-b225-a4660ef65ff9\" (UID: 
\"c99dffbb-825d-4a73-b225-a4660ef65ff9\") " Dec 09 09:09:28 crc kubenswrapper[4786]: I1209 09:09:28.867188 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7cqr\" (UniqueName: \"kubernetes.io/projected/c99dffbb-825d-4a73-b225-a4660ef65ff9-kube-api-access-l7cqr\") pod \"c99dffbb-825d-4a73-b225-a4660ef65ff9\" (UID: \"c99dffbb-825d-4a73-b225-a4660ef65ff9\") " Dec 09 09:09:28 crc kubenswrapper[4786]: I1209 09:09:28.867543 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c99dffbb-825d-4a73-b225-a4660ef65ff9-sg-core-conf-yaml\") pod \"c99dffbb-825d-4a73-b225-a4660ef65ff9\" (UID: \"c99dffbb-825d-4a73-b225-a4660ef65ff9\") " Dec 09 09:09:28 crc kubenswrapper[4786]: I1209 09:09:28.867643 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c99dffbb-825d-4a73-b225-a4660ef65ff9-scripts\") pod \"c99dffbb-825d-4a73-b225-a4660ef65ff9\" (UID: \"c99dffbb-825d-4a73-b225-a4660ef65ff9\") " Dec 09 09:09:28 crc kubenswrapper[4786]: I1209 09:09:28.903524 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c99dffbb-825d-4a73-b225-a4660ef65ff9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "c99dffbb-825d-4a73-b225-a4660ef65ff9" (UID: "c99dffbb-825d-4a73-b225-a4660ef65ff9"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:09:28 crc kubenswrapper[4786]: I1209 09:09:28.912680 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c99dffbb-825d-4a73-b225-a4660ef65ff9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "c99dffbb-825d-4a73-b225-a4660ef65ff9" (UID: "c99dffbb-825d-4a73-b225-a4660ef65ff9"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:09:28 crc kubenswrapper[4786]: I1209 09:09:28.924544 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c99dffbb-825d-4a73-b225-a4660ef65ff9-kube-api-access-l7cqr" (OuterVolumeSpecName: "kube-api-access-l7cqr") pod "c99dffbb-825d-4a73-b225-a4660ef65ff9" (UID: "c99dffbb-825d-4a73-b225-a4660ef65ff9"). InnerVolumeSpecName "kube-api-access-l7cqr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:09:28 crc kubenswrapper[4786]: I1209 09:09:28.955703 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c99dffbb-825d-4a73-b225-a4660ef65ff9-scripts" (OuterVolumeSpecName: "scripts") pod "c99dffbb-825d-4a73-b225-a4660ef65ff9" (UID: "c99dffbb-825d-4a73-b225-a4660ef65ff9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.025996 4786 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c99dffbb-825d-4a73-b225-a4660ef65ff9-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.026035 4786 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c99dffbb-825d-4a73-b225-a4660ef65ff9-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.026050 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7cqr\" (UniqueName: \"kubernetes.io/projected/c99dffbb-825d-4a73-b225-a4660ef65ff9-kube-api-access-l7cqr\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.026063 4786 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c99dffbb-825d-4a73-b225-a4660ef65ff9-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:29 
crc kubenswrapper[4786]: I1209 09:09:29.062563 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c99dffbb-825d-4a73-b225-a4660ef65ff9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "c99dffbb-825d-4a73-b225-a4660ef65ff9" (UID: "c99dffbb-825d-4a73-b225-a4660ef65ff9"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.109267 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c99dffbb-825d-4a73-b225-a4660ef65ff9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c99dffbb-825d-4a73-b225-a4660ef65ff9" (UID: "c99dffbb-825d-4a73-b225-a4660ef65ff9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.128534 4786 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c99dffbb-825d-4a73-b225-a4660ef65ff9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.129242 4786 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c99dffbb-825d-4a73-b225-a4660ef65ff9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.140363 4786 generic.go:334] "Generic (PLEG): container finished" podID="c99dffbb-825d-4a73-b225-a4660ef65ff9" containerID="983ed1d12a1eb1b6afeab3acb16d7cae29778bdc595b3c7fef0c56c384cbce53" exitCode=0 Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.140987 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.141681 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c99dffbb-825d-4a73-b225-a4660ef65ff9","Type":"ContainerDied","Data":"983ed1d12a1eb1b6afeab3acb16d7cae29778bdc595b3c7fef0c56c384cbce53"} Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.141753 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c99dffbb-825d-4a73-b225-a4660ef65ff9","Type":"ContainerDied","Data":"54a09141972cafbe43e89f5b685845c4e33fbc44f6f45d63196a6f84303fbd1b"} Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.141776 4786 scope.go:117] "RemoveContainer" containerID="b654d1ea251f96df7b7de68733505bea974447584231a6cec29933287c83d896" Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.142978 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.184269 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c99dffbb-825d-4a73-b225-a4660ef65ff9-config-data" (OuterVolumeSpecName: "config-data") pod "c99dffbb-825d-4a73-b225-a4660ef65ff9" (UID: "c99dffbb-825d-4a73-b225-a4660ef65ff9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.189683 4786 scope.go:117] "RemoveContainer" containerID="c40b3fbbcaa3b18b2a95536a8193356ac2279c03e9828d353c148308ba74184f" Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.218666 4786 scope.go:117] "RemoveContainer" containerID="983ed1d12a1eb1b6afeab3acb16d7cae29778bdc595b3c7fef0c56c384cbce53" Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.234837 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c99dffbb-825d-4a73-b225-a4660ef65ff9-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.253674 4786 scope.go:117] "RemoveContainer" containerID="fbb173b6cc82623f6bd3e8dd8bda07bed0293f209be3fddd69c5573110726be2" Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.297600 4786 scope.go:117] "RemoveContainer" containerID="b654d1ea251f96df7b7de68733505bea974447584231a6cec29933287c83d896" Dec 09 09:09:29 crc kubenswrapper[4786]: E1209 09:09:29.298126 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b654d1ea251f96df7b7de68733505bea974447584231a6cec29933287c83d896\": container with ID starting with b654d1ea251f96df7b7de68733505bea974447584231a6cec29933287c83d896 not found: ID does not exist" containerID="b654d1ea251f96df7b7de68733505bea974447584231a6cec29933287c83d896" Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.298179 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b654d1ea251f96df7b7de68733505bea974447584231a6cec29933287c83d896"} err="failed to get container status \"b654d1ea251f96df7b7de68733505bea974447584231a6cec29933287c83d896\": rpc error: code = NotFound desc = could not find container \"b654d1ea251f96df7b7de68733505bea974447584231a6cec29933287c83d896\": container with ID starting with 
b654d1ea251f96df7b7de68733505bea974447584231a6cec29933287c83d896 not found: ID does not exist" Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.298215 4786 scope.go:117] "RemoveContainer" containerID="c40b3fbbcaa3b18b2a95536a8193356ac2279c03e9828d353c148308ba74184f" Dec 09 09:09:29 crc kubenswrapper[4786]: E1209 09:09:29.300855 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c40b3fbbcaa3b18b2a95536a8193356ac2279c03e9828d353c148308ba74184f\": container with ID starting with c40b3fbbcaa3b18b2a95536a8193356ac2279c03e9828d353c148308ba74184f not found: ID does not exist" containerID="c40b3fbbcaa3b18b2a95536a8193356ac2279c03e9828d353c148308ba74184f" Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.300891 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c40b3fbbcaa3b18b2a95536a8193356ac2279c03e9828d353c148308ba74184f"} err="failed to get container status \"c40b3fbbcaa3b18b2a95536a8193356ac2279c03e9828d353c148308ba74184f\": rpc error: code = NotFound desc = could not find container \"c40b3fbbcaa3b18b2a95536a8193356ac2279c03e9828d353c148308ba74184f\": container with ID starting with c40b3fbbcaa3b18b2a95536a8193356ac2279c03e9828d353c148308ba74184f not found: ID does not exist" Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.300910 4786 scope.go:117] "RemoveContainer" containerID="983ed1d12a1eb1b6afeab3acb16d7cae29778bdc595b3c7fef0c56c384cbce53" Dec 09 09:09:29 crc kubenswrapper[4786]: E1209 09:09:29.303375 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"983ed1d12a1eb1b6afeab3acb16d7cae29778bdc595b3c7fef0c56c384cbce53\": container with ID starting with 983ed1d12a1eb1b6afeab3acb16d7cae29778bdc595b3c7fef0c56c384cbce53 not found: ID does not exist" containerID="983ed1d12a1eb1b6afeab3acb16d7cae29778bdc595b3c7fef0c56c384cbce53" Dec 09 09:09:29 crc 
kubenswrapper[4786]: I1209 09:09:29.303416 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"983ed1d12a1eb1b6afeab3acb16d7cae29778bdc595b3c7fef0c56c384cbce53"} err="failed to get container status \"983ed1d12a1eb1b6afeab3acb16d7cae29778bdc595b3c7fef0c56c384cbce53\": rpc error: code = NotFound desc = could not find container \"983ed1d12a1eb1b6afeab3acb16d7cae29778bdc595b3c7fef0c56c384cbce53\": container with ID starting with 983ed1d12a1eb1b6afeab3acb16d7cae29778bdc595b3c7fef0c56c384cbce53 not found: ID does not exist" Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.303471 4786 scope.go:117] "RemoveContainer" containerID="fbb173b6cc82623f6bd3e8dd8bda07bed0293f209be3fddd69c5573110726be2" Dec 09 09:09:29 crc kubenswrapper[4786]: E1209 09:09:29.309553 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbb173b6cc82623f6bd3e8dd8bda07bed0293f209be3fddd69c5573110726be2\": container with ID starting with fbb173b6cc82623f6bd3e8dd8bda07bed0293f209be3fddd69c5573110726be2 not found: ID does not exist" containerID="fbb173b6cc82623f6bd3e8dd8bda07bed0293f209be3fddd69c5573110726be2" Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.309602 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbb173b6cc82623f6bd3e8dd8bda07bed0293f209be3fddd69c5573110726be2"} err="failed to get container status \"fbb173b6cc82623f6bd3e8dd8bda07bed0293f209be3fddd69c5573110726be2\": rpc error: code = NotFound desc = could not find container \"fbb173b6cc82623f6bd3e8dd8bda07bed0293f209be3fddd69c5573110726be2\": container with ID starting with fbb173b6cc82623f6bd3e8dd8bda07bed0293f209be3fddd69c5573110726be2 not found: ID does not exist" Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.488682 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 09:09:29 crc kubenswrapper[4786]: 
I1209 09:09:29.512323 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.544457 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 09 09:09:29 crc kubenswrapper[4786]: E1209 09:09:29.545532 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c99dffbb-825d-4a73-b225-a4660ef65ff9" containerName="proxy-httpd" Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.545559 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="c99dffbb-825d-4a73-b225-a4660ef65ff9" containerName="proxy-httpd" Dec 09 09:09:29 crc kubenswrapper[4786]: E1209 09:09:29.545571 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c99dffbb-825d-4a73-b225-a4660ef65ff9" containerName="ceilometer-central-agent" Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.545580 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="c99dffbb-825d-4a73-b225-a4660ef65ff9" containerName="ceilometer-central-agent" Dec 09 09:09:29 crc kubenswrapper[4786]: E1209 09:09:29.545596 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c99dffbb-825d-4a73-b225-a4660ef65ff9" containerName="sg-core" Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.545602 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="c99dffbb-825d-4a73-b225-a4660ef65ff9" containerName="sg-core" Dec 09 09:09:29 crc kubenswrapper[4786]: E1209 09:09:29.545621 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c99dffbb-825d-4a73-b225-a4660ef65ff9" containerName="ceilometer-notification-agent" Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.545630 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="c99dffbb-825d-4a73-b225-a4660ef65ff9" containerName="ceilometer-notification-agent" Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.545845 4786 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="c99dffbb-825d-4a73-b225-a4660ef65ff9" containerName="ceilometer-notification-agent" Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.545867 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="c99dffbb-825d-4a73-b225-a4660ef65ff9" containerName="ceilometer-central-agent" Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.545884 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="c99dffbb-825d-4a73-b225-a4660ef65ff9" containerName="sg-core" Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.545899 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="c99dffbb-825d-4a73-b225-a4660ef65ff9" containerName="proxy-httpd" Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.548464 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.553151 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.553403 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.553594 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.582180 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.644227 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9977d7da-e014-4be3-bf7f-0740d28c1670-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9977d7da-e014-4be3-bf7f-0740d28c1670\") " pod="openstack/ceilometer-0" Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.644276 4786 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9977d7da-e014-4be3-bf7f-0740d28c1670-config-data\") pod \"ceilometer-0\" (UID: \"9977d7da-e014-4be3-bf7f-0740d28c1670\") " pod="openstack/ceilometer-0" Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.644297 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9977d7da-e014-4be3-bf7f-0740d28c1670-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9977d7da-e014-4be3-bf7f-0740d28c1670\") " pod="openstack/ceilometer-0" Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.644314 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmzx6\" (UniqueName: \"kubernetes.io/projected/9977d7da-e014-4be3-bf7f-0740d28c1670-kube-api-access-jmzx6\") pod \"ceilometer-0\" (UID: \"9977d7da-e014-4be3-bf7f-0740d28c1670\") " pod="openstack/ceilometer-0" Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.644362 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9977d7da-e014-4be3-bf7f-0740d28c1670-run-httpd\") pod \"ceilometer-0\" (UID: \"9977d7da-e014-4be3-bf7f-0740d28c1670\") " pod="openstack/ceilometer-0" Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.644448 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9977d7da-e014-4be3-bf7f-0740d28c1670-scripts\") pod \"ceilometer-0\" (UID: \"9977d7da-e014-4be3-bf7f-0740d28c1670\") " pod="openstack/ceilometer-0" Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.644475 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/9977d7da-e014-4be3-bf7f-0740d28c1670-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9977d7da-e014-4be3-bf7f-0740d28c1670\") " pod="openstack/ceilometer-0" Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.644497 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9977d7da-e014-4be3-bf7f-0740d28c1670-log-httpd\") pod \"ceilometer-0\" (UID: \"9977d7da-e014-4be3-bf7f-0740d28c1670\") " pod="openstack/ceilometer-0" Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.746627 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9977d7da-e014-4be3-bf7f-0740d28c1670-log-httpd\") pod \"ceilometer-0\" (UID: \"9977d7da-e014-4be3-bf7f-0740d28c1670\") " pod="openstack/ceilometer-0" Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.746792 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9977d7da-e014-4be3-bf7f-0740d28c1670-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9977d7da-e014-4be3-bf7f-0740d28c1670\") " pod="openstack/ceilometer-0" Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.746825 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9977d7da-e014-4be3-bf7f-0740d28c1670-config-data\") pod \"ceilometer-0\" (UID: \"9977d7da-e014-4be3-bf7f-0740d28c1670\") " pod="openstack/ceilometer-0" Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.746841 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9977d7da-e014-4be3-bf7f-0740d28c1670-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9977d7da-e014-4be3-bf7f-0740d28c1670\") " pod="openstack/ceilometer-0" Dec 09 09:09:29 crc 
kubenswrapper[4786]: I1209 09:09:29.746859 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmzx6\" (UniqueName: \"kubernetes.io/projected/9977d7da-e014-4be3-bf7f-0740d28c1670-kube-api-access-jmzx6\") pod \"ceilometer-0\" (UID: \"9977d7da-e014-4be3-bf7f-0740d28c1670\") " pod="openstack/ceilometer-0" Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.746914 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9977d7da-e014-4be3-bf7f-0740d28c1670-run-httpd\") pod \"ceilometer-0\" (UID: \"9977d7da-e014-4be3-bf7f-0740d28c1670\") " pod="openstack/ceilometer-0" Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.747178 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9977d7da-e014-4be3-bf7f-0740d28c1670-log-httpd\") pod \"ceilometer-0\" (UID: \"9977d7da-e014-4be3-bf7f-0740d28c1670\") " pod="openstack/ceilometer-0" Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.747575 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9977d7da-e014-4be3-bf7f-0740d28c1670-scripts\") pod \"ceilometer-0\" (UID: \"9977d7da-e014-4be3-bf7f-0740d28c1670\") " pod="openstack/ceilometer-0" Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.747597 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9977d7da-e014-4be3-bf7f-0740d28c1670-run-httpd\") pod \"ceilometer-0\" (UID: \"9977d7da-e014-4be3-bf7f-0740d28c1670\") " pod="openstack/ceilometer-0" Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.747618 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9977d7da-e014-4be3-bf7f-0740d28c1670-ceilometer-tls-certs\") pod 
\"ceilometer-0\" (UID: \"9977d7da-e014-4be3-bf7f-0740d28c1670\") " pod="openstack/ceilometer-0" Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.751942 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9977d7da-e014-4be3-bf7f-0740d28c1670-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9977d7da-e014-4be3-bf7f-0740d28c1670\") " pod="openstack/ceilometer-0" Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.755999 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9977d7da-e014-4be3-bf7f-0740d28c1670-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9977d7da-e014-4be3-bf7f-0740d28c1670\") " pod="openstack/ceilometer-0" Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.756281 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9977d7da-e014-4be3-bf7f-0740d28c1670-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9977d7da-e014-4be3-bf7f-0740d28c1670\") " pod="openstack/ceilometer-0" Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.756528 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9977d7da-e014-4be3-bf7f-0740d28c1670-config-data\") pod \"ceilometer-0\" (UID: \"9977d7da-e014-4be3-bf7f-0740d28c1670\") " pod="openstack/ceilometer-0" Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.757882 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9977d7da-e014-4be3-bf7f-0740d28c1670-scripts\") pod \"ceilometer-0\" (UID: \"9977d7da-e014-4be3-bf7f-0740d28c1670\") " pod="openstack/ceilometer-0" Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.778803 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmzx6\" 
(UniqueName: \"kubernetes.io/projected/9977d7da-e014-4be3-bf7f-0740d28c1670-kube-api-access-jmzx6\") pod \"ceilometer-0\" (UID: \"9977d7da-e014-4be3-bf7f-0740d28c1670\") " pod="openstack/ceilometer-0" Dec 09 09:09:29 crc kubenswrapper[4786]: I1209 09:09:29.938415 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 09:09:30 crc kubenswrapper[4786]: I1209 09:09:30.518873 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 09:09:30 crc kubenswrapper[4786]: W1209 09:09:30.522559 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9977d7da_e014_4be3_bf7f_0740d28c1670.slice/crio-7492edcc424c78cae10e8445bcd2ce219fe933039bfbaf48414397c06ca5aa86 WatchSource:0}: Error finding container 7492edcc424c78cae10e8445bcd2ce219fe933039bfbaf48414397c06ca5aa86: Status 404 returned error can't find the container with id 7492edcc424c78cae10e8445bcd2ce219fe933039bfbaf48414397c06ca5aa86 Dec 09 09:09:30 crc kubenswrapper[4786]: I1209 09:09:30.636915 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 09 09:09:31 crc kubenswrapper[4786]: I1209 09:09:31.180675 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9977d7da-e014-4be3-bf7f-0740d28c1670","Type":"ContainerStarted","Data":"f3e60cb064f6add2762a162367a3d972de0cdf5c1bbf08711df9170799e92e3c"} Dec 09 09:09:31 crc kubenswrapper[4786]: I1209 09:09:31.181231 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9977d7da-e014-4be3-bf7f-0740d28c1670","Type":"ContainerStarted","Data":"7492edcc424c78cae10e8445bcd2ce219fe933039bfbaf48414397c06ca5aa86"} Dec 09 09:09:31 crc kubenswrapper[4786]: I1209 09:09:31.204638 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="c99dffbb-825d-4a73-b225-a4660ef65ff9" path="/var/lib/kubelet/pods/c99dffbb-825d-4a73-b225-a4660ef65ff9/volumes" Dec 09 09:09:32 crc kubenswrapper[4786]: I1209 09:09:32.195379 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9977d7da-e014-4be3-bf7f-0740d28c1670","Type":"ContainerStarted","Data":"72685c6453673550f61b51fa8100c8fa1c9791b7a9ee91ac067da8d870fa648b"} Dec 09 09:09:32 crc kubenswrapper[4786]: I1209 09:09:32.195676 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9977d7da-e014-4be3-bf7f-0740d28c1670","Type":"ContainerStarted","Data":"dc903d6f90a0ba23a344b53c06d7a132ea6150d0ec8824341be67f21f97c9e5e"} Dec 09 09:09:34 crc kubenswrapper[4786]: I1209 09:09:34.219231 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9977d7da-e014-4be3-bf7f-0740d28c1670","Type":"ContainerStarted","Data":"c7806d57be37bee52324a9bac80dd509d8d4037059a0be886368008b1c20bf88"} Dec 09 09:09:34 crc kubenswrapper[4786]: I1209 09:09:34.220017 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 09 09:09:34 crc kubenswrapper[4786]: I1209 09:09:34.251052 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.845455435 podStartE2EDuration="5.251028797s" podCreationTimestamp="2025-12-09 09:09:29 +0000 UTC" firstStartedPulling="2025-12-09 09:09:30.528648511 +0000 UTC m=+1536.412269737" lastFinishedPulling="2025-12-09 09:09:32.934221863 +0000 UTC m=+1538.817843099" observedRunningTime="2025-12-09 09:09:34.241014208 +0000 UTC m=+1540.124635434" watchObservedRunningTime="2025-12-09 09:09:34.251028797 +0000 UTC m=+1540.134650033" Dec 09 09:09:34 crc kubenswrapper[4786]: I1209 09:09:34.269574 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 09 09:09:34 crc kubenswrapper[4786]: 
I1209 09:09:34.271149 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 09 09:09:34 crc kubenswrapper[4786]: I1209 09:09:34.276459 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 09 09:09:35 crc kubenswrapper[4786]: I1209 09:09:35.066663 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 09 09:09:35 crc kubenswrapper[4786]: I1209 09:09:35.207152 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4wzq\" (UniqueName: \"kubernetes.io/projected/3a05d765-9884-42f9-a0b1-2f475a88fd46-kube-api-access-d4wzq\") pod \"3a05d765-9884-42f9-a0b1-2f475a88fd46\" (UID: \"3a05d765-9884-42f9-a0b1-2f475a88fd46\") " Dec 09 09:09:35 crc kubenswrapper[4786]: I1209 09:09:35.207207 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a05d765-9884-42f9-a0b1-2f475a88fd46-config-data\") pod \"3a05d765-9884-42f9-a0b1-2f475a88fd46\" (UID: \"3a05d765-9884-42f9-a0b1-2f475a88fd46\") " Dec 09 09:09:35 crc kubenswrapper[4786]: I1209 09:09:35.207447 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a05d765-9884-42f9-a0b1-2f475a88fd46-combined-ca-bundle\") pod \"3a05d765-9884-42f9-a0b1-2f475a88fd46\" (UID: \"3a05d765-9884-42f9-a0b1-2f475a88fd46\") " Dec 09 09:09:35 crc kubenswrapper[4786]: I1209 09:09:35.220858 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a05d765-9884-42f9-a0b1-2f475a88fd46-kube-api-access-d4wzq" (OuterVolumeSpecName: "kube-api-access-d4wzq") pod "3a05d765-9884-42f9-a0b1-2f475a88fd46" (UID: "3a05d765-9884-42f9-a0b1-2f475a88fd46"). InnerVolumeSpecName "kube-api-access-d4wzq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:09:35 crc kubenswrapper[4786]: I1209 09:09:35.240354 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a05d765-9884-42f9-a0b1-2f475a88fd46-config-data" (OuterVolumeSpecName: "config-data") pod "3a05d765-9884-42f9-a0b1-2f475a88fd46" (UID: "3a05d765-9884-42f9-a0b1-2f475a88fd46"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:09:35 crc kubenswrapper[4786]: I1209 09:09:35.240884 4786 generic.go:334] "Generic (PLEG): container finished" podID="3a05d765-9884-42f9-a0b1-2f475a88fd46" containerID="263c082478e16cc2453028fa3b35d5ec281963c366f5a14a628c3c0a933f878c" exitCode=137 Dec 09 09:09:35 crc kubenswrapper[4786]: I1209 09:09:35.241606 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3a05d765-9884-42f9-a0b1-2f475a88fd46","Type":"ContainerDied","Data":"263c082478e16cc2453028fa3b35d5ec281963c366f5a14a628c3c0a933f878c"} Dec 09 09:09:35 crc kubenswrapper[4786]: I1209 09:09:35.241656 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"3a05d765-9884-42f9-a0b1-2f475a88fd46","Type":"ContainerDied","Data":"d7aec3dc4c5599bd9441ca0eaa5eccd56a80d89ca1a8eeaa2a1cacca36c5511c"} Dec 09 09:09:35 crc kubenswrapper[4786]: I1209 09:09:35.241678 4786 scope.go:117] "RemoveContainer" containerID="263c082478e16cc2453028fa3b35d5ec281963c366f5a14a628c3c0a933f878c" Dec 09 09:09:35 crc kubenswrapper[4786]: I1209 09:09:35.241815 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 09 09:09:35 crc kubenswrapper[4786]: I1209 09:09:35.250409 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a05d765-9884-42f9-a0b1-2f475a88fd46-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3a05d765-9884-42f9-a0b1-2f475a88fd46" (UID: "3a05d765-9884-42f9-a0b1-2f475a88fd46"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:09:35 crc kubenswrapper[4786]: I1209 09:09:35.252062 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 09 09:09:35 crc kubenswrapper[4786]: I1209 09:09:35.253402 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 09 09:09:35 crc kubenswrapper[4786]: I1209 09:09:35.277080 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 09 09:09:35 crc kubenswrapper[4786]: I1209 09:09:35.277219 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 09 09:09:35 crc kubenswrapper[4786]: I1209 09:09:35.310260 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4wzq\" (UniqueName: \"kubernetes.io/projected/3a05d765-9884-42f9-a0b1-2f475a88fd46-kube-api-access-d4wzq\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:35 crc kubenswrapper[4786]: I1209 09:09:35.310302 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3a05d765-9884-42f9-a0b1-2f475a88fd46-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:35 crc kubenswrapper[4786]: I1209 09:09:35.310316 4786 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3a05d765-9884-42f9-a0b1-2f475a88fd46-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:35 
crc kubenswrapper[4786]: I1209 09:09:35.326085 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 09 09:09:35 crc kubenswrapper[4786]: I1209 09:09:35.346793 4786 scope.go:117] "RemoveContainer" containerID="263c082478e16cc2453028fa3b35d5ec281963c366f5a14a628c3c0a933f878c" Dec 09 09:09:35 crc kubenswrapper[4786]: E1209 09:09:35.351099 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"263c082478e16cc2453028fa3b35d5ec281963c366f5a14a628c3c0a933f878c\": container with ID starting with 263c082478e16cc2453028fa3b35d5ec281963c366f5a14a628c3c0a933f878c not found: ID does not exist" containerID="263c082478e16cc2453028fa3b35d5ec281963c366f5a14a628c3c0a933f878c" Dec 09 09:09:35 crc kubenswrapper[4786]: I1209 09:09:35.351155 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"263c082478e16cc2453028fa3b35d5ec281963c366f5a14a628c3c0a933f878c"} err="failed to get container status \"263c082478e16cc2453028fa3b35d5ec281963c366f5a14a628c3c0a933f878c\": rpc error: code = NotFound desc = could not find container \"263c082478e16cc2453028fa3b35d5ec281963c366f5a14a628c3c0a933f878c\": container with ID starting with 263c082478e16cc2453028fa3b35d5ec281963c366f5a14a628c3c0a933f878c not found: ID does not exist" Dec 09 09:09:35 crc kubenswrapper[4786]: I1209 09:09:35.580374 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 09 09:09:35 crc kubenswrapper[4786]: I1209 09:09:35.596870 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 09 09:09:35 crc kubenswrapper[4786]: I1209 09:09:35.611594 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 09 09:09:35 crc kubenswrapper[4786]: E1209 09:09:35.612085 4786 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="3a05d765-9884-42f9-a0b1-2f475a88fd46" containerName="nova-cell1-novncproxy-novncproxy" Dec 09 09:09:35 crc kubenswrapper[4786]: I1209 09:09:35.612107 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a05d765-9884-42f9-a0b1-2f475a88fd46" containerName="nova-cell1-novncproxy-novncproxy" Dec 09 09:09:35 crc kubenswrapper[4786]: I1209 09:09:35.612350 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a05d765-9884-42f9-a0b1-2f475a88fd46" containerName="nova-cell1-novncproxy-novncproxy" Dec 09 09:09:35 crc kubenswrapper[4786]: I1209 09:09:35.613145 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 09 09:09:35 crc kubenswrapper[4786]: I1209 09:09:35.615609 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 09 09:09:35 crc kubenswrapper[4786]: I1209 09:09:35.616187 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Dec 09 09:09:35 crc kubenswrapper[4786]: I1209 09:09:35.616490 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Dec 09 09:09:35 crc kubenswrapper[4786]: I1209 09:09:35.644506 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 09 09:09:35 crc kubenswrapper[4786]: I1209 09:09:35.717887 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/349ccf59-f627-4673-84e7-215fc9d15e27-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"349ccf59-f627-4673-84e7-215fc9d15e27\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 09:09:35 crc kubenswrapper[4786]: I1209 09:09:35.717953 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/349ccf59-f627-4673-84e7-215fc9d15e27-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"349ccf59-f627-4673-84e7-215fc9d15e27\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 09:09:35 crc kubenswrapper[4786]: I1209 09:09:35.717994 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/349ccf59-f627-4673-84e7-215fc9d15e27-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"349ccf59-f627-4673-84e7-215fc9d15e27\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 09:09:35 crc kubenswrapper[4786]: I1209 09:09:35.718017 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpxp6\" (UniqueName: \"kubernetes.io/projected/349ccf59-f627-4673-84e7-215fc9d15e27-kube-api-access-vpxp6\") pod \"nova-cell1-novncproxy-0\" (UID: \"349ccf59-f627-4673-84e7-215fc9d15e27\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 09:09:35 crc kubenswrapper[4786]: I1209 09:09:35.718114 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/349ccf59-f627-4673-84e7-215fc9d15e27-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"349ccf59-f627-4673-84e7-215fc9d15e27\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 09:09:35 crc kubenswrapper[4786]: I1209 09:09:35.820273 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/349ccf59-f627-4673-84e7-215fc9d15e27-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"349ccf59-f627-4673-84e7-215fc9d15e27\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 09:09:35 crc kubenswrapper[4786]: I1209 09:09:35.820693 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/349ccf59-f627-4673-84e7-215fc9d15e27-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"349ccf59-f627-4673-84e7-215fc9d15e27\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 09:09:35 crc kubenswrapper[4786]: I1209 09:09:35.820769 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/349ccf59-f627-4673-84e7-215fc9d15e27-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"349ccf59-f627-4673-84e7-215fc9d15e27\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 09:09:35 crc kubenswrapper[4786]: I1209 09:09:35.820817 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vpxp6\" (UniqueName: \"kubernetes.io/projected/349ccf59-f627-4673-84e7-215fc9d15e27-kube-api-access-vpxp6\") pod \"nova-cell1-novncproxy-0\" (UID: \"349ccf59-f627-4673-84e7-215fc9d15e27\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 09:09:35 crc kubenswrapper[4786]: I1209 09:09:35.820928 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/349ccf59-f627-4673-84e7-215fc9d15e27-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"349ccf59-f627-4673-84e7-215fc9d15e27\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 09:09:35 crc kubenswrapper[4786]: I1209 09:09:35.826498 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/349ccf59-f627-4673-84e7-215fc9d15e27-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"349ccf59-f627-4673-84e7-215fc9d15e27\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 09:09:35 crc kubenswrapper[4786]: I1209 09:09:35.826866 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/349ccf59-f627-4673-84e7-215fc9d15e27-combined-ca-bundle\") pod 
\"nova-cell1-novncproxy-0\" (UID: \"349ccf59-f627-4673-84e7-215fc9d15e27\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 09:09:35 crc kubenswrapper[4786]: I1209 09:09:35.829119 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/349ccf59-f627-4673-84e7-215fc9d15e27-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"349ccf59-f627-4673-84e7-215fc9d15e27\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 09:09:35 crc kubenswrapper[4786]: I1209 09:09:35.829454 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/349ccf59-f627-4673-84e7-215fc9d15e27-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"349ccf59-f627-4673-84e7-215fc9d15e27\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 09:09:35 crc kubenswrapper[4786]: I1209 09:09:35.842498 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpxp6\" (UniqueName: \"kubernetes.io/projected/349ccf59-f627-4673-84e7-215fc9d15e27-kube-api-access-vpxp6\") pod \"nova-cell1-novncproxy-0\" (UID: \"349ccf59-f627-4673-84e7-215fc9d15e27\") " pod="openstack/nova-cell1-novncproxy-0" Dec 09 09:09:35 crc kubenswrapper[4786]: I1209 09:09:35.942202 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 09 09:09:36 crc kubenswrapper[4786]: I1209 09:09:36.277224 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 09 09:09:36 crc kubenswrapper[4786]: I1209 09:09:36.378940 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 09 09:09:36 crc kubenswrapper[4786]: I1209 09:09:36.487475 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 09 09:09:36 crc kubenswrapper[4786]: W1209 09:09:36.506521 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod349ccf59_f627_4673_84e7_215fc9d15e27.slice/crio-975846e3b1563e1bc821a250cc490ac1968dfe56134e8ee6037d09d19a4a76fe WatchSource:0}: Error finding container 975846e3b1563e1bc821a250cc490ac1968dfe56134e8ee6037d09d19a4a76fe: Status 404 returned error can't find the container with id 975846e3b1563e1bc821a250cc490ac1968dfe56134e8ee6037d09d19a4a76fe Dec 09 09:09:36 crc kubenswrapper[4786]: I1209 09:09:36.554792 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Dec 09 09:09:36 crc kubenswrapper[4786]: I1209 09:09:36.611272 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-854bf756b5-vh9h2"] Dec 09 09:09:36 crc kubenswrapper[4786]: I1209 09:09:36.613208 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-854bf756b5-vh9h2" Dec 09 09:09:36 crc kubenswrapper[4786]: I1209 09:09:36.636347 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-854bf756b5-vh9h2"] Dec 09 09:09:36 crc kubenswrapper[4786]: I1209 09:09:36.747600 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brd6m\" (UniqueName: \"kubernetes.io/projected/013d5e9c-08ad-4c20-8feb-e57e6d3b91dc-kube-api-access-brd6m\") pod \"dnsmasq-dns-854bf756b5-vh9h2\" (UID: \"013d5e9c-08ad-4c20-8feb-e57e6d3b91dc\") " pod="openstack/dnsmasq-dns-854bf756b5-vh9h2" Dec 09 09:09:36 crc kubenswrapper[4786]: I1209 09:09:36.748089 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/013d5e9c-08ad-4c20-8feb-e57e6d3b91dc-config\") pod \"dnsmasq-dns-854bf756b5-vh9h2\" (UID: \"013d5e9c-08ad-4c20-8feb-e57e6d3b91dc\") " pod="openstack/dnsmasq-dns-854bf756b5-vh9h2" Dec 09 09:09:36 crc kubenswrapper[4786]: I1209 09:09:36.748304 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/013d5e9c-08ad-4c20-8feb-e57e6d3b91dc-ovsdbserver-nb\") pod \"dnsmasq-dns-854bf756b5-vh9h2\" (UID: \"013d5e9c-08ad-4c20-8feb-e57e6d3b91dc\") " pod="openstack/dnsmasq-dns-854bf756b5-vh9h2" Dec 09 09:09:36 crc kubenswrapper[4786]: I1209 09:09:36.748556 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/013d5e9c-08ad-4c20-8feb-e57e6d3b91dc-ovsdbserver-sb\") pod \"dnsmasq-dns-854bf756b5-vh9h2\" (UID: \"013d5e9c-08ad-4c20-8feb-e57e6d3b91dc\") " pod="openstack/dnsmasq-dns-854bf756b5-vh9h2" Dec 09 09:09:36 crc kubenswrapper[4786]: I1209 09:09:36.748638 4786 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/013d5e9c-08ad-4c20-8feb-e57e6d3b91dc-dns-swift-storage-0\") pod \"dnsmasq-dns-854bf756b5-vh9h2\" (UID: \"013d5e9c-08ad-4c20-8feb-e57e6d3b91dc\") " pod="openstack/dnsmasq-dns-854bf756b5-vh9h2" Dec 09 09:09:36 crc kubenswrapper[4786]: I1209 09:09:36.748729 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/013d5e9c-08ad-4c20-8feb-e57e6d3b91dc-dns-svc\") pod \"dnsmasq-dns-854bf756b5-vh9h2\" (UID: \"013d5e9c-08ad-4c20-8feb-e57e6d3b91dc\") " pod="openstack/dnsmasq-dns-854bf756b5-vh9h2" Dec 09 09:09:36 crc kubenswrapper[4786]: I1209 09:09:36.852869 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/013d5e9c-08ad-4c20-8feb-e57e6d3b91dc-config\") pod \"dnsmasq-dns-854bf756b5-vh9h2\" (UID: \"013d5e9c-08ad-4c20-8feb-e57e6d3b91dc\") " pod="openstack/dnsmasq-dns-854bf756b5-vh9h2" Dec 09 09:09:36 crc kubenswrapper[4786]: I1209 09:09:36.852944 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/013d5e9c-08ad-4c20-8feb-e57e6d3b91dc-ovsdbserver-nb\") pod \"dnsmasq-dns-854bf756b5-vh9h2\" (UID: \"013d5e9c-08ad-4c20-8feb-e57e6d3b91dc\") " pod="openstack/dnsmasq-dns-854bf756b5-vh9h2" Dec 09 09:09:36 crc kubenswrapper[4786]: I1209 09:09:36.852990 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/013d5e9c-08ad-4c20-8feb-e57e6d3b91dc-ovsdbserver-sb\") pod \"dnsmasq-dns-854bf756b5-vh9h2\" (UID: \"013d5e9c-08ad-4c20-8feb-e57e6d3b91dc\") " pod="openstack/dnsmasq-dns-854bf756b5-vh9h2" Dec 09 09:09:36 crc kubenswrapper[4786]: I1209 09:09:36.853020 4786 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/013d5e9c-08ad-4c20-8feb-e57e6d3b91dc-dns-swift-storage-0\") pod \"dnsmasq-dns-854bf756b5-vh9h2\" (UID: \"013d5e9c-08ad-4c20-8feb-e57e6d3b91dc\") " pod="openstack/dnsmasq-dns-854bf756b5-vh9h2" Dec 09 09:09:36 crc kubenswrapper[4786]: I1209 09:09:36.853052 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/013d5e9c-08ad-4c20-8feb-e57e6d3b91dc-dns-svc\") pod \"dnsmasq-dns-854bf756b5-vh9h2\" (UID: \"013d5e9c-08ad-4c20-8feb-e57e6d3b91dc\") " pod="openstack/dnsmasq-dns-854bf756b5-vh9h2" Dec 09 09:09:36 crc kubenswrapper[4786]: I1209 09:09:36.853107 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brd6m\" (UniqueName: \"kubernetes.io/projected/013d5e9c-08ad-4c20-8feb-e57e6d3b91dc-kube-api-access-brd6m\") pod \"dnsmasq-dns-854bf756b5-vh9h2\" (UID: \"013d5e9c-08ad-4c20-8feb-e57e6d3b91dc\") " pod="openstack/dnsmasq-dns-854bf756b5-vh9h2" Dec 09 09:09:36 crc kubenswrapper[4786]: I1209 09:09:36.854235 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/013d5e9c-08ad-4c20-8feb-e57e6d3b91dc-config\") pod \"dnsmasq-dns-854bf756b5-vh9h2\" (UID: \"013d5e9c-08ad-4c20-8feb-e57e6d3b91dc\") " pod="openstack/dnsmasq-dns-854bf756b5-vh9h2" Dec 09 09:09:36 crc kubenswrapper[4786]: I1209 09:09:36.854837 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/013d5e9c-08ad-4c20-8feb-e57e6d3b91dc-dns-swift-storage-0\") pod \"dnsmasq-dns-854bf756b5-vh9h2\" (UID: \"013d5e9c-08ad-4c20-8feb-e57e6d3b91dc\") " pod="openstack/dnsmasq-dns-854bf756b5-vh9h2" Dec 09 09:09:36 crc kubenswrapper[4786]: I1209 09:09:36.854845 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/013d5e9c-08ad-4c20-8feb-e57e6d3b91dc-ovsdbserver-sb\") pod \"dnsmasq-dns-854bf756b5-vh9h2\" (UID: \"013d5e9c-08ad-4c20-8feb-e57e6d3b91dc\") " pod="openstack/dnsmasq-dns-854bf756b5-vh9h2" Dec 09 09:09:36 crc kubenswrapper[4786]: I1209 09:09:36.855396 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/013d5e9c-08ad-4c20-8feb-e57e6d3b91dc-dns-svc\") pod \"dnsmasq-dns-854bf756b5-vh9h2\" (UID: \"013d5e9c-08ad-4c20-8feb-e57e6d3b91dc\") " pod="openstack/dnsmasq-dns-854bf756b5-vh9h2" Dec 09 09:09:36 crc kubenswrapper[4786]: I1209 09:09:36.855575 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/013d5e9c-08ad-4c20-8feb-e57e6d3b91dc-ovsdbserver-nb\") pod \"dnsmasq-dns-854bf756b5-vh9h2\" (UID: \"013d5e9c-08ad-4c20-8feb-e57e6d3b91dc\") " pod="openstack/dnsmasq-dns-854bf756b5-vh9h2" Dec 09 09:09:36 crc kubenswrapper[4786]: I1209 09:09:36.877689 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brd6m\" (UniqueName: \"kubernetes.io/projected/013d5e9c-08ad-4c20-8feb-e57e6d3b91dc-kube-api-access-brd6m\") pod \"dnsmasq-dns-854bf756b5-vh9h2\" (UID: \"013d5e9c-08ad-4c20-8feb-e57e6d3b91dc\") " pod="openstack/dnsmasq-dns-854bf756b5-vh9h2" Dec 09 09:09:36 crc kubenswrapper[4786]: I1209 09:09:36.963002 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-854bf756b5-vh9h2" Dec 09 09:09:37 crc kubenswrapper[4786]: I1209 09:09:37.204466 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a05d765-9884-42f9-a0b1-2f475a88fd46" path="/var/lib/kubelet/pods/3a05d765-9884-42f9-a0b1-2f475a88fd46/volumes" Dec 09 09:09:37 crc kubenswrapper[4786]: I1209 09:09:37.297746 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"349ccf59-f627-4673-84e7-215fc9d15e27","Type":"ContainerStarted","Data":"3bb1f99f74a0a2e6d538b0c75e28f38a48f51d0755db17cea026ca46e3263498"} Dec 09 09:09:37 crc kubenswrapper[4786]: I1209 09:09:37.297793 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"349ccf59-f627-4673-84e7-215fc9d15e27","Type":"ContainerStarted","Data":"975846e3b1563e1bc821a250cc490ac1968dfe56134e8ee6037d09d19a4a76fe"} Dec 09 09:09:37 crc kubenswrapper[4786]: I1209 09:09:37.323942 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.323922604 podStartE2EDuration="2.323922604s" podCreationTimestamp="2025-12-09 09:09:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:09:37.31171256 +0000 UTC m=+1543.195333786" watchObservedRunningTime="2025-12-09 09:09:37.323922604 +0000 UTC m=+1543.207543830" Dec 09 09:09:37 crc kubenswrapper[4786]: I1209 09:09:37.476835 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-854bf756b5-vh9h2"] Dec 09 09:09:38 crc kubenswrapper[4786]: I1209 09:09:38.310219 4786 generic.go:334] "Generic (PLEG): container finished" podID="013d5e9c-08ad-4c20-8feb-e57e6d3b91dc" containerID="4f4dfe53e5ed82e386afd569d5659dd0267a941e932afd59eae70b4c0a54e8e5" exitCode=0 Dec 09 09:09:38 crc kubenswrapper[4786]: I1209 09:09:38.311913 4786 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-854bf756b5-vh9h2" event={"ID":"013d5e9c-08ad-4c20-8feb-e57e6d3b91dc","Type":"ContainerDied","Data":"4f4dfe53e5ed82e386afd569d5659dd0267a941e932afd59eae70b4c0a54e8e5"} Dec 09 09:09:38 crc kubenswrapper[4786]: I1209 09:09:38.311954 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-854bf756b5-vh9h2" event={"ID":"013d5e9c-08ad-4c20-8feb-e57e6d3b91dc","Type":"ContainerStarted","Data":"620f21b0a3a62127b565c4485547872d2456b4c220855d3add5a88b2093d1d73"} Dec 09 09:09:39 crc kubenswrapper[4786]: I1209 09:09:39.323412 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-854bf756b5-vh9h2" event={"ID":"013d5e9c-08ad-4c20-8feb-e57e6d3b91dc","Type":"ContainerStarted","Data":"bfa6875fe227a6dedacb3ff11ab3cc0a8dcf9e1a8267e33926d7a38a1e7191d1"} Dec 09 09:09:39 crc kubenswrapper[4786]: I1209 09:09:39.324008 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-854bf756b5-vh9h2" Dec 09 09:09:39 crc kubenswrapper[4786]: I1209 09:09:39.360189 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-854bf756b5-vh9h2" podStartSLOduration=3.360160327 podStartE2EDuration="3.360160327s" podCreationTimestamp="2025-12-09 09:09:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:09:39.348536149 +0000 UTC m=+1545.232157375" watchObservedRunningTime="2025-12-09 09:09:39.360160327 +0000 UTC m=+1545.243781563" Dec 09 09:09:39 crc kubenswrapper[4786]: I1209 09:09:39.469072 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 09 09:09:39 crc kubenswrapper[4786]: I1209 09:09:39.469383 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="12b8d711-d0e0-4a56-9363-42d5a5b4f4a1" 
containerName="nova-api-log" containerID="cri-o://b52bfdcc3c90c78d023f89a6d18a5969a59f8f5502b6ae47663b4aa39d336504" gracePeriod=30 Dec 09 09:09:39 crc kubenswrapper[4786]: I1209 09:09:39.469542 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="12b8d711-d0e0-4a56-9363-42d5a5b4f4a1" containerName="nova-api-api" containerID="cri-o://65f003c95ca1a3efcc4a111dc9b48e5aab15413babdd96072923fa5bff4a1223" gracePeriod=30 Dec 09 09:09:40 crc kubenswrapper[4786]: I1209 09:09:40.351046 4786 generic.go:334] "Generic (PLEG): container finished" podID="12b8d711-d0e0-4a56-9363-42d5a5b4f4a1" containerID="b52bfdcc3c90c78d023f89a6d18a5969a59f8f5502b6ae47663b4aa39d336504" exitCode=143 Dec 09 09:09:40 crc kubenswrapper[4786]: I1209 09:09:40.351117 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"12b8d711-d0e0-4a56-9363-42d5a5b4f4a1","Type":"ContainerDied","Data":"b52bfdcc3c90c78d023f89a6d18a5969a59f8f5502b6ae47663b4aa39d336504"} Dec 09 09:09:40 crc kubenswrapper[4786]: I1209 09:09:40.942801 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 09 09:09:41 crc kubenswrapper[4786]: I1209 09:09:41.079605 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 09:09:41 crc kubenswrapper[4786]: I1209 09:09:41.079957 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9977d7da-e014-4be3-bf7f-0740d28c1670" containerName="ceilometer-notification-agent" containerID="cri-o://dc903d6f90a0ba23a344b53c06d7a132ea6150d0ec8824341be67f21f97c9e5e" gracePeriod=30 Dec 09 09:09:41 crc kubenswrapper[4786]: I1209 09:09:41.079972 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9977d7da-e014-4be3-bf7f-0740d28c1670" containerName="sg-core" 
containerID="cri-o://72685c6453673550f61b51fa8100c8fa1c9791b7a9ee91ac067da8d870fa648b" gracePeriod=30 Dec 09 09:09:41 crc kubenswrapper[4786]: I1209 09:09:41.080066 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9977d7da-e014-4be3-bf7f-0740d28c1670" containerName="proxy-httpd" containerID="cri-o://c7806d57be37bee52324a9bac80dd509d8d4037059a0be886368008b1c20bf88" gracePeriod=30 Dec 09 09:09:41 crc kubenswrapper[4786]: I1209 09:09:41.080120 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9977d7da-e014-4be3-bf7f-0740d28c1670" containerName="ceilometer-central-agent" containerID="cri-o://f3e60cb064f6add2762a162367a3d972de0cdf5c1bbf08711df9170799e92e3c" gracePeriod=30 Dec 09 09:09:41 crc kubenswrapper[4786]: I1209 09:09:41.445153 4786 generic.go:334] "Generic (PLEG): container finished" podID="9977d7da-e014-4be3-bf7f-0740d28c1670" containerID="c7806d57be37bee52324a9bac80dd509d8d4037059a0be886368008b1c20bf88" exitCode=0 Dec 09 09:09:41 crc kubenswrapper[4786]: I1209 09:09:41.445407 4786 generic.go:334] "Generic (PLEG): container finished" podID="9977d7da-e014-4be3-bf7f-0740d28c1670" containerID="72685c6453673550f61b51fa8100c8fa1c9791b7a9ee91ac067da8d870fa648b" exitCode=2 Dec 09 09:09:41 crc kubenswrapper[4786]: I1209 09:09:41.445500 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9977d7da-e014-4be3-bf7f-0740d28c1670","Type":"ContainerDied","Data":"c7806d57be37bee52324a9bac80dd509d8d4037059a0be886368008b1c20bf88"} Dec 09 09:09:41 crc kubenswrapper[4786]: I1209 09:09:41.445531 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9977d7da-e014-4be3-bf7f-0740d28c1670","Type":"ContainerDied","Data":"72685c6453673550f61b51fa8100c8fa1c9791b7a9ee91ac067da8d870fa648b"} Dec 09 09:09:41 crc kubenswrapper[4786]: I1209 09:09:41.491099 4786 generic.go:334] "Generic 
(PLEG): container finished" podID="12b8d711-d0e0-4a56-9363-42d5a5b4f4a1" containerID="65f003c95ca1a3efcc4a111dc9b48e5aab15413babdd96072923fa5bff4a1223" exitCode=0 Dec 09 09:09:41 crc kubenswrapper[4786]: I1209 09:09:41.491145 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"12b8d711-d0e0-4a56-9363-42d5a5b4f4a1","Type":"ContainerDied","Data":"65f003c95ca1a3efcc4a111dc9b48e5aab15413babdd96072923fa5bff4a1223"} Dec 09 09:09:41 crc kubenswrapper[4786]: I1209 09:09:41.739578 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 09 09:09:41 crc kubenswrapper[4786]: I1209 09:09:41.847647 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12b8d711-d0e0-4a56-9363-42d5a5b4f4a1-logs\") pod \"12b8d711-d0e0-4a56-9363-42d5a5b4f4a1\" (UID: \"12b8d711-d0e0-4a56-9363-42d5a5b4f4a1\") " Dec 09 09:09:41 crc kubenswrapper[4786]: I1209 09:09:41.848084 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12b8d711-d0e0-4a56-9363-42d5a5b4f4a1-config-data\") pod \"12b8d711-d0e0-4a56-9363-42d5a5b4f4a1\" (UID: \"12b8d711-d0e0-4a56-9363-42d5a5b4f4a1\") " Dec 09 09:09:41 crc kubenswrapper[4786]: I1209 09:09:41.848506 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6kvv\" (UniqueName: \"kubernetes.io/projected/12b8d711-d0e0-4a56-9363-42d5a5b4f4a1-kube-api-access-g6kvv\") pod \"12b8d711-d0e0-4a56-9363-42d5a5b4f4a1\" (UID: \"12b8d711-d0e0-4a56-9363-42d5a5b4f4a1\") " Dec 09 09:09:41 crc kubenswrapper[4786]: I1209 09:09:41.848617 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12b8d711-d0e0-4a56-9363-42d5a5b4f4a1-combined-ca-bundle\") pod \"12b8d711-d0e0-4a56-9363-42d5a5b4f4a1\" (UID: 
\"12b8d711-d0e0-4a56-9363-42d5a5b4f4a1\") " Dec 09 09:09:41 crc kubenswrapper[4786]: I1209 09:09:41.850220 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12b8d711-d0e0-4a56-9363-42d5a5b4f4a1-logs" (OuterVolumeSpecName: "logs") pod "12b8d711-d0e0-4a56-9363-42d5a5b4f4a1" (UID: "12b8d711-d0e0-4a56-9363-42d5a5b4f4a1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:09:41 crc kubenswrapper[4786]: I1209 09:09:41.857639 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12b8d711-d0e0-4a56-9363-42d5a5b4f4a1-kube-api-access-g6kvv" (OuterVolumeSpecName: "kube-api-access-g6kvv") pod "12b8d711-d0e0-4a56-9363-42d5a5b4f4a1" (UID: "12b8d711-d0e0-4a56-9363-42d5a5b4f4a1"). InnerVolumeSpecName "kube-api-access-g6kvv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:09:41 crc kubenswrapper[4786]: I1209 09:09:41.907497 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12b8d711-d0e0-4a56-9363-42d5a5b4f4a1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "12b8d711-d0e0-4a56-9363-42d5a5b4f4a1" (UID: "12b8d711-d0e0-4a56-9363-42d5a5b4f4a1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:09:41 crc kubenswrapper[4786]: I1209 09:09:41.912166 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/12b8d711-d0e0-4a56-9363-42d5a5b4f4a1-config-data" (OuterVolumeSpecName: "config-data") pod "12b8d711-d0e0-4a56-9363-42d5a5b4f4a1" (UID: "12b8d711-d0e0-4a56-9363-42d5a5b4f4a1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:09:41 crc kubenswrapper[4786]: I1209 09:09:41.951032 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6kvv\" (UniqueName: \"kubernetes.io/projected/12b8d711-d0e0-4a56-9363-42d5a5b4f4a1-kube-api-access-g6kvv\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:41 crc kubenswrapper[4786]: I1209 09:09:41.951071 4786 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12b8d711-d0e0-4a56-9363-42d5a5b4f4a1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:41 crc kubenswrapper[4786]: I1209 09:09:41.951080 4786 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/12b8d711-d0e0-4a56-9363-42d5a5b4f4a1-logs\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:41 crc kubenswrapper[4786]: I1209 09:09:41.951092 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12b8d711-d0e0-4a56-9363-42d5a5b4f4a1-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:42 crc kubenswrapper[4786]: I1209 09:09:42.902892 4786 generic.go:334] "Generic (PLEG): container finished" podID="9977d7da-e014-4be3-bf7f-0740d28c1670" containerID="f3e60cb064f6add2762a162367a3d972de0cdf5c1bbf08711df9170799e92e3c" exitCode=0 Dec 09 09:09:42 crc kubenswrapper[4786]: I1209 09:09:42.903220 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9977d7da-e014-4be3-bf7f-0740d28c1670","Type":"ContainerDied","Data":"f3e60cb064f6add2762a162367a3d972de0cdf5c1bbf08711df9170799e92e3c"} Dec 09 09:09:42 crc kubenswrapper[4786]: I1209 09:09:42.911551 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"12b8d711-d0e0-4a56-9363-42d5a5b4f4a1","Type":"ContainerDied","Data":"3b8803fae016b4bbb7fed61935ed193019c841396979209834f7b207e71c7ea8"} Dec 09 09:09:42 crc 
kubenswrapper[4786]: I1209 09:09:42.911612 4786 scope.go:117] "RemoveContainer" containerID="65f003c95ca1a3efcc4a111dc9b48e5aab15413babdd96072923fa5bff4a1223" Dec 09 09:09:42 crc kubenswrapper[4786]: I1209 09:09:42.911831 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.067035 4786 scope.go:117] "RemoveContainer" containerID="b52bfdcc3c90c78d023f89a6d18a5969a59f8f5502b6ae47663b4aa39d336504" Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.081988 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.097552 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.125606 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 09 09:09:43 crc kubenswrapper[4786]: E1209 09:09:43.126298 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12b8d711-d0e0-4a56-9363-42d5a5b4f4a1" containerName="nova-api-api" Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.126320 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="12b8d711-d0e0-4a56-9363-42d5a5b4f4a1" containerName="nova-api-api" Dec 09 09:09:43 crc kubenswrapper[4786]: E1209 09:09:43.126352 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12b8d711-d0e0-4a56-9363-42d5a5b4f4a1" containerName="nova-api-log" Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.126360 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="12b8d711-d0e0-4a56-9363-42d5a5b4f4a1" containerName="nova-api-log" Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.126622 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="12b8d711-d0e0-4a56-9363-42d5a5b4f4a1" containerName="nova-api-log" Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.126647 4786 
memory_manager.go:354] "RemoveStaleState removing state" podUID="12b8d711-d0e0-4a56-9363-42d5a5b4f4a1" containerName="nova-api-api" Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.128157 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.131041 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.131271 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.131376 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.136541 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.187033 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5d2t\" (UniqueName: \"kubernetes.io/projected/c56631a6-e349-4f07-a138-ef8f79546a0d-kube-api-access-z5d2t\") pod \"nova-api-0\" (UID: \"c56631a6-e349-4f07-a138-ef8f79546a0d\") " pod="openstack/nova-api-0" Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.187226 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c56631a6-e349-4f07-a138-ef8f79546a0d-logs\") pod \"nova-api-0\" (UID: \"c56631a6-e349-4f07-a138-ef8f79546a0d\") " pod="openstack/nova-api-0" Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.187308 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c56631a6-e349-4f07-a138-ef8f79546a0d-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"c56631a6-e349-4f07-a138-ef8f79546a0d\") " pod="openstack/nova-api-0" Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.187336 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c56631a6-e349-4f07-a138-ef8f79546a0d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c56631a6-e349-4f07-a138-ef8f79546a0d\") " pod="openstack/nova-api-0" Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.187362 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c56631a6-e349-4f07-a138-ef8f79546a0d-public-tls-certs\") pod \"nova-api-0\" (UID: \"c56631a6-e349-4f07-a138-ef8f79546a0d\") " pod="openstack/nova-api-0" Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.187398 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c56631a6-e349-4f07-a138-ef8f79546a0d-config-data\") pod \"nova-api-0\" (UID: \"c56631a6-e349-4f07-a138-ef8f79546a0d\") " pod="openstack/nova-api-0" Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.200092 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12b8d711-d0e0-4a56-9363-42d5a5b4f4a1" path="/var/lib/kubelet/pods/12b8d711-d0e0-4a56-9363-42d5a5b4f4a1/volumes" Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.289124 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c56631a6-e349-4f07-a138-ef8f79546a0d-logs\") pod \"nova-api-0\" (UID: \"c56631a6-e349-4f07-a138-ef8f79546a0d\") " pod="openstack/nova-api-0" Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.289235 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c56631a6-e349-4f07-a138-ef8f79546a0d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c56631a6-e349-4f07-a138-ef8f79546a0d\") " pod="openstack/nova-api-0" Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.289263 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c56631a6-e349-4f07-a138-ef8f79546a0d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c56631a6-e349-4f07-a138-ef8f79546a0d\") " pod="openstack/nova-api-0" Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.289283 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c56631a6-e349-4f07-a138-ef8f79546a0d-public-tls-certs\") pod \"nova-api-0\" (UID: \"c56631a6-e349-4f07-a138-ef8f79546a0d\") " pod="openstack/nova-api-0" Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.289312 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c56631a6-e349-4f07-a138-ef8f79546a0d-config-data\") pod \"nova-api-0\" (UID: \"c56631a6-e349-4f07-a138-ef8f79546a0d\") " pod="openstack/nova-api-0" Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.289393 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5d2t\" (UniqueName: \"kubernetes.io/projected/c56631a6-e349-4f07-a138-ef8f79546a0d-kube-api-access-z5d2t\") pod \"nova-api-0\" (UID: \"c56631a6-e349-4f07-a138-ef8f79546a0d\") " pod="openstack/nova-api-0" Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.289837 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c56631a6-e349-4f07-a138-ef8f79546a0d-logs\") pod \"nova-api-0\" (UID: \"c56631a6-e349-4f07-a138-ef8f79546a0d\") " pod="openstack/nova-api-0" Dec 09 09:09:43 crc kubenswrapper[4786]: E1209 09:09:43.295957 4786 
cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12b8d711_d0e0_4a56_9363_42d5a5b4f4a1.slice/crio-3b8803fae016b4bbb7fed61935ed193019c841396979209834f7b207e71c7ea8\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12b8d711_d0e0_4a56_9363_42d5a5b4f4a1.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9977d7da_e014_4be3_bf7f_0740d28c1670.slice/crio-conmon-dc903d6f90a0ba23a344b53c06d7a132ea6150d0ec8824341be67f21f97c9e5e.scope\": RecentStats: unable to find data in memory cache]" Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.308055 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c56631a6-e349-4f07-a138-ef8f79546a0d-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c56631a6-e349-4f07-a138-ef8f79546a0d\") " pod="openstack/nova-api-0" Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.308086 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c56631a6-e349-4f07-a138-ef8f79546a0d-public-tls-certs\") pod \"nova-api-0\" (UID: \"c56631a6-e349-4f07-a138-ef8f79546a0d\") " pod="openstack/nova-api-0" Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.308490 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c56631a6-e349-4f07-a138-ef8f79546a0d-config-data\") pod \"nova-api-0\" (UID: \"c56631a6-e349-4f07-a138-ef8f79546a0d\") " pod="openstack/nova-api-0" Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.312382 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/c56631a6-e349-4f07-a138-ef8f79546a0d-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c56631a6-e349-4f07-a138-ef8f79546a0d\") " pod="openstack/nova-api-0" Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.314689 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5d2t\" (UniqueName: \"kubernetes.io/projected/c56631a6-e349-4f07-a138-ef8f79546a0d-kube-api-access-z5d2t\") pod \"nova-api-0\" (UID: \"c56631a6-e349-4f07-a138-ef8f79546a0d\") " pod="openstack/nova-api-0" Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.445541 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.464477 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.493728 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9977d7da-e014-4be3-bf7f-0740d28c1670-sg-core-conf-yaml\") pod \"9977d7da-e014-4be3-bf7f-0740d28c1670\" (UID: \"9977d7da-e014-4be3-bf7f-0740d28c1670\") " Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.493798 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9977d7da-e014-4be3-bf7f-0740d28c1670-ceilometer-tls-certs\") pod \"9977d7da-e014-4be3-bf7f-0740d28c1670\" (UID: \"9977d7da-e014-4be3-bf7f-0740d28c1670\") " Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.493866 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9977d7da-e014-4be3-bf7f-0740d28c1670-combined-ca-bundle\") pod \"9977d7da-e014-4be3-bf7f-0740d28c1670\" (UID: \"9977d7da-e014-4be3-bf7f-0740d28c1670\") " Dec 09 09:09:43 crc kubenswrapper[4786]: 
I1209 09:09:43.494064 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9977d7da-e014-4be3-bf7f-0740d28c1670-scripts\") pod \"9977d7da-e014-4be3-bf7f-0740d28c1670\" (UID: \"9977d7da-e014-4be3-bf7f-0740d28c1670\") " Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.494288 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9977d7da-e014-4be3-bf7f-0740d28c1670-run-httpd\") pod \"9977d7da-e014-4be3-bf7f-0740d28c1670\" (UID: \"9977d7da-e014-4be3-bf7f-0740d28c1670\") " Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.494374 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9977d7da-e014-4be3-bf7f-0740d28c1670-config-data\") pod \"9977d7da-e014-4be3-bf7f-0740d28c1670\" (UID: \"9977d7da-e014-4be3-bf7f-0740d28c1670\") " Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.494438 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jmzx6\" (UniqueName: \"kubernetes.io/projected/9977d7da-e014-4be3-bf7f-0740d28c1670-kube-api-access-jmzx6\") pod \"9977d7da-e014-4be3-bf7f-0740d28c1670\" (UID: \"9977d7da-e014-4be3-bf7f-0740d28c1670\") " Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.494596 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9977d7da-e014-4be3-bf7f-0740d28c1670-log-httpd\") pod \"9977d7da-e014-4be3-bf7f-0740d28c1670\" (UID: \"9977d7da-e014-4be3-bf7f-0740d28c1670\") " Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.496349 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9977d7da-e014-4be3-bf7f-0740d28c1670-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9977d7da-e014-4be3-bf7f-0740d28c1670" (UID: 
"9977d7da-e014-4be3-bf7f-0740d28c1670"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.497231 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9977d7da-e014-4be3-bf7f-0740d28c1670-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9977d7da-e014-4be3-bf7f-0740d28c1670" (UID: "9977d7da-e014-4be3-bf7f-0740d28c1670"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.506637 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9977d7da-e014-4be3-bf7f-0740d28c1670-scripts" (OuterVolumeSpecName: "scripts") pod "9977d7da-e014-4be3-bf7f-0740d28c1670" (UID: "9977d7da-e014-4be3-bf7f-0740d28c1670"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.512125 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9977d7da-e014-4be3-bf7f-0740d28c1670-kube-api-access-jmzx6" (OuterVolumeSpecName: "kube-api-access-jmzx6") pod "9977d7da-e014-4be3-bf7f-0740d28c1670" (UID: "9977d7da-e014-4be3-bf7f-0740d28c1670"). InnerVolumeSpecName "kube-api-access-jmzx6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.538521 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9977d7da-e014-4be3-bf7f-0740d28c1670-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9977d7da-e014-4be3-bf7f-0740d28c1670" (UID: "9977d7da-e014-4be3-bf7f-0740d28c1670"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.616279 4786 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9977d7da-e014-4be3-bf7f-0740d28c1670-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.616318 4786 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9977d7da-e014-4be3-bf7f-0740d28c1670-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.616330 4786 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9977d7da-e014-4be3-bf7f-0740d28c1670-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.616347 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jmzx6\" (UniqueName: \"kubernetes.io/projected/9977d7da-e014-4be3-bf7f-0740d28c1670-kube-api-access-jmzx6\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.616414 4786 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9977d7da-e014-4be3-bf7f-0740d28c1670-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.624775 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9977d7da-e014-4be3-bf7f-0740d28c1670-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "9977d7da-e014-4be3-bf7f-0740d28c1670" (UID: "9977d7da-e014-4be3-bf7f-0740d28c1670"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.698637 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9977d7da-e014-4be3-bf7f-0740d28c1670-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9977d7da-e014-4be3-bf7f-0740d28c1670" (UID: "9977d7da-e014-4be3-bf7f-0740d28c1670"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.725139 4786 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9977d7da-e014-4be3-bf7f-0740d28c1670-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.725177 4786 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9977d7da-e014-4be3-bf7f-0740d28c1670-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.730282 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9977d7da-e014-4be3-bf7f-0740d28c1670-config-data" (OuterVolumeSpecName: "config-data") pod "9977d7da-e014-4be3-bf7f-0740d28c1670" (UID: "9977d7da-e014-4be3-bf7f-0740d28c1670"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.827387 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9977d7da-e014-4be3-bf7f-0740d28c1670-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.923330 4786 generic.go:334] "Generic (PLEG): container finished" podID="9977d7da-e014-4be3-bf7f-0740d28c1670" containerID="dc903d6f90a0ba23a344b53c06d7a132ea6150d0ec8824341be67f21f97c9e5e" exitCode=0 Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.923398 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9977d7da-e014-4be3-bf7f-0740d28c1670","Type":"ContainerDied","Data":"dc903d6f90a0ba23a344b53c06d7a132ea6150d0ec8824341be67f21f97c9e5e"} Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.923446 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9977d7da-e014-4be3-bf7f-0740d28c1670","Type":"ContainerDied","Data":"7492edcc424c78cae10e8445bcd2ce219fe933039bfbaf48414397c06ca5aa86"} Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.923465 4786 scope.go:117] "RemoveContainer" containerID="c7806d57be37bee52324a9bac80dd509d8d4037059a0be886368008b1c20bf88" Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.923577 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 09:09:43 crc kubenswrapper[4786]: I1209 09:09:43.973331 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 09 09:09:44 crc kubenswrapper[4786]: I1209 09:09:44.070872 4786 scope.go:117] "RemoveContainer" containerID="72685c6453673550f61b51fa8100c8fa1c9791b7a9ee91ac067da8d870fa648b" Dec 09 09:09:44 crc kubenswrapper[4786]: I1209 09:09:44.085852 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 09 09:09:44 crc kubenswrapper[4786]: W1209 09:09:44.103060 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc56631a6_e349_4f07_a138_ef8f79546a0d.slice/crio-39757961d8773c7d99ba883102cccbebad220dd78edc033737ac69561b185af3 WatchSource:0}: Error finding container 39757961d8773c7d99ba883102cccbebad220dd78edc033737ac69561b185af3: Status 404 returned error can't find the container with id 39757961d8773c7d99ba883102cccbebad220dd78edc033737ac69561b185af3 Dec 09 09:09:44 crc kubenswrapper[4786]: I1209 09:09:44.106638 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 09 09:09:44 crc kubenswrapper[4786]: E1209 09:09:44.107285 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9977d7da-e014-4be3-bf7f-0740d28c1670" containerName="sg-core" Dec 09 09:09:44 crc kubenswrapper[4786]: I1209 09:09:44.107306 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="9977d7da-e014-4be3-bf7f-0740d28c1670" containerName="sg-core" Dec 09 09:09:44 crc kubenswrapper[4786]: E1209 09:09:44.107351 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9977d7da-e014-4be3-bf7f-0740d28c1670" containerName="proxy-httpd" Dec 09 09:09:44 crc kubenswrapper[4786]: I1209 09:09:44.107362 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="9977d7da-e014-4be3-bf7f-0740d28c1670" containerName="proxy-httpd" Dec 09 
09:09:44 crc kubenswrapper[4786]: E1209 09:09:44.107394 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9977d7da-e014-4be3-bf7f-0740d28c1670" containerName="ceilometer-central-agent" Dec 09 09:09:44 crc kubenswrapper[4786]: I1209 09:09:44.107403 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="9977d7da-e014-4be3-bf7f-0740d28c1670" containerName="ceilometer-central-agent" Dec 09 09:09:44 crc kubenswrapper[4786]: E1209 09:09:44.107420 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9977d7da-e014-4be3-bf7f-0740d28c1670" containerName="ceilometer-notification-agent" Dec 09 09:09:44 crc kubenswrapper[4786]: I1209 09:09:44.107448 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="9977d7da-e014-4be3-bf7f-0740d28c1670" containerName="ceilometer-notification-agent" Dec 09 09:09:44 crc kubenswrapper[4786]: I1209 09:09:44.107716 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="9977d7da-e014-4be3-bf7f-0740d28c1670" containerName="sg-core" Dec 09 09:09:44 crc kubenswrapper[4786]: I1209 09:09:44.107742 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="9977d7da-e014-4be3-bf7f-0740d28c1670" containerName="proxy-httpd" Dec 09 09:09:44 crc kubenswrapper[4786]: I1209 09:09:44.107766 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="9977d7da-e014-4be3-bf7f-0740d28c1670" containerName="ceilometer-central-agent" Dec 09 09:09:44 crc kubenswrapper[4786]: I1209 09:09:44.107794 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="9977d7da-e014-4be3-bf7f-0740d28c1670" containerName="ceilometer-notification-agent" Dec 09 09:09:44 crc kubenswrapper[4786]: I1209 09:09:44.110356 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 09:09:44 crc kubenswrapper[4786]: I1209 09:09:44.124879 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 09 09:09:44 crc kubenswrapper[4786]: I1209 09:09:44.126023 4786 scope.go:117] "RemoveContainer" containerID="dc903d6f90a0ba23a344b53c06d7a132ea6150d0ec8824341be67f21f97c9e5e" Dec 09 09:09:44 crc kubenswrapper[4786]: I1209 09:09:44.126066 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 09 09:09:44 crc kubenswrapper[4786]: I1209 09:09:44.126495 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 09 09:09:44 crc kubenswrapper[4786]: I1209 09:09:44.126718 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 09 09:09:44 crc kubenswrapper[4786]: I1209 09:09:44.145431 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 09:09:44 crc kubenswrapper[4786]: I1209 09:09:44.178390 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h45g7\" (UniqueName: \"kubernetes.io/projected/cffa1372-a308-4145-a2ab-8e320fc5d296-kube-api-access-h45g7\") pod \"ceilometer-0\" (UID: \"cffa1372-a308-4145-a2ab-8e320fc5d296\") " pod="openstack/ceilometer-0" Dec 09 09:09:44 crc kubenswrapper[4786]: I1209 09:09:44.178516 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cffa1372-a308-4145-a2ab-8e320fc5d296-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cffa1372-a308-4145-a2ab-8e320fc5d296\") " pod="openstack/ceilometer-0" Dec 09 09:09:44 crc kubenswrapper[4786]: I1209 09:09:44.178594 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/cffa1372-a308-4145-a2ab-8e320fc5d296-log-httpd\") pod \"ceilometer-0\" (UID: \"cffa1372-a308-4145-a2ab-8e320fc5d296\") " pod="openstack/ceilometer-0" Dec 09 09:09:44 crc kubenswrapper[4786]: I1209 09:09:44.178637 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cffa1372-a308-4145-a2ab-8e320fc5d296-scripts\") pod \"ceilometer-0\" (UID: \"cffa1372-a308-4145-a2ab-8e320fc5d296\") " pod="openstack/ceilometer-0" Dec 09 09:09:44 crc kubenswrapper[4786]: I1209 09:09:44.178654 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cffa1372-a308-4145-a2ab-8e320fc5d296-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cffa1372-a308-4145-a2ab-8e320fc5d296\") " pod="openstack/ceilometer-0" Dec 09 09:09:44 crc kubenswrapper[4786]: I1209 09:09:44.178713 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cffa1372-a308-4145-a2ab-8e320fc5d296-run-httpd\") pod \"ceilometer-0\" (UID: \"cffa1372-a308-4145-a2ab-8e320fc5d296\") " pod="openstack/ceilometer-0" Dec 09 09:09:44 crc kubenswrapper[4786]: I1209 09:09:44.178732 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cffa1372-a308-4145-a2ab-8e320fc5d296-config-data\") pod \"ceilometer-0\" (UID: \"cffa1372-a308-4145-a2ab-8e320fc5d296\") " pod="openstack/ceilometer-0" Dec 09 09:09:44 crc kubenswrapper[4786]: I1209 09:09:44.178759 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cffa1372-a308-4145-a2ab-8e320fc5d296-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: 
\"cffa1372-a308-4145-a2ab-8e320fc5d296\") " pod="openstack/ceilometer-0" Dec 09 09:09:44 crc kubenswrapper[4786]: I1209 09:09:44.197525 4786 scope.go:117] "RemoveContainer" containerID="f3e60cb064f6add2762a162367a3d972de0cdf5c1bbf08711df9170799e92e3c" Dec 09 09:09:44 crc kubenswrapper[4786]: I1209 09:09:44.228677 4786 scope.go:117] "RemoveContainer" containerID="c7806d57be37bee52324a9bac80dd509d8d4037059a0be886368008b1c20bf88" Dec 09 09:09:44 crc kubenswrapper[4786]: E1209 09:09:44.229352 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7806d57be37bee52324a9bac80dd509d8d4037059a0be886368008b1c20bf88\": container with ID starting with c7806d57be37bee52324a9bac80dd509d8d4037059a0be886368008b1c20bf88 not found: ID does not exist" containerID="c7806d57be37bee52324a9bac80dd509d8d4037059a0be886368008b1c20bf88" Dec 09 09:09:44 crc kubenswrapper[4786]: I1209 09:09:44.229449 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7806d57be37bee52324a9bac80dd509d8d4037059a0be886368008b1c20bf88"} err="failed to get container status \"c7806d57be37bee52324a9bac80dd509d8d4037059a0be886368008b1c20bf88\": rpc error: code = NotFound desc = could not find container \"c7806d57be37bee52324a9bac80dd509d8d4037059a0be886368008b1c20bf88\": container with ID starting with c7806d57be37bee52324a9bac80dd509d8d4037059a0be886368008b1c20bf88 not found: ID does not exist" Dec 09 09:09:44 crc kubenswrapper[4786]: I1209 09:09:44.229504 4786 scope.go:117] "RemoveContainer" containerID="72685c6453673550f61b51fa8100c8fa1c9791b7a9ee91ac067da8d870fa648b" Dec 09 09:09:44 crc kubenswrapper[4786]: E1209 09:09:44.230783 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"72685c6453673550f61b51fa8100c8fa1c9791b7a9ee91ac067da8d870fa648b\": container with ID starting with 
72685c6453673550f61b51fa8100c8fa1c9791b7a9ee91ac067da8d870fa648b not found: ID does not exist" containerID="72685c6453673550f61b51fa8100c8fa1c9791b7a9ee91ac067da8d870fa648b" Dec 09 09:09:44 crc kubenswrapper[4786]: I1209 09:09:44.230827 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"72685c6453673550f61b51fa8100c8fa1c9791b7a9ee91ac067da8d870fa648b"} err="failed to get container status \"72685c6453673550f61b51fa8100c8fa1c9791b7a9ee91ac067da8d870fa648b\": rpc error: code = NotFound desc = could not find container \"72685c6453673550f61b51fa8100c8fa1c9791b7a9ee91ac067da8d870fa648b\": container with ID starting with 72685c6453673550f61b51fa8100c8fa1c9791b7a9ee91ac067da8d870fa648b not found: ID does not exist" Dec 09 09:09:44 crc kubenswrapper[4786]: I1209 09:09:44.230917 4786 scope.go:117] "RemoveContainer" containerID="dc903d6f90a0ba23a344b53c06d7a132ea6150d0ec8824341be67f21f97c9e5e" Dec 09 09:09:44 crc kubenswrapper[4786]: E1209 09:09:44.231248 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc903d6f90a0ba23a344b53c06d7a132ea6150d0ec8824341be67f21f97c9e5e\": container with ID starting with dc903d6f90a0ba23a344b53c06d7a132ea6150d0ec8824341be67f21f97c9e5e not found: ID does not exist" containerID="dc903d6f90a0ba23a344b53c06d7a132ea6150d0ec8824341be67f21f97c9e5e" Dec 09 09:09:44 crc kubenswrapper[4786]: I1209 09:09:44.231280 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc903d6f90a0ba23a344b53c06d7a132ea6150d0ec8824341be67f21f97c9e5e"} err="failed to get container status \"dc903d6f90a0ba23a344b53c06d7a132ea6150d0ec8824341be67f21f97c9e5e\": rpc error: code = NotFound desc = could not find container \"dc903d6f90a0ba23a344b53c06d7a132ea6150d0ec8824341be67f21f97c9e5e\": container with ID starting with dc903d6f90a0ba23a344b53c06d7a132ea6150d0ec8824341be67f21f97c9e5e not found: ID does not 
exist" Dec 09 09:09:44 crc kubenswrapper[4786]: I1209 09:09:44.231294 4786 scope.go:117] "RemoveContainer" containerID="f3e60cb064f6add2762a162367a3d972de0cdf5c1bbf08711df9170799e92e3c" Dec 09 09:09:44 crc kubenswrapper[4786]: E1209 09:09:44.231822 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3e60cb064f6add2762a162367a3d972de0cdf5c1bbf08711df9170799e92e3c\": container with ID starting with f3e60cb064f6add2762a162367a3d972de0cdf5c1bbf08711df9170799e92e3c not found: ID does not exist" containerID="f3e60cb064f6add2762a162367a3d972de0cdf5c1bbf08711df9170799e92e3c" Dec 09 09:09:44 crc kubenswrapper[4786]: I1209 09:09:44.231843 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3e60cb064f6add2762a162367a3d972de0cdf5c1bbf08711df9170799e92e3c"} err="failed to get container status \"f3e60cb064f6add2762a162367a3d972de0cdf5c1bbf08711df9170799e92e3c\": rpc error: code = NotFound desc = could not find container \"f3e60cb064f6add2762a162367a3d972de0cdf5c1bbf08711df9170799e92e3c\": container with ID starting with f3e60cb064f6add2762a162367a3d972de0cdf5c1bbf08711df9170799e92e3c not found: ID does not exist" Dec 09 09:09:44 crc kubenswrapper[4786]: I1209 09:09:44.280698 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cffa1372-a308-4145-a2ab-8e320fc5d296-log-httpd\") pod \"ceilometer-0\" (UID: \"cffa1372-a308-4145-a2ab-8e320fc5d296\") " pod="openstack/ceilometer-0" Dec 09 09:09:44 crc kubenswrapper[4786]: I1209 09:09:44.281245 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cffa1372-a308-4145-a2ab-8e320fc5d296-log-httpd\") pod \"ceilometer-0\" (UID: \"cffa1372-a308-4145-a2ab-8e320fc5d296\") " pod="openstack/ceilometer-0" Dec 09 09:09:44 crc kubenswrapper[4786]: I1209 09:09:44.281916 4786 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cffa1372-a308-4145-a2ab-8e320fc5d296-scripts\") pod \"ceilometer-0\" (UID: \"cffa1372-a308-4145-a2ab-8e320fc5d296\") " pod="openstack/ceilometer-0" Dec 09 09:09:44 crc kubenswrapper[4786]: I1209 09:09:44.281986 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cffa1372-a308-4145-a2ab-8e320fc5d296-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cffa1372-a308-4145-a2ab-8e320fc5d296\") " pod="openstack/ceilometer-0" Dec 09 09:09:44 crc kubenswrapper[4786]: I1209 09:09:44.282321 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cffa1372-a308-4145-a2ab-8e320fc5d296-run-httpd\") pod \"ceilometer-0\" (UID: \"cffa1372-a308-4145-a2ab-8e320fc5d296\") " pod="openstack/ceilometer-0" Dec 09 09:09:44 crc kubenswrapper[4786]: I1209 09:09:44.282359 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cffa1372-a308-4145-a2ab-8e320fc5d296-config-data\") pod \"ceilometer-0\" (UID: \"cffa1372-a308-4145-a2ab-8e320fc5d296\") " pod="openstack/ceilometer-0" Dec 09 09:09:44 crc kubenswrapper[4786]: I1209 09:09:44.282479 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cffa1372-a308-4145-a2ab-8e320fc5d296-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"cffa1372-a308-4145-a2ab-8e320fc5d296\") " pod="openstack/ceilometer-0" Dec 09 09:09:44 crc kubenswrapper[4786]: I1209 09:09:44.282700 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h45g7\" (UniqueName: \"kubernetes.io/projected/cffa1372-a308-4145-a2ab-8e320fc5d296-kube-api-access-h45g7\") pod \"ceilometer-0\" (UID: 
\"cffa1372-a308-4145-a2ab-8e320fc5d296\") " pod="openstack/ceilometer-0" Dec 09 09:09:44 crc kubenswrapper[4786]: I1209 09:09:44.282829 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cffa1372-a308-4145-a2ab-8e320fc5d296-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cffa1372-a308-4145-a2ab-8e320fc5d296\") " pod="openstack/ceilometer-0" Dec 09 09:09:44 crc kubenswrapper[4786]: I1209 09:09:44.283081 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cffa1372-a308-4145-a2ab-8e320fc5d296-run-httpd\") pod \"ceilometer-0\" (UID: \"cffa1372-a308-4145-a2ab-8e320fc5d296\") " pod="openstack/ceilometer-0" Dec 09 09:09:44 crc kubenswrapper[4786]: I1209 09:09:44.287841 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cffa1372-a308-4145-a2ab-8e320fc5d296-scripts\") pod \"ceilometer-0\" (UID: \"cffa1372-a308-4145-a2ab-8e320fc5d296\") " pod="openstack/ceilometer-0" Dec 09 09:09:44 crc kubenswrapper[4786]: I1209 09:09:44.288330 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cffa1372-a308-4145-a2ab-8e320fc5d296-config-data\") pod \"ceilometer-0\" (UID: \"cffa1372-a308-4145-a2ab-8e320fc5d296\") " pod="openstack/ceilometer-0" Dec 09 09:09:44 crc kubenswrapper[4786]: I1209 09:09:44.289660 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/cffa1372-a308-4145-a2ab-8e320fc5d296-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"cffa1372-a308-4145-a2ab-8e320fc5d296\") " pod="openstack/ceilometer-0" Dec 09 09:09:44 crc kubenswrapper[4786]: I1209 09:09:44.290868 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/cffa1372-a308-4145-a2ab-8e320fc5d296-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cffa1372-a308-4145-a2ab-8e320fc5d296\") " pod="openstack/ceilometer-0" Dec 09 09:09:44 crc kubenswrapper[4786]: I1209 09:09:44.300026 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cffa1372-a308-4145-a2ab-8e320fc5d296-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cffa1372-a308-4145-a2ab-8e320fc5d296\") " pod="openstack/ceilometer-0" Dec 09 09:09:44 crc kubenswrapper[4786]: I1209 09:09:44.312082 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h45g7\" (UniqueName: \"kubernetes.io/projected/cffa1372-a308-4145-a2ab-8e320fc5d296-kube-api-access-h45g7\") pod \"ceilometer-0\" (UID: \"cffa1372-a308-4145-a2ab-8e320fc5d296\") " pod="openstack/ceilometer-0" Dec 09 09:09:44 crc kubenswrapper[4786]: I1209 09:09:44.449578 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 09 09:09:44 crc kubenswrapper[4786]: I1209 09:09:44.952110 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c56631a6-e349-4f07-a138-ef8f79546a0d","Type":"ContainerStarted","Data":"bf77098224efc7a826d04efa7fc97722822055e9acf9f69fb079bfbb6fd649b0"} Dec 09 09:09:44 crc kubenswrapper[4786]: I1209 09:09:44.952442 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c56631a6-e349-4f07-a138-ef8f79546a0d","Type":"ContainerStarted","Data":"2e9cbb9c32799cef39f90e29bcf7d4403f87a02e31244d6f4a61b8b711125a2d"} Dec 09 09:09:44 crc kubenswrapper[4786]: I1209 09:09:44.952456 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c56631a6-e349-4f07-a138-ef8f79546a0d","Type":"ContainerStarted","Data":"39757961d8773c7d99ba883102cccbebad220dd78edc033737ac69561b185af3"} Dec 09 09:09:44 crc kubenswrapper[4786]: I1209 09:09:44.982227 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=1.982209685 podStartE2EDuration="1.982209685s" podCreationTimestamp="2025-12-09 09:09:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:09:44.980684217 +0000 UTC m=+1550.864305443" watchObservedRunningTime="2025-12-09 09:09:44.982209685 +0000 UTC m=+1550.865830911" Dec 09 09:09:45 crc kubenswrapper[4786]: I1209 09:09:45.044042 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 09 09:09:45 crc kubenswrapper[4786]: I1209 09:09:45.210996 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9977d7da-e014-4be3-bf7f-0740d28c1670" path="/var/lib/kubelet/pods/9977d7da-e014-4be3-bf7f-0740d28c1670/volumes" Dec 09 09:09:45 crc kubenswrapper[4786]: I1209 09:09:45.943233 4786 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Dec 09 09:09:45 crc kubenswrapper[4786]: I1209 09:09:45.967078 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cffa1372-a308-4145-a2ab-8e320fc5d296","Type":"ContainerStarted","Data":"139b34666b4f18b5d1cf530cb317023289a05ed0cf8c0e4c0b0a4344dd44b523"} Dec 09 09:09:45 crc kubenswrapper[4786]: I1209 09:09:45.967138 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cffa1372-a308-4145-a2ab-8e320fc5d296","Type":"ContainerStarted","Data":"73685b72bf1a64e38754c6b532ea4011885b45c40d3c7495c194558eb2f3ddc2"} Dec 09 09:09:45 crc kubenswrapper[4786]: I1209 09:09:45.967150 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cffa1372-a308-4145-a2ab-8e320fc5d296","Type":"ContainerStarted","Data":"0f1346badce71611bab5918142b0df57484e941dd501abdcdf25035f597483f6"} Dec 09 09:09:45 crc kubenswrapper[4786]: I1209 09:09:45.969039 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Dec 09 09:09:45 crc kubenswrapper[4786]: I1209 09:09:45.999455 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Dec 09 09:09:46 crc kubenswrapper[4786]: I1209 09:09:46.224090 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-4r7mx"] Dec 09 09:09:46 crc kubenswrapper[4786]: I1209 09:09:46.225907 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-4r7mx" Dec 09 09:09:46 crc kubenswrapper[4786]: I1209 09:09:46.228753 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Dec 09 09:09:46 crc kubenswrapper[4786]: I1209 09:09:46.228966 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Dec 09 09:09:46 crc kubenswrapper[4786]: I1209 09:09:46.236036 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brpdp\" (UniqueName: \"kubernetes.io/projected/a600e791-e33a-41b1-b5d0-2cc262bac81d-kube-api-access-brpdp\") pod \"nova-cell1-cell-mapping-4r7mx\" (UID: \"a600e791-e33a-41b1-b5d0-2cc262bac81d\") " pod="openstack/nova-cell1-cell-mapping-4r7mx" Dec 09 09:09:46 crc kubenswrapper[4786]: I1209 09:09:46.236194 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a600e791-e33a-41b1-b5d0-2cc262bac81d-config-data\") pod \"nova-cell1-cell-mapping-4r7mx\" (UID: \"a600e791-e33a-41b1-b5d0-2cc262bac81d\") " pod="openstack/nova-cell1-cell-mapping-4r7mx" Dec 09 09:09:46 crc kubenswrapper[4786]: I1209 09:09:46.236350 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a600e791-e33a-41b1-b5d0-2cc262bac81d-scripts\") pod \"nova-cell1-cell-mapping-4r7mx\" (UID: \"a600e791-e33a-41b1-b5d0-2cc262bac81d\") " pod="openstack/nova-cell1-cell-mapping-4r7mx" Dec 09 09:09:46 crc kubenswrapper[4786]: I1209 09:09:46.236405 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a600e791-e33a-41b1-b5d0-2cc262bac81d-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-4r7mx\" (UID: \"a600e791-e33a-41b1-b5d0-2cc262bac81d\") 
" pod="openstack/nova-cell1-cell-mapping-4r7mx" Dec 09 09:09:46 crc kubenswrapper[4786]: I1209 09:09:46.236648 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-4r7mx"] Dec 09 09:09:46 crc kubenswrapper[4786]: I1209 09:09:46.338258 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a600e791-e33a-41b1-b5d0-2cc262bac81d-scripts\") pod \"nova-cell1-cell-mapping-4r7mx\" (UID: \"a600e791-e33a-41b1-b5d0-2cc262bac81d\") " pod="openstack/nova-cell1-cell-mapping-4r7mx" Dec 09 09:09:46 crc kubenswrapper[4786]: I1209 09:09:46.338710 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a600e791-e33a-41b1-b5d0-2cc262bac81d-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-4r7mx\" (UID: \"a600e791-e33a-41b1-b5d0-2cc262bac81d\") " pod="openstack/nova-cell1-cell-mapping-4r7mx" Dec 09 09:09:46 crc kubenswrapper[4786]: I1209 09:09:46.338847 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brpdp\" (UniqueName: \"kubernetes.io/projected/a600e791-e33a-41b1-b5d0-2cc262bac81d-kube-api-access-brpdp\") pod \"nova-cell1-cell-mapping-4r7mx\" (UID: \"a600e791-e33a-41b1-b5d0-2cc262bac81d\") " pod="openstack/nova-cell1-cell-mapping-4r7mx" Dec 09 09:09:46 crc kubenswrapper[4786]: I1209 09:09:46.338921 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a600e791-e33a-41b1-b5d0-2cc262bac81d-config-data\") pod \"nova-cell1-cell-mapping-4r7mx\" (UID: \"a600e791-e33a-41b1-b5d0-2cc262bac81d\") " pod="openstack/nova-cell1-cell-mapping-4r7mx" Dec 09 09:09:46 crc kubenswrapper[4786]: I1209 09:09:46.342644 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/a600e791-e33a-41b1-b5d0-2cc262bac81d-scripts\") pod \"nova-cell1-cell-mapping-4r7mx\" (UID: \"a600e791-e33a-41b1-b5d0-2cc262bac81d\") " pod="openstack/nova-cell1-cell-mapping-4r7mx" Dec 09 09:09:46 crc kubenswrapper[4786]: I1209 09:09:46.342798 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a600e791-e33a-41b1-b5d0-2cc262bac81d-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-4r7mx\" (UID: \"a600e791-e33a-41b1-b5d0-2cc262bac81d\") " pod="openstack/nova-cell1-cell-mapping-4r7mx" Dec 09 09:09:46 crc kubenswrapper[4786]: I1209 09:09:46.344648 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a600e791-e33a-41b1-b5d0-2cc262bac81d-config-data\") pod \"nova-cell1-cell-mapping-4r7mx\" (UID: \"a600e791-e33a-41b1-b5d0-2cc262bac81d\") " pod="openstack/nova-cell1-cell-mapping-4r7mx" Dec 09 09:09:46 crc kubenswrapper[4786]: I1209 09:09:46.359888 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brpdp\" (UniqueName: \"kubernetes.io/projected/a600e791-e33a-41b1-b5d0-2cc262bac81d-kube-api-access-brpdp\") pod \"nova-cell1-cell-mapping-4r7mx\" (UID: \"a600e791-e33a-41b1-b5d0-2cc262bac81d\") " pod="openstack/nova-cell1-cell-mapping-4r7mx" Dec 09 09:09:46 crc kubenswrapper[4786]: I1209 09:09:46.549206 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-4r7mx" Dec 09 09:09:46 crc kubenswrapper[4786]: I1209 09:09:46.966314 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-854bf756b5-vh9h2" Dec 09 09:09:47 crc kubenswrapper[4786]: I1209 09:09:47.005856 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cffa1372-a308-4145-a2ab-8e320fc5d296","Type":"ContainerStarted","Data":"37c3c8d24bfc5f89f3dbb79eb41f7f06b2e74e06af11988e0edad54af0a6e4f4"} Dec 09 09:09:47 crc kubenswrapper[4786]: I1209 09:09:47.042469 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6845884987-mlcxv"] Dec 09 09:09:47 crc kubenswrapper[4786]: I1209 09:09:47.042757 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6845884987-mlcxv" podUID="9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2" containerName="dnsmasq-dns" containerID="cri-o://1769624d2080f734f667a82b17f5eac0b2a0d4914db55de2ac29b958ead564a6" gracePeriod=10 Dec 09 09:09:47 crc kubenswrapper[4786]: I1209 09:09:47.153624 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-4r7mx"] Dec 09 09:09:47 crc kubenswrapper[4786]: I1209 09:09:47.800494 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6845884987-mlcxv" Dec 09 09:09:47 crc kubenswrapper[4786]: I1209 09:09:47.922204 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2-config\") pod \"9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2\" (UID: \"9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2\") " Dec 09 09:09:47 crc kubenswrapper[4786]: I1209 09:09:47.922618 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vglrg\" (UniqueName: \"kubernetes.io/projected/9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2-kube-api-access-vglrg\") pod \"9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2\" (UID: \"9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2\") " Dec 09 09:09:47 crc kubenswrapper[4786]: I1209 09:09:47.923197 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2-ovsdbserver-nb\") pod \"9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2\" (UID: \"9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2\") " Dec 09 09:09:47 crc kubenswrapper[4786]: I1209 09:09:47.923404 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2-ovsdbserver-sb\") pod \"9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2\" (UID: \"9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2\") " Dec 09 09:09:47 crc kubenswrapper[4786]: I1209 09:09:47.923649 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2-dns-svc\") pod \"9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2\" (UID: \"9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2\") " Dec 09 09:09:47 crc kubenswrapper[4786]: I1209 09:09:47.923783 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" 
(UniqueName: \"kubernetes.io/configmap/9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2-dns-swift-storage-0\") pod \"9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2\" (UID: \"9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2\") " Dec 09 09:09:47 crc kubenswrapper[4786]: I1209 09:09:47.928688 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2-kube-api-access-vglrg" (OuterVolumeSpecName: "kube-api-access-vglrg") pod "9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2" (UID: "9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2"). InnerVolumeSpecName "kube-api-access-vglrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:09:48 crc kubenswrapper[4786]: I1209 09:09:48.009329 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2" (UID: "9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:09:48 crc kubenswrapper[4786]: I1209 09:09:48.018274 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2" (UID: "9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:09:48 crc kubenswrapper[4786]: I1209 09:09:48.025288 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2" (UID: "9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:09:48 crc kubenswrapper[4786]: I1209 09:09:48.027823 4786 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:48 crc kubenswrapper[4786]: I1209 09:09:48.027861 4786 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:48 crc kubenswrapper[4786]: I1209 09:09:48.027873 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vglrg\" (UniqueName: \"kubernetes.io/projected/9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2-kube-api-access-vglrg\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:48 crc kubenswrapper[4786]: I1209 09:09:48.027889 4786 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:48 crc kubenswrapper[4786]: I1209 09:09:48.030278 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-4r7mx" event={"ID":"a600e791-e33a-41b1-b5d0-2cc262bac81d","Type":"ContainerStarted","Data":"e249edd8100860c607c11c42a039efb72d042441be7f394caea03c6f2bf32cf3"} Dec 09 09:09:48 crc kubenswrapper[4786]: I1209 09:09:48.030355 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-4r7mx" event={"ID":"a600e791-e33a-41b1-b5d0-2cc262bac81d","Type":"ContainerStarted","Data":"0ca39e41b40d09c7244855a517bf1a1254f4580704ea0a1b69a43a0d6383b2c9"} Dec 09 09:09:48 crc kubenswrapper[4786]: I1209 09:09:48.032067 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2-config" (OuterVolumeSpecName: 
"config") pod "9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2" (UID: "9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:09:48 crc kubenswrapper[4786]: I1209 09:09:48.033413 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cffa1372-a308-4145-a2ab-8e320fc5d296","Type":"ContainerStarted","Data":"3819227f6d913112043760b43e0bb15ad337a11563efc1dd12f9bc772806b6e5"} Dec 09 09:09:48 crc kubenswrapper[4786]: I1209 09:09:48.034643 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 09 09:09:48 crc kubenswrapper[4786]: I1209 09:09:48.037450 4786 generic.go:334] "Generic (PLEG): container finished" podID="9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2" containerID="1769624d2080f734f667a82b17f5eac0b2a0d4914db55de2ac29b958ead564a6" exitCode=0 Dec 09 09:09:48 crc kubenswrapper[4786]: I1209 09:09:48.037524 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6845884987-mlcxv" event={"ID":"9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2","Type":"ContainerDied","Data":"1769624d2080f734f667a82b17f5eac0b2a0d4914db55de2ac29b958ead564a6"} Dec 09 09:09:48 crc kubenswrapper[4786]: I1209 09:09:48.037567 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6845884987-mlcxv" event={"ID":"9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2","Type":"ContainerDied","Data":"2bf114f8dc49580e118c7e370a5dbaeccf0d31900ad012029380e9a6a21431cf"} Dec 09 09:09:48 crc kubenswrapper[4786]: I1209 09:09:48.037584 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6845884987-mlcxv" Dec 09 09:09:48 crc kubenswrapper[4786]: I1209 09:09:48.037592 4786 scope.go:117] "RemoveContainer" containerID="1769624d2080f734f667a82b17f5eac0b2a0d4914db55de2ac29b958ead564a6" Dec 09 09:09:48 crc kubenswrapper[4786]: I1209 09:09:48.041617 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2" (UID: "9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:09:48 crc kubenswrapper[4786]: I1209 09:09:48.049450 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-4r7mx" podStartSLOduration=2.049409959 podStartE2EDuration="2.049409959s" podCreationTimestamp="2025-12-09 09:09:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:09:48.047817569 +0000 UTC m=+1553.931438795" watchObservedRunningTime="2025-12-09 09:09:48.049409959 +0000 UTC m=+1553.933031185" Dec 09 09:09:48 crc kubenswrapper[4786]: I1209 09:09:48.083351 4786 scope.go:117] "RemoveContainer" containerID="36405e28d9b62a117cc8500e243471c46a678993c121dfc18f83a260483cf7ab" Dec 09 09:09:48 crc kubenswrapper[4786]: I1209 09:09:48.099018 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.6185217979999997 podStartE2EDuration="5.098987681s" podCreationTimestamp="2025-12-09 09:09:43 +0000 UTC" firstStartedPulling="2025-12-09 09:09:45.064829938 +0000 UTC m=+1550.948451164" lastFinishedPulling="2025-12-09 09:09:47.545295821 +0000 UTC m=+1553.428917047" observedRunningTime="2025-12-09 09:09:48.07160371 +0000 UTC m=+1553.955224956" 
watchObservedRunningTime="2025-12-09 09:09:48.098987681 +0000 UTC m=+1553.982608917" Dec 09 09:09:48 crc kubenswrapper[4786]: I1209 09:09:48.115059 4786 scope.go:117] "RemoveContainer" containerID="1769624d2080f734f667a82b17f5eac0b2a0d4914db55de2ac29b958ead564a6" Dec 09 09:09:48 crc kubenswrapper[4786]: E1209 09:09:48.115599 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1769624d2080f734f667a82b17f5eac0b2a0d4914db55de2ac29b958ead564a6\": container with ID starting with 1769624d2080f734f667a82b17f5eac0b2a0d4914db55de2ac29b958ead564a6 not found: ID does not exist" containerID="1769624d2080f734f667a82b17f5eac0b2a0d4914db55de2ac29b958ead564a6" Dec 09 09:09:48 crc kubenswrapper[4786]: I1209 09:09:48.115630 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1769624d2080f734f667a82b17f5eac0b2a0d4914db55de2ac29b958ead564a6"} err="failed to get container status \"1769624d2080f734f667a82b17f5eac0b2a0d4914db55de2ac29b958ead564a6\": rpc error: code = NotFound desc = could not find container \"1769624d2080f734f667a82b17f5eac0b2a0d4914db55de2ac29b958ead564a6\": container with ID starting with 1769624d2080f734f667a82b17f5eac0b2a0d4914db55de2ac29b958ead564a6 not found: ID does not exist" Dec 09 09:09:48 crc kubenswrapper[4786]: I1209 09:09:48.115656 4786 scope.go:117] "RemoveContainer" containerID="36405e28d9b62a117cc8500e243471c46a678993c121dfc18f83a260483cf7ab" Dec 09 09:09:48 crc kubenswrapper[4786]: E1209 09:09:48.115951 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36405e28d9b62a117cc8500e243471c46a678993c121dfc18f83a260483cf7ab\": container with ID starting with 36405e28d9b62a117cc8500e243471c46a678993c121dfc18f83a260483cf7ab not found: ID does not exist" containerID="36405e28d9b62a117cc8500e243471c46a678993c121dfc18f83a260483cf7ab" Dec 09 09:09:48 crc 
kubenswrapper[4786]: I1209 09:09:48.115976 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36405e28d9b62a117cc8500e243471c46a678993c121dfc18f83a260483cf7ab"} err="failed to get container status \"36405e28d9b62a117cc8500e243471c46a678993c121dfc18f83a260483cf7ab\": rpc error: code = NotFound desc = could not find container \"36405e28d9b62a117cc8500e243471c46a678993c121dfc18f83a260483cf7ab\": container with ID starting with 36405e28d9b62a117cc8500e243471c46a678993c121dfc18f83a260483cf7ab not found: ID does not exist" Dec 09 09:09:48 crc kubenswrapper[4786]: I1209 09:09:48.130554 4786 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:48 crc kubenswrapper[4786]: I1209 09:09:48.130590 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2-config\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:48 crc kubenswrapper[4786]: I1209 09:09:48.410790 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6845884987-mlcxv"] Dec 09 09:09:48 crc kubenswrapper[4786]: I1209 09:09:48.437620 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6845884987-mlcxv"] Dec 09 09:09:48 crc kubenswrapper[4786]: I1209 09:09:48.993177 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6rlj8"] Dec 09 09:09:48 crc kubenswrapper[4786]: E1209 09:09:48.994042 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2" containerName="init" Dec 09 09:09:48 crc kubenswrapper[4786]: I1209 09:09:48.994065 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2" containerName="init" Dec 09 09:09:48 crc 
kubenswrapper[4786]: E1209 09:09:48.994087 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2" containerName="dnsmasq-dns" Dec 09 09:09:48 crc kubenswrapper[4786]: I1209 09:09:48.994093 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2" containerName="dnsmasq-dns" Dec 09 09:09:48 crc kubenswrapper[4786]: I1209 09:09:48.994347 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2" containerName="dnsmasq-dns" Dec 09 09:09:48 crc kubenswrapper[4786]: I1209 09:09:48.996149 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6rlj8" Dec 09 09:09:49 crc kubenswrapper[4786]: I1209 09:09:49.011557 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6rlj8"] Dec 09 09:09:49 crc kubenswrapper[4786]: I1209 09:09:49.151111 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/464ee67d-1d53-4a13-a8d7-52c1435dad62-utilities\") pod \"certified-operators-6rlj8\" (UID: \"464ee67d-1d53-4a13-a8d7-52c1435dad62\") " pod="openshift-marketplace/certified-operators-6rlj8" Dec 09 09:09:49 crc kubenswrapper[4786]: I1209 09:09:49.152414 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wss4x\" (UniqueName: \"kubernetes.io/projected/464ee67d-1d53-4a13-a8d7-52c1435dad62-kube-api-access-wss4x\") pod \"certified-operators-6rlj8\" (UID: \"464ee67d-1d53-4a13-a8d7-52c1435dad62\") " pod="openshift-marketplace/certified-operators-6rlj8" Dec 09 09:09:49 crc kubenswrapper[4786]: I1209 09:09:49.152638 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/464ee67d-1d53-4a13-a8d7-52c1435dad62-catalog-content\") pod \"certified-operators-6rlj8\" (UID: \"464ee67d-1d53-4a13-a8d7-52c1435dad62\") " pod="openshift-marketplace/certified-operators-6rlj8" Dec 09 09:09:49 crc kubenswrapper[4786]: I1209 09:09:49.203974 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2" path="/var/lib/kubelet/pods/9efaf5b0-aea7-4bbe-96e7-0ea7af9d1ac2/volumes" Dec 09 09:09:49 crc kubenswrapper[4786]: I1209 09:09:49.255641 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/464ee67d-1d53-4a13-a8d7-52c1435dad62-utilities\") pod \"certified-operators-6rlj8\" (UID: \"464ee67d-1d53-4a13-a8d7-52c1435dad62\") " pod="openshift-marketplace/certified-operators-6rlj8" Dec 09 09:09:49 crc kubenswrapper[4786]: I1209 09:09:49.255807 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wss4x\" (UniqueName: \"kubernetes.io/projected/464ee67d-1d53-4a13-a8d7-52c1435dad62-kube-api-access-wss4x\") pod \"certified-operators-6rlj8\" (UID: \"464ee67d-1d53-4a13-a8d7-52c1435dad62\") " pod="openshift-marketplace/certified-operators-6rlj8" Dec 09 09:09:49 crc kubenswrapper[4786]: I1209 09:09:49.256069 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/464ee67d-1d53-4a13-a8d7-52c1435dad62-catalog-content\") pod \"certified-operators-6rlj8\" (UID: \"464ee67d-1d53-4a13-a8d7-52c1435dad62\") " pod="openshift-marketplace/certified-operators-6rlj8" Dec 09 09:09:49 crc kubenswrapper[4786]: I1209 09:09:49.256525 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/464ee67d-1d53-4a13-a8d7-52c1435dad62-catalog-content\") pod \"certified-operators-6rlj8\" (UID: \"464ee67d-1d53-4a13-a8d7-52c1435dad62\") " 
pod="openshift-marketplace/certified-operators-6rlj8" Dec 09 09:09:49 crc kubenswrapper[4786]: I1209 09:09:49.256806 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/464ee67d-1d53-4a13-a8d7-52c1435dad62-utilities\") pod \"certified-operators-6rlj8\" (UID: \"464ee67d-1d53-4a13-a8d7-52c1435dad62\") " pod="openshift-marketplace/certified-operators-6rlj8" Dec 09 09:09:49 crc kubenswrapper[4786]: I1209 09:09:49.289843 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wss4x\" (UniqueName: \"kubernetes.io/projected/464ee67d-1d53-4a13-a8d7-52c1435dad62-kube-api-access-wss4x\") pod \"certified-operators-6rlj8\" (UID: \"464ee67d-1d53-4a13-a8d7-52c1435dad62\") " pod="openshift-marketplace/certified-operators-6rlj8" Dec 09 09:09:49 crc kubenswrapper[4786]: I1209 09:09:49.341721 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6rlj8" Dec 09 09:09:49 crc kubenswrapper[4786]: I1209 09:09:49.877900 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6rlj8"] Dec 09 09:09:49 crc kubenswrapper[4786]: W1209 09:09:49.887268 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod464ee67d_1d53_4a13_a8d7_52c1435dad62.slice/crio-43e353d67b6ce1e0da7870e72440434ca6210212daa258ecc3519c2ea4f7e5be WatchSource:0}: Error finding container 43e353d67b6ce1e0da7870e72440434ca6210212daa258ecc3519c2ea4f7e5be: Status 404 returned error can't find the container with id 43e353d67b6ce1e0da7870e72440434ca6210212daa258ecc3519c2ea4f7e5be Dec 09 09:09:50 crc kubenswrapper[4786]: I1209 09:09:50.074537 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6rlj8" 
event={"ID":"464ee67d-1d53-4a13-a8d7-52c1435dad62","Type":"ContainerStarted","Data":"43e353d67b6ce1e0da7870e72440434ca6210212daa258ecc3519c2ea4f7e5be"} Dec 09 09:09:51 crc kubenswrapper[4786]: I1209 09:09:51.092117 4786 generic.go:334] "Generic (PLEG): container finished" podID="464ee67d-1d53-4a13-a8d7-52c1435dad62" containerID="00ea23eae94cf09041f82386e44aa3f01e572f94376c492a4be24cfb55ba3171" exitCode=0 Dec 09 09:09:51 crc kubenswrapper[4786]: I1209 09:09:51.092305 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6rlj8" event={"ID":"464ee67d-1d53-4a13-a8d7-52c1435dad62","Type":"ContainerDied","Data":"00ea23eae94cf09041f82386e44aa3f01e572f94376c492a4be24cfb55ba3171"} Dec 09 09:09:52 crc kubenswrapper[4786]: I1209 09:09:52.105439 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6rlj8" event={"ID":"464ee67d-1d53-4a13-a8d7-52c1435dad62","Type":"ContainerStarted","Data":"e00b4e5352a9d344c3f82710ac2388d82cb0700348ff0257e4421ea533fb16f8"} Dec 09 09:09:53 crc kubenswrapper[4786]: I1209 09:09:53.119600 4786 generic.go:334] "Generic (PLEG): container finished" podID="464ee67d-1d53-4a13-a8d7-52c1435dad62" containerID="e00b4e5352a9d344c3f82710ac2388d82cb0700348ff0257e4421ea533fb16f8" exitCode=0 Dec 09 09:09:53 crc kubenswrapper[4786]: I1209 09:09:53.119676 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6rlj8" event={"ID":"464ee67d-1d53-4a13-a8d7-52c1435dad62","Type":"ContainerDied","Data":"e00b4e5352a9d344c3f82710ac2388d82cb0700348ff0257e4421ea533fb16f8"} Dec 09 09:09:53 crc kubenswrapper[4786]: I1209 09:09:53.467675 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 09 09:09:53 crc kubenswrapper[4786]: I1209 09:09:53.468769 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 09 09:09:54 crc 
kubenswrapper[4786]: I1209 09:09:54.140576 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6rlj8" event={"ID":"464ee67d-1d53-4a13-a8d7-52c1435dad62","Type":"ContainerStarted","Data":"a5ae22b1f4480aedce3f179d6c55bb79c8a5667c922d52ef33bee8567318257b"} Dec 09 09:09:54 crc kubenswrapper[4786]: I1209 09:09:54.174602 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6rlj8" podStartSLOduration=3.751989154 podStartE2EDuration="6.17456862s" podCreationTimestamp="2025-12-09 09:09:48 +0000 UTC" firstStartedPulling="2025-12-09 09:09:51.094640748 +0000 UTC m=+1556.978262184" lastFinishedPulling="2025-12-09 09:09:53.517220414 +0000 UTC m=+1559.400841650" observedRunningTime="2025-12-09 09:09:54.164341566 +0000 UTC m=+1560.047962792" watchObservedRunningTime="2025-12-09 09:09:54.17456862 +0000 UTC m=+1560.058189856" Dec 09 09:09:54 crc kubenswrapper[4786]: I1209 09:09:54.481175 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c56631a6-e349-4f07-a138-ef8f79546a0d" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.220:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 09 09:09:54 crc kubenswrapper[4786]: I1209 09:09:54.482020 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c56631a6-e349-4f07-a138-ef8f79546a0d" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.220:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 09 09:09:55 crc kubenswrapper[4786]: I1209 09:09:55.154415 4786 generic.go:334] "Generic (PLEG): container finished" podID="a600e791-e33a-41b1-b5d0-2cc262bac81d" containerID="e249edd8100860c607c11c42a039efb72d042441be7f394caea03c6f2bf32cf3" exitCode=0 Dec 09 09:09:55 crc kubenswrapper[4786]: I1209 09:09:55.154482 4786 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-4r7mx" event={"ID":"a600e791-e33a-41b1-b5d0-2cc262bac81d","Type":"ContainerDied","Data":"e249edd8100860c607c11c42a039efb72d042441be7f394caea03c6f2bf32cf3"} Dec 09 09:09:56 crc kubenswrapper[4786]: I1209 09:09:56.685833 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-4r7mx" Dec 09 09:09:56 crc kubenswrapper[4786]: I1209 09:09:56.974773 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a600e791-e33a-41b1-b5d0-2cc262bac81d-scripts\") pod \"a600e791-e33a-41b1-b5d0-2cc262bac81d\" (UID: \"a600e791-e33a-41b1-b5d0-2cc262bac81d\") " Dec 09 09:09:56 crc kubenswrapper[4786]: I1209 09:09:56.975146 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a600e791-e33a-41b1-b5d0-2cc262bac81d-combined-ca-bundle\") pod \"a600e791-e33a-41b1-b5d0-2cc262bac81d\" (UID: \"a600e791-e33a-41b1-b5d0-2cc262bac81d\") " Dec 09 09:09:56 crc kubenswrapper[4786]: I1209 09:09:56.975188 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brpdp\" (UniqueName: \"kubernetes.io/projected/a600e791-e33a-41b1-b5d0-2cc262bac81d-kube-api-access-brpdp\") pod \"a600e791-e33a-41b1-b5d0-2cc262bac81d\" (UID: \"a600e791-e33a-41b1-b5d0-2cc262bac81d\") " Dec 09 09:09:56 crc kubenswrapper[4786]: I1209 09:09:56.975232 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a600e791-e33a-41b1-b5d0-2cc262bac81d-config-data\") pod \"a600e791-e33a-41b1-b5d0-2cc262bac81d\" (UID: \"a600e791-e33a-41b1-b5d0-2cc262bac81d\") " Dec 09 09:09:56 crc kubenswrapper[4786]: I1209 09:09:56.994063 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/a600e791-e33a-41b1-b5d0-2cc262bac81d-kube-api-access-brpdp" (OuterVolumeSpecName: "kube-api-access-brpdp") pod "a600e791-e33a-41b1-b5d0-2cc262bac81d" (UID: "a600e791-e33a-41b1-b5d0-2cc262bac81d"). InnerVolumeSpecName "kube-api-access-brpdp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:09:57 crc kubenswrapper[4786]: I1209 09:09:57.008037 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a600e791-e33a-41b1-b5d0-2cc262bac81d-scripts" (OuterVolumeSpecName: "scripts") pod "a600e791-e33a-41b1-b5d0-2cc262bac81d" (UID: "a600e791-e33a-41b1-b5d0-2cc262bac81d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:09:57 crc kubenswrapper[4786]: I1209 09:09:57.028204 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a600e791-e33a-41b1-b5d0-2cc262bac81d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a600e791-e33a-41b1-b5d0-2cc262bac81d" (UID: "a600e791-e33a-41b1-b5d0-2cc262bac81d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:09:57 crc kubenswrapper[4786]: I1209 09:09:57.064323 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a600e791-e33a-41b1-b5d0-2cc262bac81d-config-data" (OuterVolumeSpecName: "config-data") pod "a600e791-e33a-41b1-b5d0-2cc262bac81d" (UID: "a600e791-e33a-41b1-b5d0-2cc262bac81d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:09:57 crc kubenswrapper[4786]: I1209 09:09:57.086059 4786 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a600e791-e33a-41b1-b5d0-2cc262bac81d-scripts\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:57 crc kubenswrapper[4786]: I1209 09:09:57.086122 4786 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a600e791-e33a-41b1-b5d0-2cc262bac81d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:57 crc kubenswrapper[4786]: I1209 09:09:57.086154 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brpdp\" (UniqueName: \"kubernetes.io/projected/a600e791-e33a-41b1-b5d0-2cc262bac81d-kube-api-access-brpdp\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:57 crc kubenswrapper[4786]: I1209 09:09:57.086167 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a600e791-e33a-41b1-b5d0-2cc262bac81d-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:57 crc kubenswrapper[4786]: I1209 09:09:57.182179 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-4r7mx" event={"ID":"a600e791-e33a-41b1-b5d0-2cc262bac81d","Type":"ContainerDied","Data":"0ca39e41b40d09c7244855a517bf1a1254f4580704ea0a1b69a43a0d6383b2c9"} Dec 09 09:09:57 crc kubenswrapper[4786]: I1209 09:09:57.182228 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ca39e41b40d09c7244855a517bf1a1254f4580704ea0a1b69a43a0d6383b2c9" Dec 09 09:09:57 crc kubenswrapper[4786]: I1209 09:09:57.182297 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-4r7mx" Dec 09 09:09:57 crc kubenswrapper[4786]: I1209 09:09:57.425888 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 09 09:09:57 crc kubenswrapper[4786]: I1209 09:09:57.426988 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c56631a6-e349-4f07-a138-ef8f79546a0d" containerName="nova-api-log" containerID="cri-o://2e9cbb9c32799cef39f90e29bcf7d4403f87a02e31244d6f4a61b8b711125a2d" gracePeriod=30 Dec 09 09:09:57 crc kubenswrapper[4786]: I1209 09:09:57.427090 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c56631a6-e349-4f07-a138-ef8f79546a0d" containerName="nova-api-api" containerID="cri-o://bf77098224efc7a826d04efa7fc97722822055e9acf9f69fb079bfbb6fd649b0" gracePeriod=30 Dec 09 09:09:57 crc kubenswrapper[4786]: I1209 09:09:57.460959 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 09:09:57 crc kubenswrapper[4786]: I1209 09:09:57.461412 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="96c51dc9-ba57-4fff-884d-05b63fb9028c" containerName="nova-scheduler-scheduler" containerID="cri-o://ddd3622ec7d0cf64be8a5c80dfc222a68daaf7a3d94cc3a392e260f186f0e14f" gracePeriod=30 Dec 09 09:09:57 crc kubenswrapper[4786]: I1209 09:09:57.476630 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 09:09:57 crc kubenswrapper[4786]: I1209 09:09:57.477078 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b1cc7239-c264-4766-a55d-51e50cea46ba" containerName="nova-metadata-log" containerID="cri-o://8b2395725eac3285a5052d01a06b59426e579c8b91565454ab7253183e6d507d" gracePeriod=30 Dec 09 09:09:57 crc kubenswrapper[4786]: I1209 09:09:57.477967 4786 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b1cc7239-c264-4766-a55d-51e50cea46ba" containerName="nova-metadata-metadata" containerID="cri-o://6d73d3d5b7e4ccde43a3bdd3f460ea7cae05ade6e54ab8588c0487b74889c300" gracePeriod=30 Dec 09 09:09:58 crc kubenswrapper[4786]: I1209 09:09:58.196675 4786 generic.go:334] "Generic (PLEG): container finished" podID="b1cc7239-c264-4766-a55d-51e50cea46ba" containerID="8b2395725eac3285a5052d01a06b59426e579c8b91565454ab7253183e6d507d" exitCode=143 Dec 09 09:09:58 crc kubenswrapper[4786]: I1209 09:09:58.196793 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b1cc7239-c264-4766-a55d-51e50cea46ba","Type":"ContainerDied","Data":"8b2395725eac3285a5052d01a06b59426e579c8b91565454ab7253183e6d507d"} Dec 09 09:09:58 crc kubenswrapper[4786]: I1209 09:09:58.199638 4786 generic.go:334] "Generic (PLEG): container finished" podID="c56631a6-e349-4f07-a138-ef8f79546a0d" containerID="2e9cbb9c32799cef39f90e29bcf7d4403f87a02e31244d6f4a61b8b711125a2d" exitCode=143 Dec 09 09:09:58 crc kubenswrapper[4786]: I1209 09:09:58.199678 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c56631a6-e349-4f07-a138-ef8f79546a0d","Type":"ContainerDied","Data":"2e9cbb9c32799cef39f90e29bcf7d4403f87a02e31244d6f4a61b8b711125a2d"} Dec 09 09:09:59 crc kubenswrapper[4786]: I1209 09:09:59.226012 4786 generic.go:334] "Generic (PLEG): container finished" podID="b1cc7239-c264-4766-a55d-51e50cea46ba" containerID="6d73d3d5b7e4ccde43a3bdd3f460ea7cae05ade6e54ab8588c0487b74889c300" exitCode=0 Dec 09 09:09:59 crc kubenswrapper[4786]: I1209 09:09:59.226108 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b1cc7239-c264-4766-a55d-51e50cea46ba","Type":"ContainerDied","Data":"6d73d3d5b7e4ccde43a3bdd3f460ea7cae05ade6e54ab8588c0487b74889c300"} Dec 09 09:09:59 crc kubenswrapper[4786]: I1209 
09:09:59.259016 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="b1cc7239-c264-4766-a55d-51e50cea46ba" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.212:8775/\": dial tcp 10.217.0.212:8775: connect: connection refused" Dec 09 09:09:59 crc kubenswrapper[4786]: I1209 09:09:59.259302 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="b1cc7239-c264-4766-a55d-51e50cea46ba" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.212:8775/\": dial tcp 10.217.0.212:8775: connect: connection refused" Dec 09 09:09:59 crc kubenswrapper[4786]: I1209 09:09:59.342333 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6rlj8" Dec 09 09:09:59 crc kubenswrapper[4786]: I1209 09:09:59.342403 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6rlj8" Dec 09 09:09:59 crc kubenswrapper[4786]: I1209 09:09:59.402143 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6rlj8" Dec 09 09:09:59 crc kubenswrapper[4786]: I1209 09:09:59.684961 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 09:09:59 crc kubenswrapper[4786]: I1209 09:09:59.872577 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1cc7239-c264-4766-a55d-51e50cea46ba-config-data\") pod \"b1cc7239-c264-4766-a55d-51e50cea46ba\" (UID: \"b1cc7239-c264-4766-a55d-51e50cea46ba\") " Dec 09 09:09:59 crc kubenswrapper[4786]: I1209 09:09:59.872720 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dd88\" (UniqueName: \"kubernetes.io/projected/b1cc7239-c264-4766-a55d-51e50cea46ba-kube-api-access-5dd88\") pod \"b1cc7239-c264-4766-a55d-51e50cea46ba\" (UID: \"b1cc7239-c264-4766-a55d-51e50cea46ba\") " Dec 09 09:09:59 crc kubenswrapper[4786]: I1209 09:09:59.872812 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1cc7239-c264-4766-a55d-51e50cea46ba-nova-metadata-tls-certs\") pod \"b1cc7239-c264-4766-a55d-51e50cea46ba\" (UID: \"b1cc7239-c264-4766-a55d-51e50cea46ba\") " Dec 09 09:09:59 crc kubenswrapper[4786]: I1209 09:09:59.872886 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1cc7239-c264-4766-a55d-51e50cea46ba-logs\") pod \"b1cc7239-c264-4766-a55d-51e50cea46ba\" (UID: \"b1cc7239-c264-4766-a55d-51e50cea46ba\") " Dec 09 09:09:59 crc kubenswrapper[4786]: I1209 09:09:59.873100 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1cc7239-c264-4766-a55d-51e50cea46ba-combined-ca-bundle\") pod \"b1cc7239-c264-4766-a55d-51e50cea46ba\" (UID: \"b1cc7239-c264-4766-a55d-51e50cea46ba\") " Dec 09 09:09:59 crc kubenswrapper[4786]: I1209 09:09:59.881771 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/b1cc7239-c264-4766-a55d-51e50cea46ba-logs" (OuterVolumeSpecName: "logs") pod "b1cc7239-c264-4766-a55d-51e50cea46ba" (UID: "b1cc7239-c264-4766-a55d-51e50cea46ba"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:09:59 crc kubenswrapper[4786]: I1209 09:09:59.892053 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1cc7239-c264-4766-a55d-51e50cea46ba-kube-api-access-5dd88" (OuterVolumeSpecName: "kube-api-access-5dd88") pod "b1cc7239-c264-4766-a55d-51e50cea46ba" (UID: "b1cc7239-c264-4766-a55d-51e50cea46ba"). InnerVolumeSpecName "kube-api-access-5dd88". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:09:59 crc kubenswrapper[4786]: I1209 09:09:59.899601 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 09 09:09:59 crc kubenswrapper[4786]: I1209 09:09:59.923864 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1cc7239-c264-4766-a55d-51e50cea46ba-config-data" (OuterVolumeSpecName: "config-data") pod "b1cc7239-c264-4766-a55d-51e50cea46ba" (UID: "b1cc7239-c264-4766-a55d-51e50cea46ba"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:09:59 crc kubenswrapper[4786]: I1209 09:09:59.936907 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1cc7239-c264-4766-a55d-51e50cea46ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b1cc7239-c264-4766-a55d-51e50cea46ba" (UID: "b1cc7239-c264-4766-a55d-51e50cea46ba"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:09:59 crc kubenswrapper[4786]: I1209 09:09:59.966319 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1cc7239-c264-4766-a55d-51e50cea46ba-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "b1cc7239-c264-4766-a55d-51e50cea46ba" (UID: "b1cc7239-c264-4766-a55d-51e50cea46ba"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:09:59 crc kubenswrapper[4786]: I1209 09:09:59.976397 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b1cc7239-c264-4766-a55d-51e50cea46ba-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:59 crc kubenswrapper[4786]: I1209 09:09:59.976452 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dd88\" (UniqueName: \"kubernetes.io/projected/b1cc7239-c264-4766-a55d-51e50cea46ba-kube-api-access-5dd88\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:59 crc kubenswrapper[4786]: I1209 09:09:59.976470 4786 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b1cc7239-c264-4766-a55d-51e50cea46ba-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:59 crc kubenswrapper[4786]: I1209 09:09:59.976483 4786 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b1cc7239-c264-4766-a55d-51e50cea46ba-logs\") on node \"crc\" DevicePath \"\"" Dec 09 09:09:59 crc kubenswrapper[4786]: I1209 09:09:59.976498 4786 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1cc7239-c264-4766-a55d-51e50cea46ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.080431 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c56631a6-e349-4f07-a138-ef8f79546a0d-public-tls-certs\") pod \"c56631a6-e349-4f07-a138-ef8f79546a0d\" (UID: \"c56631a6-e349-4f07-a138-ef8f79546a0d\") " Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.080528 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c56631a6-e349-4f07-a138-ef8f79546a0d-logs\") pod \"c56631a6-e349-4f07-a138-ef8f79546a0d\" (UID: \"c56631a6-e349-4f07-a138-ef8f79546a0d\") " Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.080738 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5d2t\" (UniqueName: \"kubernetes.io/projected/c56631a6-e349-4f07-a138-ef8f79546a0d-kube-api-access-z5d2t\") pod \"c56631a6-e349-4f07-a138-ef8f79546a0d\" (UID: \"c56631a6-e349-4f07-a138-ef8f79546a0d\") " Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.080771 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c56631a6-e349-4f07-a138-ef8f79546a0d-combined-ca-bundle\") pod \"c56631a6-e349-4f07-a138-ef8f79546a0d\" (UID: \"c56631a6-e349-4f07-a138-ef8f79546a0d\") " Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.080808 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c56631a6-e349-4f07-a138-ef8f79546a0d-internal-tls-certs\") pod \"c56631a6-e349-4f07-a138-ef8f79546a0d\" (UID: \"c56631a6-e349-4f07-a138-ef8f79546a0d\") " Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.080913 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c56631a6-e349-4f07-a138-ef8f79546a0d-config-data\") pod \"c56631a6-e349-4f07-a138-ef8f79546a0d\" (UID: \"c56631a6-e349-4f07-a138-ef8f79546a0d\") " Dec 09 09:10:00 
crc kubenswrapper[4786]: I1209 09:10:00.081877 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c56631a6-e349-4f07-a138-ef8f79546a0d-logs" (OuterVolumeSpecName: "logs") pod "c56631a6-e349-4f07-a138-ef8f79546a0d" (UID: "c56631a6-e349-4f07-a138-ef8f79546a0d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.092678 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c56631a6-e349-4f07-a138-ef8f79546a0d-kube-api-access-z5d2t" (OuterVolumeSpecName: "kube-api-access-z5d2t") pod "c56631a6-e349-4f07-a138-ef8f79546a0d" (UID: "c56631a6-e349-4f07-a138-ef8f79546a0d"). InnerVolumeSpecName "kube-api-access-z5d2t". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.182731 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c56631a6-e349-4f07-a138-ef8f79546a0d-config-data" (OuterVolumeSpecName: "config-data") pod "c56631a6-e349-4f07-a138-ef8f79546a0d" (UID: "c56631a6-e349-4f07-a138-ef8f79546a0d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.186583 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c56631a6-e349-4f07-a138-ef8f79546a0d-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.186623 4786 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c56631a6-e349-4f07-a138-ef8f79546a0d-logs\") on node \"crc\" DevicePath \"\"" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.186649 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5d2t\" (UniqueName: \"kubernetes.io/projected/c56631a6-e349-4f07-a138-ef8f79546a0d-kube-api-access-z5d2t\") on node \"crc\" DevicePath \"\"" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.238657 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c56631a6-e349-4f07-a138-ef8f79546a0d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c56631a6-e349-4f07-a138-ef8f79546a0d" (UID: "c56631a6-e349-4f07-a138-ef8f79546a0d"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.265927 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c56631a6-e349-4f07-a138-ef8f79546a0d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c56631a6-e349-4f07-a138-ef8f79546a0d" (UID: "c56631a6-e349-4f07-a138-ef8f79546a0d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.277314 4786 generic.go:334] "Generic (PLEG): container finished" podID="c56631a6-e349-4f07-a138-ef8f79546a0d" containerID="bf77098224efc7a826d04efa7fc97722822055e9acf9f69fb079bfbb6fd649b0" exitCode=0 Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.277438 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c56631a6-e349-4f07-a138-ef8f79546a0d","Type":"ContainerDied","Data":"bf77098224efc7a826d04efa7fc97722822055e9acf9f69fb079bfbb6fd649b0"} Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.277484 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c56631a6-e349-4f07-a138-ef8f79546a0d","Type":"ContainerDied","Data":"39757961d8773c7d99ba883102cccbebad220dd78edc033737ac69561b185af3"} Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.277507 4786 scope.go:117] "RemoveContainer" containerID="bf77098224efc7a826d04efa7fc97722822055e9acf9f69fb079bfbb6fd649b0" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.277696 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.290851 4786 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c56631a6-e349-4f07-a138-ef8f79546a0d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.290896 4786 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c56631a6-e349-4f07-a138-ef8f79546a0d-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.295676 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c56631a6-e349-4f07-a138-ef8f79546a0d-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c56631a6-e349-4f07-a138-ef8f79546a0d" (UID: "c56631a6-e349-4f07-a138-ef8f79546a0d"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.295834 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b1cc7239-c264-4766-a55d-51e50cea46ba","Type":"ContainerDied","Data":"78163936a276fe2d38c40552e53bc09fe83a821851b157d2ca90cd0242d6c45c"} Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.295918 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.394173 4786 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c56631a6-e349-4f07-a138-ef8f79546a0d-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.401234 4786 scope.go:117] "RemoveContainer" containerID="2e9cbb9c32799cef39f90e29bcf7d4403f87a02e31244d6f4a61b8b711125a2d" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.401510 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6rlj8" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.437744 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.454141 4786 scope.go:117] "RemoveContainer" containerID="bf77098224efc7a826d04efa7fc97722822055e9acf9f69fb079bfbb6fd649b0" Dec 09 09:10:00 crc kubenswrapper[4786]: E1209 09:10:00.455155 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf77098224efc7a826d04efa7fc97722822055e9acf9f69fb079bfbb6fd649b0\": container with ID starting with bf77098224efc7a826d04efa7fc97722822055e9acf9f69fb079bfbb6fd649b0 not found: ID does not exist" containerID="bf77098224efc7a826d04efa7fc97722822055e9acf9f69fb079bfbb6fd649b0" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.455209 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf77098224efc7a826d04efa7fc97722822055e9acf9f69fb079bfbb6fd649b0"} err="failed to get container status \"bf77098224efc7a826d04efa7fc97722822055e9acf9f69fb079bfbb6fd649b0\": rpc error: code = NotFound desc = could not find container \"bf77098224efc7a826d04efa7fc97722822055e9acf9f69fb079bfbb6fd649b0\": container with ID starting with 
bf77098224efc7a826d04efa7fc97722822055e9acf9f69fb079bfbb6fd649b0 not found: ID does not exist" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.455235 4786 scope.go:117] "RemoveContainer" containerID="2e9cbb9c32799cef39f90e29bcf7d4403f87a02e31244d6f4a61b8b711125a2d" Dec 09 09:10:00 crc kubenswrapper[4786]: E1209 09:10:00.456612 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e9cbb9c32799cef39f90e29bcf7d4403f87a02e31244d6f4a61b8b711125a2d\": container with ID starting with 2e9cbb9c32799cef39f90e29bcf7d4403f87a02e31244d6f4a61b8b711125a2d not found: ID does not exist" containerID="2e9cbb9c32799cef39f90e29bcf7d4403f87a02e31244d6f4a61b8b711125a2d" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.456689 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e9cbb9c32799cef39f90e29bcf7d4403f87a02e31244d6f4a61b8b711125a2d"} err="failed to get container status \"2e9cbb9c32799cef39f90e29bcf7d4403f87a02e31244d6f4a61b8b711125a2d\": rpc error: code = NotFound desc = could not find container \"2e9cbb9c32799cef39f90e29bcf7d4403f87a02e31244d6f4a61b8b711125a2d\": container with ID starting with 2e9cbb9c32799cef39f90e29bcf7d4403f87a02e31244d6f4a61b8b711125a2d not found: ID does not exist" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.456731 4786 scope.go:117] "RemoveContainer" containerID="6d73d3d5b7e4ccde43a3bdd3f460ea7cae05ade6e54ab8588c0487b74889c300" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.468821 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.489621 4786 scope.go:117] "RemoveContainer" containerID="8b2395725eac3285a5052d01a06b59426e579c8b91565454ab7253183e6d507d" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.496281 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 09 
09:10:00 crc kubenswrapper[4786]: E1209 09:10:00.497056 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a600e791-e33a-41b1-b5d0-2cc262bac81d" containerName="nova-manage" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.497099 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="a600e791-e33a-41b1-b5d0-2cc262bac81d" containerName="nova-manage" Dec 09 09:10:00 crc kubenswrapper[4786]: E1209 09:10:00.497123 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c56631a6-e349-4f07-a138-ef8f79546a0d" containerName="nova-api-api" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.497131 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="c56631a6-e349-4f07-a138-ef8f79546a0d" containerName="nova-api-api" Dec 09 09:10:00 crc kubenswrapper[4786]: E1209 09:10:00.497169 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1cc7239-c264-4766-a55d-51e50cea46ba" containerName="nova-metadata-metadata" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.497179 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1cc7239-c264-4766-a55d-51e50cea46ba" containerName="nova-metadata-metadata" Dec 09 09:10:00 crc kubenswrapper[4786]: E1209 09:10:00.497216 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c56631a6-e349-4f07-a138-ef8f79546a0d" containerName="nova-api-log" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.497226 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="c56631a6-e349-4f07-a138-ef8f79546a0d" containerName="nova-api-log" Dec 09 09:10:00 crc kubenswrapper[4786]: E1209 09:10:00.497281 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1cc7239-c264-4766-a55d-51e50cea46ba" containerName="nova-metadata-log" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.497290 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1cc7239-c264-4766-a55d-51e50cea46ba" containerName="nova-metadata-log" Dec 09 09:10:00 crc 
kubenswrapper[4786]: I1209 09:10:00.497566 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1cc7239-c264-4766-a55d-51e50cea46ba" containerName="nova-metadata-metadata" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.497595 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="a600e791-e33a-41b1-b5d0-2cc262bac81d" containerName="nova-manage" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.497622 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="c56631a6-e349-4f07-a138-ef8f79546a0d" containerName="nova-api-log" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.497635 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="c56631a6-e349-4f07-a138-ef8f79546a0d" containerName="nova-api-api" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.497644 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1cc7239-c264-4766-a55d-51e50cea46ba" containerName="nova-metadata-log" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.499382 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.503660 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.503908 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.536203 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.552888 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6rlj8"] Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.606831 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a5c44e4-7244-46f8-983b-de6cd923bd74-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8a5c44e4-7244-46f8-983b-de6cd923bd74\") " pod="openstack/nova-metadata-0" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.609204 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a5c44e4-7244-46f8-983b-de6cd923bd74-config-data\") pod \"nova-metadata-0\" (UID: \"8a5c44e4-7244-46f8-983b-de6cd923bd74\") " pod="openstack/nova-metadata-0" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.609258 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a5c44e4-7244-46f8-983b-de6cd923bd74-logs\") pod \"nova-metadata-0\" (UID: \"8a5c44e4-7244-46f8-983b-de6cd923bd74\") " pod="openstack/nova-metadata-0" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.610038 4786 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89h68\" (UniqueName: \"kubernetes.io/projected/8a5c44e4-7244-46f8-983b-de6cd923bd74-kube-api-access-89h68\") pod \"nova-metadata-0\" (UID: \"8a5c44e4-7244-46f8-983b-de6cd923bd74\") " pod="openstack/nova-metadata-0" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.610146 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a5c44e4-7244-46f8-983b-de6cd923bd74-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8a5c44e4-7244-46f8-983b-de6cd923bd74\") " pod="openstack/nova-metadata-0" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.636089 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.646825 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.661901 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.664091 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.670745 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.671212 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.671360 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.692989 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.715362 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89h68\" (UniqueName: \"kubernetes.io/projected/8a5c44e4-7244-46f8-983b-de6cd923bd74-kube-api-access-89h68\") pod \"nova-metadata-0\" (UID: \"8a5c44e4-7244-46f8-983b-de6cd923bd74\") " pod="openstack/nova-metadata-0" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.715425 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a5c44e4-7244-46f8-983b-de6cd923bd74-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8a5c44e4-7244-46f8-983b-de6cd923bd74\") " pod="openstack/nova-metadata-0" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.715562 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a5c44e4-7244-46f8-983b-de6cd923bd74-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8a5c44e4-7244-46f8-983b-de6cd923bd74\") " pod="openstack/nova-metadata-0" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.715632 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8a5c44e4-7244-46f8-983b-de6cd923bd74-config-data\") pod \"nova-metadata-0\" (UID: \"8a5c44e4-7244-46f8-983b-de6cd923bd74\") " pod="openstack/nova-metadata-0" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.715664 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a5c44e4-7244-46f8-983b-de6cd923bd74-logs\") pod \"nova-metadata-0\" (UID: \"8a5c44e4-7244-46f8-983b-de6cd923bd74\") " pod="openstack/nova-metadata-0" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.716383 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8a5c44e4-7244-46f8-983b-de6cd923bd74-logs\") pod \"nova-metadata-0\" (UID: \"8a5c44e4-7244-46f8-983b-de6cd923bd74\") " pod="openstack/nova-metadata-0" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.723175 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a5c44e4-7244-46f8-983b-de6cd923bd74-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"8a5c44e4-7244-46f8-983b-de6cd923bd74\") " pod="openstack/nova-metadata-0" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.724201 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/8a5c44e4-7244-46f8-983b-de6cd923bd74-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"8a5c44e4-7244-46f8-983b-de6cd923bd74\") " pod="openstack/nova-metadata-0" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.734547 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89h68\" (UniqueName: \"kubernetes.io/projected/8a5c44e4-7244-46f8-983b-de6cd923bd74-kube-api-access-89h68\") pod \"nova-metadata-0\" (UID: \"8a5c44e4-7244-46f8-983b-de6cd923bd74\") " pod="openstack/nova-metadata-0" Dec 09 09:10:00 crc kubenswrapper[4786]: 
I1209 09:10:00.743370 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a5c44e4-7244-46f8-983b-de6cd923bd74-config-data\") pod \"nova-metadata-0\" (UID: \"8a5c44e4-7244-46f8-983b-de6cd923bd74\") " pod="openstack/nova-metadata-0" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.817514 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/052f7fa7-4f28-421a-a1a4-d262f5d8c2de-config-data\") pod \"nova-api-0\" (UID: \"052f7fa7-4f28-421a-a1a4-d262f5d8c2de\") " pod="openstack/nova-api-0" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.817964 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q768\" (UniqueName: \"kubernetes.io/projected/052f7fa7-4f28-421a-a1a4-d262f5d8c2de-kube-api-access-2q768\") pod \"nova-api-0\" (UID: \"052f7fa7-4f28-421a-a1a4-d262f5d8c2de\") " pod="openstack/nova-api-0" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.817986 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/052f7fa7-4f28-421a-a1a4-d262f5d8c2de-logs\") pod \"nova-api-0\" (UID: \"052f7fa7-4f28-421a-a1a4-d262f5d8c2de\") " pod="openstack/nova-api-0" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.818024 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/052f7fa7-4f28-421a-a1a4-d262f5d8c2de-internal-tls-certs\") pod \"nova-api-0\" (UID: \"052f7fa7-4f28-421a-a1a4-d262f5d8c2de\") " pod="openstack/nova-api-0" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.818091 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/052f7fa7-4f28-421a-a1a4-d262f5d8c2de-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"052f7fa7-4f28-421a-a1a4-d262f5d8c2de\") " pod="openstack/nova-api-0" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.818134 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/052f7fa7-4f28-421a-a1a4-d262f5d8c2de-public-tls-certs\") pod \"nova-api-0\" (UID: \"052f7fa7-4f28-421a-a1a4-d262f5d8c2de\") " pod="openstack/nova-api-0" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.832130 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.921273 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/052f7fa7-4f28-421a-a1a4-d262f5d8c2de-config-data\") pod \"nova-api-0\" (UID: \"052f7fa7-4f28-421a-a1a4-d262f5d8c2de\") " pod="openstack/nova-api-0" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.921434 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2q768\" (UniqueName: \"kubernetes.io/projected/052f7fa7-4f28-421a-a1a4-d262f5d8c2de-kube-api-access-2q768\") pod \"nova-api-0\" (UID: \"052f7fa7-4f28-421a-a1a4-d262f5d8c2de\") " pod="openstack/nova-api-0" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.921478 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/052f7fa7-4f28-421a-a1a4-d262f5d8c2de-logs\") pod \"nova-api-0\" (UID: \"052f7fa7-4f28-421a-a1a4-d262f5d8c2de\") " pod="openstack/nova-api-0" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.921525 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/052f7fa7-4f28-421a-a1a4-d262f5d8c2de-internal-tls-certs\") pod \"nova-api-0\" (UID: \"052f7fa7-4f28-421a-a1a4-d262f5d8c2de\") " pod="openstack/nova-api-0" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.921581 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/052f7fa7-4f28-421a-a1a4-d262f5d8c2de-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"052f7fa7-4f28-421a-a1a4-d262f5d8c2de\") " pod="openstack/nova-api-0" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.921623 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/052f7fa7-4f28-421a-a1a4-d262f5d8c2de-public-tls-certs\") pod \"nova-api-0\" (UID: \"052f7fa7-4f28-421a-a1a4-d262f5d8c2de\") " pod="openstack/nova-api-0" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.923320 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/052f7fa7-4f28-421a-a1a4-d262f5d8c2de-logs\") pod \"nova-api-0\" (UID: \"052f7fa7-4f28-421a-a1a4-d262f5d8c2de\") " pod="openstack/nova-api-0" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.927363 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/052f7fa7-4f28-421a-a1a4-d262f5d8c2de-public-tls-certs\") pod \"nova-api-0\" (UID: \"052f7fa7-4f28-421a-a1a4-d262f5d8c2de\") " pod="openstack/nova-api-0" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.927985 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/052f7fa7-4f28-421a-a1a4-d262f5d8c2de-config-data\") pod \"nova-api-0\" (UID: \"052f7fa7-4f28-421a-a1a4-d262f5d8c2de\") " pod="openstack/nova-api-0" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.928007 4786 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/052f7fa7-4f28-421a-a1a4-d262f5d8c2de-internal-tls-certs\") pod \"nova-api-0\" (UID: \"052f7fa7-4f28-421a-a1a4-d262f5d8c2de\") " pod="openstack/nova-api-0" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.928572 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/052f7fa7-4f28-421a-a1a4-d262f5d8c2de-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"052f7fa7-4f28-421a-a1a4-d262f5d8c2de\") " pod="openstack/nova-api-0" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.941656 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2q768\" (UniqueName: \"kubernetes.io/projected/052f7fa7-4f28-421a-a1a4-d262f5d8c2de-kube-api-access-2q768\") pod \"nova-api-0\" (UID: \"052f7fa7-4f28-421a-a1a4-d262f5d8c2de\") " pod="openstack/nova-api-0" Dec 09 09:10:00 crc kubenswrapper[4786]: I1209 09:10:00.985088 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 09 09:10:01 crc kubenswrapper[4786]: I1209 09:10:01.200423 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1cc7239-c264-4766-a55d-51e50cea46ba" path="/var/lib/kubelet/pods/b1cc7239-c264-4766-a55d-51e50cea46ba/volumes" Dec 09 09:10:01 crc kubenswrapper[4786]: I1209 09:10:01.201759 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c56631a6-e349-4f07-a138-ef8f79546a0d" path="/var/lib/kubelet/pods/c56631a6-e349-4f07-a138-ef8f79546a0d/volumes" Dec 09 09:10:01 crc kubenswrapper[4786]: I1209 09:10:01.368399 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 09 09:10:01 crc kubenswrapper[4786]: I1209 09:10:01.562892 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 09 09:10:01 crc kubenswrapper[4786]: W1209 09:10:01.577780 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod052f7fa7_4f28_421a_a1a4_d262f5d8c2de.slice/crio-6183b3d2e40aca6c30f363d807d5bac53414d5b68eafe7b457484d949a6a02c0 WatchSource:0}: Error finding container 6183b3d2e40aca6c30f363d807d5bac53414d5b68eafe7b457484d949a6a02c0: Status 404 returned error can't find the container with id 6183b3d2e40aca6c30f363d807d5bac53414d5b68eafe7b457484d949a6a02c0 Dec 09 09:10:02 crc kubenswrapper[4786]: E1209 09:10:02.227860 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ddd3622ec7d0cf64be8a5c80dfc222a68daaf7a3d94cc3a392e260f186f0e14f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 09 09:10:02 crc kubenswrapper[4786]: E1209 09:10:02.229905 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is 
stopping, stdout: , stderr: , exit code -1" containerID="ddd3622ec7d0cf64be8a5c80dfc222a68daaf7a3d94cc3a392e260f186f0e14f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 09 09:10:02 crc kubenswrapper[4786]: E1209 09:10:02.231217 4786 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ddd3622ec7d0cf64be8a5c80dfc222a68daaf7a3d94cc3a392e260f186f0e14f" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 09 09:10:02 crc kubenswrapper[4786]: E1209 09:10:02.231313 4786 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="96c51dc9-ba57-4fff-884d-05b63fb9028c" containerName="nova-scheduler-scheduler" Dec 09 09:10:02 crc kubenswrapper[4786]: I1209 09:10:02.322571 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8a5c44e4-7244-46f8-983b-de6cd923bd74","Type":"ContainerStarted","Data":"70af70d4d129b8ffb751f030b8e1c2a8599e838aad47ad57757c890cfcf2421b"} Dec 09 09:10:02 crc kubenswrapper[4786]: I1209 09:10:02.322629 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8a5c44e4-7244-46f8-983b-de6cd923bd74","Type":"ContainerStarted","Data":"e5845462c6bae226fb3d52ba5716aeb68f6f392932a305e6a67f05bb763c9cd3"} Dec 09 09:10:02 crc kubenswrapper[4786]: I1209 09:10:02.322644 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"8a5c44e4-7244-46f8-983b-de6cd923bd74","Type":"ContainerStarted","Data":"04806b8e81b7b1b5144949a2df1af27cfca7dfe0710519757eb42317bec95858"} Dec 09 09:10:02 crc kubenswrapper[4786]: I1209 09:10:02.324967 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"052f7fa7-4f28-421a-a1a4-d262f5d8c2de","Type":"ContainerStarted","Data":"c06aa52feb872696a950caa217afe715f02666c39a677355b9d90709c7284f10"} Dec 09 09:10:02 crc kubenswrapper[4786]: I1209 09:10:02.325014 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"052f7fa7-4f28-421a-a1a4-d262f5d8c2de","Type":"ContainerStarted","Data":"79d89c2d21727e21dde88445e87751475f9b9fb685b7b42772afe3e1167afe3f"} Dec 09 09:10:02 crc kubenswrapper[4786]: I1209 09:10:02.325027 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"052f7fa7-4f28-421a-a1a4-d262f5d8c2de","Type":"ContainerStarted","Data":"6183b3d2e40aca6c30f363d807d5bac53414d5b68eafe7b457484d949a6a02c0"} Dec 09 09:10:02 crc kubenswrapper[4786]: I1209 09:10:02.325093 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6rlj8" podUID="464ee67d-1d53-4a13-a8d7-52c1435dad62" containerName="registry-server" containerID="cri-o://a5ae22b1f4480aedce3f179d6c55bb79c8a5667c922d52ef33bee8567318257b" gracePeriod=2 Dec 09 09:10:02 crc kubenswrapper[4786]: I1209 09:10:02.357499 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.357424586 podStartE2EDuration="2.357424586s" podCreationTimestamp="2025-12-09 09:10:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:10:02.350104654 +0000 UTC m=+1568.233725880" watchObservedRunningTime="2025-12-09 09:10:02.357424586 +0000 UTC m=+1568.241045812" Dec 09 09:10:02 crc kubenswrapper[4786]: I1209 09:10:02.408502 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.408478225 podStartE2EDuration="2.408478225s" podCreationTimestamp="2025-12-09 09:10:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:10:02.396511227 +0000 UTC m=+1568.280132473" watchObservedRunningTime="2025-12-09 09:10:02.408478225 +0000 UTC m=+1568.292099451" Dec 09 09:10:02 crc kubenswrapper[4786]: I1209 09:10:02.929379 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 09 09:10:02 crc kubenswrapper[4786]: I1209 09:10:02.936223 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6rlj8" Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.079197 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/464ee67d-1d53-4a13-a8d7-52c1435dad62-utilities\") pod \"464ee67d-1d53-4a13-a8d7-52c1435dad62\" (UID: \"464ee67d-1d53-4a13-a8d7-52c1435dad62\") " Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.079373 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/464ee67d-1d53-4a13-a8d7-52c1435dad62-catalog-content\") pod \"464ee67d-1d53-4a13-a8d7-52c1435dad62\" (UID: \"464ee67d-1d53-4a13-a8d7-52c1435dad62\") " Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.079462 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96c51dc9-ba57-4fff-884d-05b63fb9028c-config-data\") pod \"96c51dc9-ba57-4fff-884d-05b63fb9028c\" (UID: \"96c51dc9-ba57-4fff-884d-05b63fb9028c\") " Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.079555 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wss4x\" (UniqueName: \"kubernetes.io/projected/464ee67d-1d53-4a13-a8d7-52c1435dad62-kube-api-access-wss4x\") pod \"464ee67d-1d53-4a13-a8d7-52c1435dad62\" (UID: 
\"464ee67d-1d53-4a13-a8d7-52c1435dad62\") " Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.079587 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c51dc9-ba57-4fff-884d-05b63fb9028c-combined-ca-bundle\") pod \"96c51dc9-ba57-4fff-884d-05b63fb9028c\" (UID: \"96c51dc9-ba57-4fff-884d-05b63fb9028c\") " Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.079719 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpldc\" (UniqueName: \"kubernetes.io/projected/96c51dc9-ba57-4fff-884d-05b63fb9028c-kube-api-access-lpldc\") pod \"96c51dc9-ba57-4fff-884d-05b63fb9028c\" (UID: \"96c51dc9-ba57-4fff-884d-05b63fb9028c\") " Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.080158 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/464ee67d-1d53-4a13-a8d7-52c1435dad62-utilities" (OuterVolumeSpecName: "utilities") pod "464ee67d-1d53-4a13-a8d7-52c1435dad62" (UID: "464ee67d-1d53-4a13-a8d7-52c1435dad62"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.080369 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/464ee67d-1d53-4a13-a8d7-52c1435dad62-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.085289 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/464ee67d-1d53-4a13-a8d7-52c1435dad62-kube-api-access-wss4x" (OuterVolumeSpecName: "kube-api-access-wss4x") pod "464ee67d-1d53-4a13-a8d7-52c1435dad62" (UID: "464ee67d-1d53-4a13-a8d7-52c1435dad62"). InnerVolumeSpecName "kube-api-access-wss4x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.085425 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96c51dc9-ba57-4fff-884d-05b63fb9028c-kube-api-access-lpldc" (OuterVolumeSpecName: "kube-api-access-lpldc") pod "96c51dc9-ba57-4fff-884d-05b63fb9028c" (UID: "96c51dc9-ba57-4fff-884d-05b63fb9028c"). InnerVolumeSpecName "kube-api-access-lpldc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.112364 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96c51dc9-ba57-4fff-884d-05b63fb9028c-config-data" (OuterVolumeSpecName: "config-data") pod "96c51dc9-ba57-4fff-884d-05b63fb9028c" (UID: "96c51dc9-ba57-4fff-884d-05b63fb9028c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.124400 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96c51dc9-ba57-4fff-884d-05b63fb9028c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "96c51dc9-ba57-4fff-884d-05b63fb9028c" (UID: "96c51dc9-ba57-4fff-884d-05b63fb9028c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.133919 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/464ee67d-1d53-4a13-a8d7-52c1435dad62-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "464ee67d-1d53-4a13-a8d7-52c1435dad62" (UID: "464ee67d-1d53-4a13-a8d7-52c1435dad62"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.182277 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/464ee67d-1d53-4a13-a8d7-52c1435dad62-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.182314 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96c51dc9-ba57-4fff-884d-05b63fb9028c-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.182328 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wss4x\" (UniqueName: \"kubernetes.io/projected/464ee67d-1d53-4a13-a8d7-52c1435dad62-kube-api-access-wss4x\") on node \"crc\" DevicePath \"\"" Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.182338 4786 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c51dc9-ba57-4fff-884d-05b63fb9028c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.182348 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpldc\" (UniqueName: \"kubernetes.io/projected/96c51dc9-ba57-4fff-884d-05b63fb9028c-kube-api-access-lpldc\") on node \"crc\" DevicePath \"\"" Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.366110 4786 generic.go:334] "Generic (PLEG): container finished" podID="96c51dc9-ba57-4fff-884d-05b63fb9028c" containerID="ddd3622ec7d0cf64be8a5c80dfc222a68daaf7a3d94cc3a392e260f186f0e14f" exitCode=0 Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.366230 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"96c51dc9-ba57-4fff-884d-05b63fb9028c","Type":"ContainerDied","Data":"ddd3622ec7d0cf64be8a5c80dfc222a68daaf7a3d94cc3a392e260f186f0e14f"} Dec 09 09:10:03 crc 
kubenswrapper[4786]: I1209 09:10:03.366294 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"96c51dc9-ba57-4fff-884d-05b63fb9028c","Type":"ContainerDied","Data":"aaf7d6119e7e124c7fcb4b7c046e933137fe963f0a8f02f2f8b4479995408d10"} Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.366319 4786 scope.go:117] "RemoveContainer" containerID="ddd3622ec7d0cf64be8a5c80dfc222a68daaf7a3d94cc3a392e260f186f0e14f" Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.366652 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.374714 4786 generic.go:334] "Generic (PLEG): container finished" podID="464ee67d-1d53-4a13-a8d7-52c1435dad62" containerID="a5ae22b1f4480aedce3f179d6c55bb79c8a5667c922d52ef33bee8567318257b" exitCode=0 Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.374975 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6rlj8" event={"ID":"464ee67d-1d53-4a13-a8d7-52c1435dad62","Type":"ContainerDied","Data":"a5ae22b1f4480aedce3f179d6c55bb79c8a5667c922d52ef33bee8567318257b"} Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.375048 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6rlj8" event={"ID":"464ee67d-1d53-4a13-a8d7-52c1435dad62","Type":"ContainerDied","Data":"43e353d67b6ce1e0da7870e72440434ca6210212daa258ecc3519c2ea4f7e5be"} Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.375185 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6rlj8" Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.401509 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.419913 4786 scope.go:117] "RemoveContainer" containerID="ddd3622ec7d0cf64be8a5c80dfc222a68daaf7a3d94cc3a392e260f186f0e14f" Dec 09 09:10:03 crc kubenswrapper[4786]: E1209 09:10:03.421076 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddd3622ec7d0cf64be8a5c80dfc222a68daaf7a3d94cc3a392e260f186f0e14f\": container with ID starting with ddd3622ec7d0cf64be8a5c80dfc222a68daaf7a3d94cc3a392e260f186f0e14f not found: ID does not exist" containerID="ddd3622ec7d0cf64be8a5c80dfc222a68daaf7a3d94cc3a392e260f186f0e14f" Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.421124 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddd3622ec7d0cf64be8a5c80dfc222a68daaf7a3d94cc3a392e260f186f0e14f"} err="failed to get container status \"ddd3622ec7d0cf64be8a5c80dfc222a68daaf7a3d94cc3a392e260f186f0e14f\": rpc error: code = NotFound desc = could not find container \"ddd3622ec7d0cf64be8a5c80dfc222a68daaf7a3d94cc3a392e260f186f0e14f\": container with ID starting with ddd3622ec7d0cf64be8a5c80dfc222a68daaf7a3d94cc3a392e260f186f0e14f not found: ID does not exist" Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.421156 4786 scope.go:117] "RemoveContainer" containerID="a5ae22b1f4480aedce3f179d6c55bb79c8a5667c922d52ef33bee8567318257b" Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.432067 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.443156 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6rlj8"] Dec 09 09:10:03 crc 
kubenswrapper[4786]: I1209 09:10:03.455960 4786 scope.go:117] "RemoveContainer" containerID="e00b4e5352a9d344c3f82710ac2388d82cb0700348ff0257e4421ea533fb16f8" Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.467095 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 09:10:03 crc kubenswrapper[4786]: E1209 09:10:03.467663 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96c51dc9-ba57-4fff-884d-05b63fb9028c" containerName="nova-scheduler-scheduler" Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.467684 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="96c51dc9-ba57-4fff-884d-05b63fb9028c" containerName="nova-scheduler-scheduler" Dec 09 09:10:03 crc kubenswrapper[4786]: E1209 09:10:03.467696 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="464ee67d-1d53-4a13-a8d7-52c1435dad62" containerName="registry-server" Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.467703 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="464ee67d-1d53-4a13-a8d7-52c1435dad62" containerName="registry-server" Dec 09 09:10:03 crc kubenswrapper[4786]: E1209 09:10:03.467722 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="464ee67d-1d53-4a13-a8d7-52c1435dad62" containerName="extract-content" Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.467729 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="464ee67d-1d53-4a13-a8d7-52c1435dad62" containerName="extract-content" Dec 09 09:10:03 crc kubenswrapper[4786]: E1209 09:10:03.467746 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="464ee67d-1d53-4a13-a8d7-52c1435dad62" containerName="extract-utilities" Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.467753 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="464ee67d-1d53-4a13-a8d7-52c1435dad62" containerName="extract-utilities" Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.467974 4786 
memory_manager.go:354] "RemoveStaleState removing state" podUID="464ee67d-1d53-4a13-a8d7-52c1435dad62" containerName="registry-server" Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.467988 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="96c51dc9-ba57-4fff-884d-05b63fb9028c" containerName="nova-scheduler-scheduler" Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.468774 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.472195 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.480659 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6rlj8"] Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.490663 4786 scope.go:117] "RemoveContainer" containerID="00ea23eae94cf09041f82386e44aa3f01e572f94376c492a4be24cfb55ba3171" Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.494974 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.577750 4786 scope.go:117] "RemoveContainer" containerID="a5ae22b1f4480aedce3f179d6c55bb79c8a5667c922d52ef33bee8567318257b" Dec 09 09:10:03 crc kubenswrapper[4786]: E1209 09:10:03.580942 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5ae22b1f4480aedce3f179d6c55bb79c8a5667c922d52ef33bee8567318257b\": container with ID starting with a5ae22b1f4480aedce3f179d6c55bb79c8a5667c922d52ef33bee8567318257b not found: ID does not exist" containerID="a5ae22b1f4480aedce3f179d6c55bb79c8a5667c922d52ef33bee8567318257b" Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.581011 4786 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a5ae22b1f4480aedce3f179d6c55bb79c8a5667c922d52ef33bee8567318257b"} err="failed to get container status \"a5ae22b1f4480aedce3f179d6c55bb79c8a5667c922d52ef33bee8567318257b\": rpc error: code = NotFound desc = could not find container \"a5ae22b1f4480aedce3f179d6c55bb79c8a5667c922d52ef33bee8567318257b\": container with ID starting with a5ae22b1f4480aedce3f179d6c55bb79c8a5667c922d52ef33bee8567318257b not found: ID does not exist" Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.581043 4786 scope.go:117] "RemoveContainer" containerID="e00b4e5352a9d344c3f82710ac2388d82cb0700348ff0257e4421ea533fb16f8" Dec 09 09:10:03 crc kubenswrapper[4786]: E1209 09:10:03.581405 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e00b4e5352a9d344c3f82710ac2388d82cb0700348ff0257e4421ea533fb16f8\": container with ID starting with e00b4e5352a9d344c3f82710ac2388d82cb0700348ff0257e4421ea533fb16f8 not found: ID does not exist" containerID="e00b4e5352a9d344c3f82710ac2388d82cb0700348ff0257e4421ea533fb16f8" Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.581462 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e00b4e5352a9d344c3f82710ac2388d82cb0700348ff0257e4421ea533fb16f8"} err="failed to get container status \"e00b4e5352a9d344c3f82710ac2388d82cb0700348ff0257e4421ea533fb16f8\": rpc error: code = NotFound desc = could not find container \"e00b4e5352a9d344c3f82710ac2388d82cb0700348ff0257e4421ea533fb16f8\": container with ID starting with e00b4e5352a9d344c3f82710ac2388d82cb0700348ff0257e4421ea533fb16f8 not found: ID does not exist" Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.581484 4786 scope.go:117] "RemoveContainer" containerID="00ea23eae94cf09041f82386e44aa3f01e572f94376c492a4be24cfb55ba3171" Dec 09 09:10:03 crc kubenswrapper[4786]: E1209 09:10:03.581872 4786 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"00ea23eae94cf09041f82386e44aa3f01e572f94376c492a4be24cfb55ba3171\": container with ID starting with 00ea23eae94cf09041f82386e44aa3f01e572f94376c492a4be24cfb55ba3171 not found: ID does not exist" containerID="00ea23eae94cf09041f82386e44aa3f01e572f94376c492a4be24cfb55ba3171" Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.581922 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00ea23eae94cf09041f82386e44aa3f01e572f94376c492a4be24cfb55ba3171"} err="failed to get container status \"00ea23eae94cf09041f82386e44aa3f01e572f94376c492a4be24cfb55ba3171\": rpc error: code = NotFound desc = could not find container \"00ea23eae94cf09041f82386e44aa3f01e572f94376c492a4be24cfb55ba3171\": container with ID starting with 00ea23eae94cf09041f82386e44aa3f01e572f94376c492a4be24cfb55ba3171 not found: ID does not exist" Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.597911 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bbgb\" (UniqueName: \"kubernetes.io/projected/3c7e4255-6f20-4911-b8d9-862fb7b801da-kube-api-access-9bbgb\") pod \"nova-scheduler-0\" (UID: \"3c7e4255-6f20-4911-b8d9-862fb7b801da\") " pod="openstack/nova-scheduler-0" Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.597982 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c7e4255-6f20-4911-b8d9-862fb7b801da-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3c7e4255-6f20-4911-b8d9-862fb7b801da\") " pod="openstack/nova-scheduler-0" Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.598094 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c7e4255-6f20-4911-b8d9-862fb7b801da-config-data\") pod 
\"nova-scheduler-0\" (UID: \"3c7e4255-6f20-4911-b8d9-862fb7b801da\") " pod="openstack/nova-scheduler-0" Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.701889 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bbgb\" (UniqueName: \"kubernetes.io/projected/3c7e4255-6f20-4911-b8d9-862fb7b801da-kube-api-access-9bbgb\") pod \"nova-scheduler-0\" (UID: \"3c7e4255-6f20-4911-b8d9-862fb7b801da\") " pod="openstack/nova-scheduler-0" Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.702258 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c7e4255-6f20-4911-b8d9-862fb7b801da-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3c7e4255-6f20-4911-b8d9-862fb7b801da\") " pod="openstack/nova-scheduler-0" Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.702386 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c7e4255-6f20-4911-b8d9-862fb7b801da-config-data\") pod \"nova-scheduler-0\" (UID: \"3c7e4255-6f20-4911-b8d9-862fb7b801da\") " pod="openstack/nova-scheduler-0" Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.720369 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c7e4255-6f20-4911-b8d9-862fb7b801da-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3c7e4255-6f20-4911-b8d9-862fb7b801da\") " pod="openstack/nova-scheduler-0" Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.731175 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c7e4255-6f20-4911-b8d9-862fb7b801da-config-data\") pod \"nova-scheduler-0\" (UID: \"3c7e4255-6f20-4911-b8d9-862fb7b801da\") " pod="openstack/nova-scheduler-0" Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.749529 4786 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bbgb\" (UniqueName: \"kubernetes.io/projected/3c7e4255-6f20-4911-b8d9-862fb7b801da-kube-api-access-9bbgb\") pod \"nova-scheduler-0\" (UID: \"3c7e4255-6f20-4911-b8d9-862fb7b801da\") " pod="openstack/nova-scheduler-0" Dec 09 09:10:03 crc kubenswrapper[4786]: I1209 09:10:03.909724 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 09 09:10:04 crc kubenswrapper[4786]: W1209 09:10:04.437307 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c7e4255_6f20_4911_b8d9_862fb7b801da.slice/crio-b5383b5a543c07e378626af502c205205df3eb6b98fbbd4461a7d60406629e43 WatchSource:0}: Error finding container b5383b5a543c07e378626af502c205205df3eb6b98fbbd4461a7d60406629e43: Status 404 returned error can't find the container with id b5383b5a543c07e378626af502c205205df3eb6b98fbbd4461a7d60406629e43 Dec 09 09:10:04 crc kubenswrapper[4786]: I1209 09:10:04.438367 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 09 09:10:05 crc kubenswrapper[4786]: I1209 09:10:05.200699 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="464ee67d-1d53-4a13-a8d7-52c1435dad62" path="/var/lib/kubelet/pods/464ee67d-1d53-4a13-a8d7-52c1435dad62/volumes" Dec 09 09:10:05 crc kubenswrapper[4786]: I1209 09:10:05.201808 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96c51dc9-ba57-4fff-884d-05b63fb9028c" path="/var/lib/kubelet/pods/96c51dc9-ba57-4fff-884d-05b63fb9028c/volumes" Dec 09 09:10:05 crc kubenswrapper[4786]: I1209 09:10:05.403514 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3c7e4255-6f20-4911-b8d9-862fb7b801da","Type":"ContainerStarted","Data":"3ee12f7d76d7c82cddc31ceb618d77dcd67f9d8da7f572aa2e58bd7ba46f7119"} Dec 09 09:10:05 crc 
kubenswrapper[4786]: I1209 09:10:05.403576 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3c7e4255-6f20-4911-b8d9-862fb7b801da","Type":"ContainerStarted","Data":"b5383b5a543c07e378626af502c205205df3eb6b98fbbd4461a7d60406629e43"} Dec 09 09:10:05 crc kubenswrapper[4786]: I1209 09:10:05.428022 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.428001135 podStartE2EDuration="2.428001135s" podCreationTimestamp="2025-12-09 09:10:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:10:05.417897944 +0000 UTC m=+1571.301519170" watchObservedRunningTime="2025-12-09 09:10:05.428001135 +0000 UTC m=+1571.311622361" Dec 09 09:10:05 crc kubenswrapper[4786]: I1209 09:10:05.832190 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 09 09:10:05 crc kubenswrapper[4786]: I1209 09:10:05.832259 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 09 09:10:08 crc kubenswrapper[4786]: I1209 09:10:08.910436 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 09 09:10:10 crc kubenswrapper[4786]: I1209 09:10:10.832315 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 09 09:10:10 crc kubenswrapper[4786]: I1209 09:10:10.834055 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 09 09:10:10 crc kubenswrapper[4786]: I1209 09:10:10.986536 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 09 09:10:10 crc kubenswrapper[4786]: I1209 09:10:10.986603 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/nova-api-0" Dec 09 09:10:11 crc kubenswrapper[4786]: I1209 09:10:11.849600 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8a5c44e4-7244-46f8-983b-de6cd923bd74" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.224:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 09 09:10:11 crc kubenswrapper[4786]: I1209 09:10:11.849581 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="8a5c44e4-7244-46f8-983b-de6cd923bd74" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.224:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 09 09:10:12 crc kubenswrapper[4786]: I1209 09:10:12.005620 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="052f7fa7-4f28-421a-a1a4-d262f5d8c2de" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.225:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 09 09:10:12 crc kubenswrapper[4786]: I1209 09:10:12.005636 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="052f7fa7-4f28-421a-a1a4-d262f5d8c2de" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.225:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 09 09:10:13 crc kubenswrapper[4786]: I1209 09:10:13.910537 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 09 09:10:13 crc kubenswrapper[4786]: I1209 09:10:13.968117 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 09 09:10:14 crc kubenswrapper[4786]: I1209 09:10:14.466021 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/ceilometer-0" Dec 09 09:10:14 crc kubenswrapper[4786]: I1209 09:10:14.608305 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 09 09:10:20 crc kubenswrapper[4786]: I1209 09:10:20.838779 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 09 09:10:20 crc kubenswrapper[4786]: I1209 09:10:20.839536 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 09 09:10:20 crc kubenswrapper[4786]: I1209 09:10:20.844588 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 09 09:10:20 crc kubenswrapper[4786]: I1209 09:10:20.846539 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 09 09:10:20 crc kubenswrapper[4786]: I1209 09:10:20.995999 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 09 09:10:20 crc kubenswrapper[4786]: I1209 09:10:20.997020 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 09 09:10:20 crc kubenswrapper[4786]: I1209 09:10:20.997084 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 09 09:10:21 crc kubenswrapper[4786]: I1209 09:10:21.008900 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 09 09:10:21 crc kubenswrapper[4786]: I1209 09:10:21.646812 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 09 09:10:21 crc kubenswrapper[4786]: I1209 09:10:21.657671 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 09 09:10:30 crc kubenswrapper[4786]: I1209 09:10:30.698058 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/rabbitmq-server-0"] Dec 09 09:10:31 crc kubenswrapper[4786]: I1209 09:10:31.748561 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 09 09:10:34 crc kubenswrapper[4786]: I1209 09:10:34.920326 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="b0a91d0e-2d71-4fdc-8d68-953a12dc7f59" containerName="rabbitmq" containerID="cri-o://5c6885e3c95dc84e7dc0bf6c439805aa49aed326abee7f1899d4b79f619ace9d" gracePeriod=604796 Dec 09 09:10:35 crc kubenswrapper[4786]: I1209 09:10:35.672911 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="01efacfa-e002-4e0d-aa6b-91217baa22ca" containerName="rabbitmq" containerID="cri-o://4366b1674ffd020712dba1a21ba84648563b9bbd92a30327e54ce48d6a0fef93" gracePeriod=604797 Dec 09 09:10:36 crc kubenswrapper[4786]: I1209 09:10:36.662145 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 09 09:10:36 crc kubenswrapper[4786]: I1209 09:10:36.770296 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-rabbitmq-erlang-cookie\") pod \"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59\" (UID: \"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59\") " Dec 09 09:10:36 crc kubenswrapper[4786]: I1209 09:10:36.770394 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-rabbitmq-tls\") pod \"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59\" (UID: \"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59\") " Dec 09 09:10:36 crc kubenswrapper[4786]: I1209 09:10:36.770460 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-config-data\") pod \"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59\" (UID: \"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59\") " Dec 09 09:10:36 crc kubenswrapper[4786]: I1209 09:10:36.770598 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-plugins-conf\") pod \"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59\" (UID: \"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59\") " Dec 09 09:10:36 crc kubenswrapper[4786]: I1209 09:10:36.770642 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-rabbitmq-plugins\") pod \"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59\" (UID: \"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59\") " Dec 09 09:10:36 crc kubenswrapper[4786]: I1209 09:10:36.770688 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-9r7v7\" (UniqueName: \"kubernetes.io/projected/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-kube-api-access-9r7v7\") pod \"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59\" (UID: \"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59\") " Dec 09 09:10:36 crc kubenswrapper[4786]: I1209 09:10:36.770775 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-erlang-cookie-secret\") pod \"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59\" (UID: \"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59\") " Dec 09 09:10:36 crc kubenswrapper[4786]: I1209 09:10:36.770844 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-server-conf\") pod \"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59\" (UID: \"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59\") " Dec 09 09:10:36 crc kubenswrapper[4786]: I1209 09:10:36.770883 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-pod-info\") pod \"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59\" (UID: \"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59\") " Dec 09 09:10:36 crc kubenswrapper[4786]: I1209 09:10:36.770912 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-rabbitmq-confd\") pod \"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59\" (UID: \"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59\") " Dec 09 09:10:36 crc kubenswrapper[4786]: I1209 09:10:36.770949 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59\" (UID: \"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59\") " Dec 09 09:10:36 crc kubenswrapper[4786]: 
I1209 09:10:36.772000 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "b0a91d0e-2d71-4fdc-8d68-953a12dc7f59" (UID: "b0a91d0e-2d71-4fdc-8d68-953a12dc7f59"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:10:36 crc kubenswrapper[4786]: I1209 09:10:36.775999 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "b0a91d0e-2d71-4fdc-8d68-953a12dc7f59" (UID: "b0a91d0e-2d71-4fdc-8d68-953a12dc7f59"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:10:36 crc kubenswrapper[4786]: I1209 09:10:36.777505 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "b0a91d0e-2d71-4fdc-8d68-953a12dc7f59" (UID: "b0a91d0e-2d71-4fdc-8d68-953a12dc7f59"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:10:36 crc kubenswrapper[4786]: I1209 09:10:36.786853 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "b0a91d0e-2d71-4fdc-8d68-953a12dc7f59" (UID: "b0a91d0e-2d71-4fdc-8d68-953a12dc7f59"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:10:36 crc kubenswrapper[4786]: I1209 09:10:36.809747 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "b0a91d0e-2d71-4fdc-8d68-953a12dc7f59" (UID: "b0a91d0e-2d71-4fdc-8d68-953a12dc7f59"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:10:36 crc kubenswrapper[4786]: I1209 09:10:36.821041 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "persistence") pod "b0a91d0e-2d71-4fdc-8d68-953a12dc7f59" (UID: "b0a91d0e-2d71-4fdc-8d68-953a12dc7f59"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 09 09:10:36 crc kubenswrapper[4786]: I1209 09:10:36.840772 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-kube-api-access-9r7v7" (OuterVolumeSpecName: "kube-api-access-9r7v7") pod "b0a91d0e-2d71-4fdc-8d68-953a12dc7f59" (UID: "b0a91d0e-2d71-4fdc-8d68-953a12dc7f59"). InnerVolumeSpecName "kube-api-access-9r7v7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:10:36 crc kubenswrapper[4786]: I1209 09:10:36.878637 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-pod-info" (OuterVolumeSpecName: "pod-info") pod "b0a91d0e-2d71-4fdc-8d68-953a12dc7f59" (UID: "b0a91d0e-2d71-4fdc-8d68-953a12dc7f59"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 09 09:10:36 crc kubenswrapper[4786]: I1209 09:10:36.880251 4786 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Dec 09 09:10:36 crc kubenswrapper[4786]: I1209 09:10:36.880270 4786 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 09 09:10:36 crc kubenswrapper[4786]: I1209 09:10:36.880281 4786 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 09 09:10:36 crc kubenswrapper[4786]: I1209 09:10:36.880290 4786 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 09 09:10:36 crc kubenswrapper[4786]: I1209 09:10:36.880298 4786 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 09 09:10:36 crc kubenswrapper[4786]: I1209 09:10:36.880307 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9r7v7\" (UniqueName: \"kubernetes.io/projected/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-kube-api-access-9r7v7\") on node \"crc\" DevicePath \"\"" Dec 09 09:10:36 crc kubenswrapper[4786]: I1209 09:10:36.880315 4786 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 09 09:10:36 crc kubenswrapper[4786]: 
I1209 09:10:36.880324 4786 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-pod-info\") on node \"crc\" DevicePath \"\"" Dec 09 09:10:36 crc kubenswrapper[4786]: I1209 09:10:36.905241 4786 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Dec 09 09:10:36 crc kubenswrapper[4786]: I1209 09:10:36.923702 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-server-conf" (OuterVolumeSpecName: "server-conf") pod "b0a91d0e-2d71-4fdc-8d68-953a12dc7f59" (UID: "b0a91d0e-2d71-4fdc-8d68-953a12dc7f59"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:10:36 crc kubenswrapper[4786]: I1209 09:10:36.932982 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="01efacfa-e002-4e0d-aa6b-91217baa22ca" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.107:5671: connect: connection refused" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.016381 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-config-data" (OuterVolumeSpecName: "config-data") pod "b0a91d0e-2d71-4fdc-8d68-953a12dc7f59" (UID: "b0a91d0e-2d71-4fdc-8d68-953a12dc7f59"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.041245 4786 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-server-conf\") on node \"crc\" DevicePath \"\"" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.041311 4786 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.041333 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.301004 4786 generic.go:334] "Generic (PLEG): container finished" podID="b0a91d0e-2d71-4fdc-8d68-953a12dc7f59" containerID="5c6885e3c95dc84e7dc0bf6c439805aa49aed326abee7f1899d4b79f619ace9d" exitCode=0 Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.301150 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.375288 4786 generic.go:334] "Generic (PLEG): container finished" podID="01efacfa-e002-4e0d-aa6b-91217baa22ca" containerID="4366b1674ffd020712dba1a21ba84648563b9bbd92a30327e54ce48d6a0fef93" exitCode=0 Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.375801 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "b0a91d0e-2d71-4fdc-8d68-953a12dc7f59" (UID: "b0a91d0e-2d71-4fdc-8d68-953a12dc7f59"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.428869 4786 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.469613 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59","Type":"ContainerDied","Data":"5c6885e3c95dc84e7dc0bf6c439805aa49aed326abee7f1899d4b79f619ace9d"} Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.469679 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"b0a91d0e-2d71-4fdc-8d68-953a12dc7f59","Type":"ContainerDied","Data":"7c3cb46617b32de57a9bec1cd0b29a29b7a49e2e7057e77511d7d4466d165e2d"} Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.469694 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"01efacfa-e002-4e0d-aa6b-91217baa22ca","Type":"ContainerDied","Data":"4366b1674ffd020712dba1a21ba84648563b9bbd92a30327e54ce48d6a0fef93"} Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.469720 4786 scope.go:117] "RemoveContainer" containerID="5c6885e3c95dc84e7dc0bf6c439805aa49aed326abee7f1899d4b79f619ace9d" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.525357 4786 scope.go:117] "RemoveContainer" containerID="0183621f323d271a1e61e5f6fcb16986cb78a47043bbcb9cb6b8ac0a03aa11b0" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.589328 4786 scope.go:117] "RemoveContainer" containerID="5c6885e3c95dc84e7dc0bf6c439805aa49aed326abee7f1899d4b79f619ace9d" Dec 09 09:10:37 crc kubenswrapper[4786]: E1209 09:10:37.589823 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"5c6885e3c95dc84e7dc0bf6c439805aa49aed326abee7f1899d4b79f619ace9d\": container with ID starting with 5c6885e3c95dc84e7dc0bf6c439805aa49aed326abee7f1899d4b79f619ace9d not found: ID does not exist" containerID="5c6885e3c95dc84e7dc0bf6c439805aa49aed326abee7f1899d4b79f619ace9d" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.589856 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c6885e3c95dc84e7dc0bf6c439805aa49aed326abee7f1899d4b79f619ace9d"} err="failed to get container status \"5c6885e3c95dc84e7dc0bf6c439805aa49aed326abee7f1899d4b79f619ace9d\": rpc error: code = NotFound desc = could not find container \"5c6885e3c95dc84e7dc0bf6c439805aa49aed326abee7f1899d4b79f619ace9d\": container with ID starting with 5c6885e3c95dc84e7dc0bf6c439805aa49aed326abee7f1899d4b79f619ace9d not found: ID does not exist" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.589884 4786 scope.go:117] "RemoveContainer" containerID="0183621f323d271a1e61e5f6fcb16986cb78a47043bbcb9cb6b8ac0a03aa11b0" Dec 09 09:10:37 crc kubenswrapper[4786]: E1209 09:10:37.594033 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0183621f323d271a1e61e5f6fcb16986cb78a47043bbcb9cb6b8ac0a03aa11b0\": container with ID starting with 0183621f323d271a1e61e5f6fcb16986cb78a47043bbcb9cb6b8ac0a03aa11b0 not found: ID does not exist" containerID="0183621f323d271a1e61e5f6fcb16986cb78a47043bbcb9cb6b8ac0a03aa11b0" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.594066 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0183621f323d271a1e61e5f6fcb16986cb78a47043bbcb9cb6b8ac0a03aa11b0"} err="failed to get container status \"0183621f323d271a1e61e5f6fcb16986cb78a47043bbcb9cb6b8ac0a03aa11b0\": rpc error: code = NotFound desc = could not find container \"0183621f323d271a1e61e5f6fcb16986cb78a47043bbcb9cb6b8ac0a03aa11b0\": container with ID 
starting with 0183621f323d271a1e61e5f6fcb16986cb78a47043bbcb9cb6b8ac0a03aa11b0 not found: ID does not exist" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.669491 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.694337 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.719984 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.732865 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 09 09:10:37 crc kubenswrapper[4786]: E1209 09:10:37.733363 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01efacfa-e002-4e0d-aa6b-91217baa22ca" containerName="setup-container" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.733375 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="01efacfa-e002-4e0d-aa6b-91217baa22ca" containerName="setup-container" Dec 09 09:10:37 crc kubenswrapper[4786]: E1209 09:10:37.733386 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0a91d0e-2d71-4fdc-8d68-953a12dc7f59" containerName="rabbitmq" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.733394 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0a91d0e-2d71-4fdc-8d68-953a12dc7f59" containerName="rabbitmq" Dec 09 09:10:37 crc kubenswrapper[4786]: E1209 09:10:37.733413 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01efacfa-e002-4e0d-aa6b-91217baa22ca" containerName="rabbitmq" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.733420 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="01efacfa-e002-4e0d-aa6b-91217baa22ca" containerName="rabbitmq" Dec 09 09:10:37 crc kubenswrapper[4786]: E1209 09:10:37.733461 4786 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0a91d0e-2d71-4fdc-8d68-953a12dc7f59" containerName="setup-container" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.733468 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0a91d0e-2d71-4fdc-8d68-953a12dc7f59" containerName="setup-container" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.733687 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0a91d0e-2d71-4fdc-8d68-953a12dc7f59" containerName="rabbitmq" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.733705 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="01efacfa-e002-4e0d-aa6b-91217baa22ca" containerName="rabbitmq" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.735664 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.745806 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.746101 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-kd77g" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.752362 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.752600 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.752736 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.752960 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.753629 4786 reflector.go:368] 
Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.795415 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.853486 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/01efacfa-e002-4e0d-aa6b-91217baa22ca-server-conf\") pod \"01efacfa-e002-4e0d-aa6b-91217baa22ca\" (UID: \"01efacfa-e002-4e0d-aa6b-91217baa22ca\") " Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.853648 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/01efacfa-e002-4e0d-aa6b-91217baa22ca-erlang-cookie-secret\") pod \"01efacfa-e002-4e0d-aa6b-91217baa22ca\" (UID: \"01efacfa-e002-4e0d-aa6b-91217baa22ca\") " Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.853698 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/01efacfa-e002-4e0d-aa6b-91217baa22ca-plugins-conf\") pod \"01efacfa-e002-4e0d-aa6b-91217baa22ca\" (UID: \"01efacfa-e002-4e0d-aa6b-91217baa22ca\") " Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.853771 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/01efacfa-e002-4e0d-aa6b-91217baa22ca-config-data\") pod \"01efacfa-e002-4e0d-aa6b-91217baa22ca\" (UID: \"01efacfa-e002-4e0d-aa6b-91217baa22ca\") " Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.853895 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/01efacfa-e002-4e0d-aa6b-91217baa22ca-rabbitmq-tls\") pod \"01efacfa-e002-4e0d-aa6b-91217baa22ca\" (UID: \"01efacfa-e002-4e0d-aa6b-91217baa22ca\") " Dec 09 
09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.853923 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"01efacfa-e002-4e0d-aa6b-91217baa22ca\" (UID: \"01efacfa-e002-4e0d-aa6b-91217baa22ca\") " Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.853994 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/01efacfa-e002-4e0d-aa6b-91217baa22ca-rabbitmq-plugins\") pod \"01efacfa-e002-4e0d-aa6b-91217baa22ca\" (UID: \"01efacfa-e002-4e0d-aa6b-91217baa22ca\") " Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.854033 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbptq\" (UniqueName: \"kubernetes.io/projected/01efacfa-e002-4e0d-aa6b-91217baa22ca-kube-api-access-cbptq\") pod \"01efacfa-e002-4e0d-aa6b-91217baa22ca\" (UID: \"01efacfa-e002-4e0d-aa6b-91217baa22ca\") " Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.854101 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/01efacfa-e002-4e0d-aa6b-91217baa22ca-pod-info\") pod \"01efacfa-e002-4e0d-aa6b-91217baa22ca\" (UID: \"01efacfa-e002-4e0d-aa6b-91217baa22ca\") " Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.854149 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/01efacfa-e002-4e0d-aa6b-91217baa22ca-rabbitmq-erlang-cookie\") pod \"01efacfa-e002-4e0d-aa6b-91217baa22ca\" (UID: \"01efacfa-e002-4e0d-aa6b-91217baa22ca\") " Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.854225 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/01efacfa-e002-4e0d-aa6b-91217baa22ca-rabbitmq-confd\") pod \"01efacfa-e002-4e0d-aa6b-91217baa22ca\" (UID: \"01efacfa-e002-4e0d-aa6b-91217baa22ca\") " Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.854797 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/72675e09-5efb-4dc9-bc17-25b93ecf7537-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"72675e09-5efb-4dc9-bc17-25b93ecf7537\") " pod="openstack/rabbitmq-server-0" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.854839 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/72675e09-5efb-4dc9-bc17-25b93ecf7537-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"72675e09-5efb-4dc9-bc17-25b93ecf7537\") " pod="openstack/rabbitmq-server-0" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.854865 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/72675e09-5efb-4dc9-bc17-25b93ecf7537-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"72675e09-5efb-4dc9-bc17-25b93ecf7537\") " pod="openstack/rabbitmq-server-0" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.854897 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/72675e09-5efb-4dc9-bc17-25b93ecf7537-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"72675e09-5efb-4dc9-bc17-25b93ecf7537\") " pod="openstack/rabbitmq-server-0" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.854961 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/72675e09-5efb-4dc9-bc17-25b93ecf7537-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"72675e09-5efb-4dc9-bc17-25b93ecf7537\") " pod="openstack/rabbitmq-server-0" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.855032 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/72675e09-5efb-4dc9-bc17-25b93ecf7537-server-conf\") pod \"rabbitmq-server-0\" (UID: \"72675e09-5efb-4dc9-bc17-25b93ecf7537\") " pod="openstack/rabbitmq-server-0" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.855076 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/72675e09-5efb-4dc9-bc17-25b93ecf7537-config-data\") pod \"rabbitmq-server-0\" (UID: \"72675e09-5efb-4dc9-bc17-25b93ecf7537\") " pod="openstack/rabbitmq-server-0" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.855101 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpzfp\" (UniqueName: \"kubernetes.io/projected/72675e09-5efb-4dc9-bc17-25b93ecf7537-kube-api-access-hpzfp\") pod \"rabbitmq-server-0\" (UID: \"72675e09-5efb-4dc9-bc17-25b93ecf7537\") " pod="openstack/rabbitmq-server-0" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.855150 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/72675e09-5efb-4dc9-bc17-25b93ecf7537-pod-info\") pod \"rabbitmq-server-0\" (UID: \"72675e09-5efb-4dc9-bc17-25b93ecf7537\") " pod="openstack/rabbitmq-server-0" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.855196 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/72675e09-5efb-4dc9-bc17-25b93ecf7537-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"72675e09-5efb-4dc9-bc17-25b93ecf7537\") " pod="openstack/rabbitmq-server-0" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.855259 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"72675e09-5efb-4dc9-bc17-25b93ecf7537\") " pod="openstack/rabbitmq-server-0" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.861714 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01efacfa-e002-4e0d-aa6b-91217baa22ca-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "01efacfa-e002-4e0d-aa6b-91217baa22ca" (UID: "01efacfa-e002-4e0d-aa6b-91217baa22ca"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.862127 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01efacfa-e002-4e0d-aa6b-91217baa22ca-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "01efacfa-e002-4e0d-aa6b-91217baa22ca" (UID: "01efacfa-e002-4e0d-aa6b-91217baa22ca"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.863010 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01efacfa-e002-4e0d-aa6b-91217baa22ca-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "01efacfa-e002-4e0d-aa6b-91217baa22ca" (UID: "01efacfa-e002-4e0d-aa6b-91217baa22ca"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.865710 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/01efacfa-e002-4e0d-aa6b-91217baa22ca-pod-info" (OuterVolumeSpecName: "pod-info") pod "01efacfa-e002-4e0d-aa6b-91217baa22ca" (UID: "01efacfa-e002-4e0d-aa6b-91217baa22ca"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.869712 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01efacfa-e002-4e0d-aa6b-91217baa22ca-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "01efacfa-e002-4e0d-aa6b-91217baa22ca" (UID: "01efacfa-e002-4e0d-aa6b-91217baa22ca"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.873120 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01efacfa-e002-4e0d-aa6b-91217baa22ca-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "01efacfa-e002-4e0d-aa6b-91217baa22ca" (UID: "01efacfa-e002-4e0d-aa6b-91217baa22ca"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.873795 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01efacfa-e002-4e0d-aa6b-91217baa22ca-kube-api-access-cbptq" (OuterVolumeSpecName: "kube-api-access-cbptq") pod "01efacfa-e002-4e0d-aa6b-91217baa22ca" (UID: "01efacfa-e002-4e0d-aa6b-91217baa22ca"). InnerVolumeSpecName "kube-api-access-cbptq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.881136 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "01efacfa-e002-4e0d-aa6b-91217baa22ca" (UID: "01efacfa-e002-4e0d-aa6b-91217baa22ca"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.905603 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01efacfa-e002-4e0d-aa6b-91217baa22ca-config-data" (OuterVolumeSpecName: "config-data") pod "01efacfa-e002-4e0d-aa6b-91217baa22ca" (UID: "01efacfa-e002-4e0d-aa6b-91217baa22ca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.946047 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01efacfa-e002-4e0d-aa6b-91217baa22ca-server-conf" (OuterVolumeSpecName: "server-conf") pod "01efacfa-e002-4e0d-aa6b-91217baa22ca" (UID: "01efacfa-e002-4e0d-aa6b-91217baa22ca"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.957405 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/72675e09-5efb-4dc9-bc17-25b93ecf7537-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"72675e09-5efb-4dc9-bc17-25b93ecf7537\") " pod="openstack/rabbitmq-server-0" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.957683 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/72675e09-5efb-4dc9-bc17-25b93ecf7537-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"72675e09-5efb-4dc9-bc17-25b93ecf7537\") " pod="openstack/rabbitmq-server-0" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.957712 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/72675e09-5efb-4dc9-bc17-25b93ecf7537-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"72675e09-5efb-4dc9-bc17-25b93ecf7537\") " pod="openstack/rabbitmq-server-0" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.957739 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/72675e09-5efb-4dc9-bc17-25b93ecf7537-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"72675e09-5efb-4dc9-bc17-25b93ecf7537\") " pod="openstack/rabbitmq-server-0" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.957794 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/72675e09-5efb-4dc9-bc17-25b93ecf7537-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"72675e09-5efb-4dc9-bc17-25b93ecf7537\") " pod="openstack/rabbitmq-server-0" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.957856 4786 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/72675e09-5efb-4dc9-bc17-25b93ecf7537-server-conf\") pod \"rabbitmq-server-0\" (UID: \"72675e09-5efb-4dc9-bc17-25b93ecf7537\") " pod="openstack/rabbitmq-server-0" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.957896 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/72675e09-5efb-4dc9-bc17-25b93ecf7537-config-data\") pod \"rabbitmq-server-0\" (UID: \"72675e09-5efb-4dc9-bc17-25b93ecf7537\") " pod="openstack/rabbitmq-server-0" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.957917 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpzfp\" (UniqueName: \"kubernetes.io/projected/72675e09-5efb-4dc9-bc17-25b93ecf7537-kube-api-access-hpzfp\") pod \"rabbitmq-server-0\" (UID: \"72675e09-5efb-4dc9-bc17-25b93ecf7537\") " pod="openstack/rabbitmq-server-0" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.957964 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/72675e09-5efb-4dc9-bc17-25b93ecf7537-pod-info\") pod \"rabbitmq-server-0\" (UID: \"72675e09-5efb-4dc9-bc17-25b93ecf7537\") " pod="openstack/rabbitmq-server-0" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.958007 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/72675e09-5efb-4dc9-bc17-25b93ecf7537-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"72675e09-5efb-4dc9-bc17-25b93ecf7537\") " pod="openstack/rabbitmq-server-0" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.958052 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod 
\"rabbitmq-server-0\" (UID: \"72675e09-5efb-4dc9-bc17-25b93ecf7537\") " pod="openstack/rabbitmq-server-0" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.958195 4786 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/01efacfa-e002-4e0d-aa6b-91217baa22ca-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.958209 4786 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/01efacfa-e002-4e0d-aa6b-91217baa22ca-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.958668 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/01efacfa-e002-4e0d-aa6b-91217baa22ca-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.958683 4786 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/01efacfa-e002-4e0d-aa6b-91217baa22ca-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.958739 4786 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.958756 4786 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/01efacfa-e002-4e0d-aa6b-91217baa22ca-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.958773 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbptq\" (UniqueName: \"kubernetes.io/projected/01efacfa-e002-4e0d-aa6b-91217baa22ca-kube-api-access-cbptq\") on node \"crc\" DevicePath \"\"" Dec 09 09:10:37 crc 
kubenswrapper[4786]: I1209 09:10:37.958815 4786 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/01efacfa-e002-4e0d-aa6b-91217baa22ca-pod-info\") on node \"crc\" DevicePath \"\"" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.958827 4786 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/01efacfa-e002-4e0d-aa6b-91217baa22ca-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.958839 4786 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/01efacfa-e002-4e0d-aa6b-91217baa22ca-server-conf\") on node \"crc\" DevicePath \"\"" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.961041 4786 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"72675e09-5efb-4dc9-bc17-25b93ecf7537\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-server-0" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.961888 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/72675e09-5efb-4dc9-bc17-25b93ecf7537-config-data\") pod \"rabbitmq-server-0\" (UID: \"72675e09-5efb-4dc9-bc17-25b93ecf7537\") " pod="openstack/rabbitmq-server-0" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.962629 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/72675e09-5efb-4dc9-bc17-25b93ecf7537-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"72675e09-5efb-4dc9-bc17-25b93ecf7537\") " pod="openstack/rabbitmq-server-0" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.964556 4786 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/72675e09-5efb-4dc9-bc17-25b93ecf7537-server-conf\") pod \"rabbitmq-server-0\" (UID: \"72675e09-5efb-4dc9-bc17-25b93ecf7537\") " pod="openstack/rabbitmq-server-0" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.965292 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/72675e09-5efb-4dc9-bc17-25b93ecf7537-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"72675e09-5efb-4dc9-bc17-25b93ecf7537\") " pod="openstack/rabbitmq-server-0" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.965835 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/72675e09-5efb-4dc9-bc17-25b93ecf7537-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"72675e09-5efb-4dc9-bc17-25b93ecf7537\") " pod="openstack/rabbitmq-server-0" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.970539 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/72675e09-5efb-4dc9-bc17-25b93ecf7537-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"72675e09-5efb-4dc9-bc17-25b93ecf7537\") " pod="openstack/rabbitmq-server-0" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.974446 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/72675e09-5efb-4dc9-bc17-25b93ecf7537-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"72675e09-5efb-4dc9-bc17-25b93ecf7537\") " pod="openstack/rabbitmq-server-0" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.977295 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/72675e09-5efb-4dc9-bc17-25b93ecf7537-pod-info\") pod \"rabbitmq-server-0\" (UID: 
\"72675e09-5efb-4dc9-bc17-25b93ecf7537\") " pod="openstack/rabbitmq-server-0" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.981144 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/72675e09-5efb-4dc9-bc17-25b93ecf7537-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"72675e09-5efb-4dc9-bc17-25b93ecf7537\") " pod="openstack/rabbitmq-server-0" Dec 09 09:10:37 crc kubenswrapper[4786]: I1209 09:10:37.986522 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpzfp\" (UniqueName: \"kubernetes.io/projected/72675e09-5efb-4dc9-bc17-25b93ecf7537-kube-api-access-hpzfp\") pod \"rabbitmq-server-0\" (UID: \"72675e09-5efb-4dc9-bc17-25b93ecf7537\") " pod="openstack/rabbitmq-server-0" Dec 09 09:10:38 crc kubenswrapper[4786]: I1209 09:10:38.034187 4786 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Dec 09 09:10:38 crc kubenswrapper[4786]: I1209 09:10:38.039361 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-server-0\" (UID: \"72675e09-5efb-4dc9-bc17-25b93ecf7537\") " pod="openstack/rabbitmq-server-0" Dec 09 09:10:38 crc kubenswrapper[4786]: I1209 09:10:38.040802 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01efacfa-e002-4e0d-aa6b-91217baa22ca-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "01efacfa-e002-4e0d-aa6b-91217baa22ca" (UID: "01efacfa-e002-4e0d-aa6b-91217baa22ca"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:10:38 crc kubenswrapper[4786]: I1209 09:10:38.061359 4786 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/01efacfa-e002-4e0d-aa6b-91217baa22ca-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 09 09:10:38 crc kubenswrapper[4786]: I1209 09:10:38.061409 4786 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Dec 09 09:10:38 crc kubenswrapper[4786]: I1209 09:10:38.076806 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 09 09:10:38 crc kubenswrapper[4786]: I1209 09:10:38.414561 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"01efacfa-e002-4e0d-aa6b-91217baa22ca","Type":"ContainerDied","Data":"8186ac8d4f82381ea4ed7a940a2044bf5cee8bc12517a4da2b569593792bb6bf"} Dec 09 09:10:38 crc kubenswrapper[4786]: I1209 09:10:38.415080 4786 scope.go:117] "RemoveContainer" containerID="4366b1674ffd020712dba1a21ba84648563b9bbd92a30327e54ce48d6a0fef93" Dec 09 09:10:38 crc kubenswrapper[4786]: I1209 09:10:38.414933 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:10:38 crc kubenswrapper[4786]: I1209 09:10:38.466475 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 09 09:10:38 crc kubenswrapper[4786]: I1209 09:10:38.472580 4786 scope.go:117] "RemoveContainer" containerID="707b5d59710fe68771b66a3ff78dabeef8338b89f750f1e932482d67d5771632" Dec 09 09:10:38 crc kubenswrapper[4786]: I1209 09:10:38.485869 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 09 09:10:38 crc kubenswrapper[4786]: I1209 09:10:38.519015 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 09 09:10:38 crc kubenswrapper[4786]: I1209 09:10:38.533500 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:10:38 crc kubenswrapper[4786]: I1209 09:10:38.535884 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 09 09:10:38 crc kubenswrapper[4786]: I1209 09:10:38.543198 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 09 09:10:38 crc kubenswrapper[4786]: I1209 09:10:38.543911 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 09 09:10:38 crc kubenswrapper[4786]: I1209 09:10:38.544154 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 09 09:10:38 crc kubenswrapper[4786]: I1209 09:10:38.544600 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-b8rz8" Dec 09 09:10:38 crc kubenswrapper[4786]: I1209 09:10:38.544808 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 09 09:10:38 crc kubenswrapper[4786]: I1209 09:10:38.544970 
4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 09 09:10:38 crc kubenswrapper[4786]: I1209 09:10:38.545166 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 09 09:10:38 crc kubenswrapper[4786]: I1209 09:10:38.649476 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 09 09:10:38 crc kubenswrapper[4786]: I1209 09:10:38.680078 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f44944dd-abf7-402f-a3d4-93e17d0a760b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44944dd-abf7-402f-a3d4-93e17d0a760b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:10:38 crc kubenswrapper[4786]: I1209 09:10:38.680282 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mqrc\" (UniqueName: \"kubernetes.io/projected/f44944dd-abf7-402f-a3d4-93e17d0a760b-kube-api-access-4mqrc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44944dd-abf7-402f-a3d4-93e17d0a760b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:10:38 crc kubenswrapper[4786]: I1209 09:10:38.680362 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f44944dd-abf7-402f-a3d4-93e17d0a760b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44944dd-abf7-402f-a3d4-93e17d0a760b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:10:38 crc kubenswrapper[4786]: I1209 09:10:38.680511 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f44944dd-abf7-402f-a3d4-93e17d0a760b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44944dd-abf7-402f-a3d4-93e17d0a760b\") " 
pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:10:38 crc kubenswrapper[4786]: I1209 09:10:38.680567 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f44944dd-abf7-402f-a3d4-93e17d0a760b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44944dd-abf7-402f-a3d4-93e17d0a760b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:10:38 crc kubenswrapper[4786]: I1209 09:10:38.680665 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44944dd-abf7-402f-a3d4-93e17d0a760b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:10:38 crc kubenswrapper[4786]: I1209 09:10:38.680715 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f44944dd-abf7-402f-a3d4-93e17d0a760b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44944dd-abf7-402f-a3d4-93e17d0a760b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:10:38 crc kubenswrapper[4786]: I1209 09:10:38.680759 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f44944dd-abf7-402f-a3d4-93e17d0a760b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44944dd-abf7-402f-a3d4-93e17d0a760b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:10:38 crc kubenswrapper[4786]: I1209 09:10:38.960729 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f44944dd-abf7-402f-a3d4-93e17d0a760b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44944dd-abf7-402f-a3d4-93e17d0a760b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 
09 09:10:38 crc kubenswrapper[4786]: I1209 09:10:38.960882 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f44944dd-abf7-402f-a3d4-93e17d0a760b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44944dd-abf7-402f-a3d4-93e17d0a760b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:10:38 crc kubenswrapper[4786]: I1209 09:10:38.960945 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f44944dd-abf7-402f-a3d4-93e17d0a760b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44944dd-abf7-402f-a3d4-93e17d0a760b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:10:39 crc kubenswrapper[4786]: I1209 09:10:39.066264 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44944dd-abf7-402f-a3d4-93e17d0a760b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:10:39 crc kubenswrapper[4786]: I1209 09:10:39.066620 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f44944dd-abf7-402f-a3d4-93e17d0a760b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44944dd-abf7-402f-a3d4-93e17d0a760b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:10:39 crc kubenswrapper[4786]: I1209 09:10:39.066761 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f44944dd-abf7-402f-a3d4-93e17d0a760b-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44944dd-abf7-402f-a3d4-93e17d0a760b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:10:39 crc kubenswrapper[4786]: I1209 09:10:39.066926 4786 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f44944dd-abf7-402f-a3d4-93e17d0a760b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44944dd-abf7-402f-a3d4-93e17d0a760b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:10:39 crc kubenswrapper[4786]: I1209 09:10:39.067049 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f44944dd-abf7-402f-a3d4-93e17d0a760b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44944dd-abf7-402f-a3d4-93e17d0a760b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:10:39 crc kubenswrapper[4786]: I1209 09:10:39.067145 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f44944dd-abf7-402f-a3d4-93e17d0a760b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44944dd-abf7-402f-a3d4-93e17d0a760b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:10:39 crc kubenswrapper[4786]: I1209 09:10:39.067273 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f44944dd-abf7-402f-a3d4-93e17d0a760b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44944dd-abf7-402f-a3d4-93e17d0a760b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:10:39 crc kubenswrapper[4786]: I1209 09:10:39.067436 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mqrc\" (UniqueName: \"kubernetes.io/projected/f44944dd-abf7-402f-a3d4-93e17d0a760b-kube-api-access-4mqrc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44944dd-abf7-402f-a3d4-93e17d0a760b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:10:39 crc kubenswrapper[4786]: I1209 09:10:39.067659 4786 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f44944dd-abf7-402f-a3d4-93e17d0a760b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44944dd-abf7-402f-a3d4-93e17d0a760b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:10:39 crc kubenswrapper[4786]: I1209 09:10:39.067799 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f44944dd-abf7-402f-a3d4-93e17d0a760b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44944dd-abf7-402f-a3d4-93e17d0a760b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:10:39 crc kubenswrapper[4786]: I1209 09:10:39.067909 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f44944dd-abf7-402f-a3d4-93e17d0a760b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44944dd-abf7-402f-a3d4-93e17d0a760b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:10:39 crc kubenswrapper[4786]: I1209 09:10:39.068139 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f44944dd-abf7-402f-a3d4-93e17d0a760b-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44944dd-abf7-402f-a3d4-93e17d0a760b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:10:39 crc kubenswrapper[4786]: I1209 09:10:39.066771 4786 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44944dd-abf7-402f-a3d4-93e17d0a760b\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:10:39 crc kubenswrapper[4786]: I1209 09:10:39.068323 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f44944dd-abf7-402f-a3d4-93e17d0a760b-server-conf\") 
pod \"rabbitmq-cell1-server-0\" (UID: \"f44944dd-abf7-402f-a3d4-93e17d0a760b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:10:39 crc kubenswrapper[4786]: I1209 09:10:39.069448 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f44944dd-abf7-402f-a3d4-93e17d0a760b-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44944dd-abf7-402f-a3d4-93e17d0a760b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:10:39 crc kubenswrapper[4786]: I1209 09:10:39.069608 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f44944dd-abf7-402f-a3d4-93e17d0a760b-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44944dd-abf7-402f-a3d4-93e17d0a760b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:10:39 crc kubenswrapper[4786]: I1209 09:10:39.071053 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f44944dd-abf7-402f-a3d4-93e17d0a760b-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44944dd-abf7-402f-a3d4-93e17d0a760b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:10:39 crc kubenswrapper[4786]: I1209 09:10:39.073791 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f44944dd-abf7-402f-a3d4-93e17d0a760b-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44944dd-abf7-402f-a3d4-93e17d0a760b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:10:39 crc kubenswrapper[4786]: I1209 09:10:39.073799 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f44944dd-abf7-402f-a3d4-93e17d0a760b-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44944dd-abf7-402f-a3d4-93e17d0a760b\") " 
pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:10:39 crc kubenswrapper[4786]: I1209 09:10:39.075170 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f44944dd-abf7-402f-a3d4-93e17d0a760b-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44944dd-abf7-402f-a3d4-93e17d0a760b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:10:39 crc kubenswrapper[4786]: I1209 09:10:39.079611 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f44944dd-abf7-402f-a3d4-93e17d0a760b-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44944dd-abf7-402f-a3d4-93e17d0a760b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:10:39 crc kubenswrapper[4786]: I1209 09:10:39.094248 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mqrc\" (UniqueName: \"kubernetes.io/projected/f44944dd-abf7-402f-a3d4-93e17d0a760b-kube-api-access-4mqrc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44944dd-abf7-402f-a3d4-93e17d0a760b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:10:39 crc kubenswrapper[4786]: I1209 09:10:39.145012 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f44944dd-abf7-402f-a3d4-93e17d0a760b\") " pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:10:39 crc kubenswrapper[4786]: I1209 09:10:39.183755 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:10:39 crc kubenswrapper[4786]: I1209 09:10:39.204773 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01efacfa-e002-4e0d-aa6b-91217baa22ca" path="/var/lib/kubelet/pods/01efacfa-e002-4e0d-aa6b-91217baa22ca/volumes" Dec 09 09:10:39 crc kubenswrapper[4786]: I1209 09:10:39.206942 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0a91d0e-2d71-4fdc-8d68-953a12dc7f59" path="/var/lib/kubelet/pods/b0a91d0e-2d71-4fdc-8d68-953a12dc7f59/volumes" Dec 09 09:10:39 crc kubenswrapper[4786]: I1209 09:10:39.430583 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"72675e09-5efb-4dc9-bc17-25b93ecf7537","Type":"ContainerStarted","Data":"816fd8f807ffb242f4796972781f916f6450cb4b41a6dc0b8ae8d5ab79c1e2ce"} Dec 09 09:10:39 crc kubenswrapper[4786]: I1209 09:10:39.697693 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 09 09:10:40 crc kubenswrapper[4786]: I1209 09:10:40.658875 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f44944dd-abf7-402f-a3d4-93e17d0a760b","Type":"ContainerStarted","Data":"2c1385777833ed063baa0f4849568d8c8893bba4191e148e107a0befe136b812"} Dec 09 09:10:41 crc kubenswrapper[4786]: I1209 09:10:41.519697 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="b0a91d0e-2d71-4fdc-8d68-953a12dc7f59" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: i/o timeout" Dec 09 09:10:41 crc kubenswrapper[4786]: I1209 09:10:41.677209 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"72675e09-5efb-4dc9-bc17-25b93ecf7537","Type":"ContainerStarted","Data":"a8da683ff09c01b6c5bbc5a5a4d2c6441485208882a06a3cd6d1db1e6e958696"} Dec 09 09:10:41 crc kubenswrapper[4786]: I1209 
09:10:41.680539 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f44944dd-abf7-402f-a3d4-93e17d0a760b","Type":"ContainerStarted","Data":"fc9d4379b7dfe24cce65afaa8776db9150ac3c0029039a69829c4fd85110032d"} Dec 09 09:10:47 crc kubenswrapper[4786]: I1209 09:10:47.693611 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c5bdddf6c-vk992"] Dec 09 09:10:47 crc kubenswrapper[4786]: I1209 09:10:47.699758 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c5bdddf6c-vk992" Dec 09 09:10:47 crc kubenswrapper[4786]: I1209 09:10:47.711832 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Dec 09 09:10:47 crc kubenswrapper[4786]: I1209 09:10:47.870486 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c5bdddf6c-vk992"] Dec 09 09:10:47 crc kubenswrapper[4786]: I1209 09:10:47.872234 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eac54454-15e7-413a-a38e-e893b7bc8426-dns-swift-storage-0\") pod \"dnsmasq-dns-7c5bdddf6c-vk992\" (UID: \"eac54454-15e7-413a-a38e-e893b7bc8426\") " pod="openstack/dnsmasq-dns-7c5bdddf6c-vk992" Dec 09 09:10:47 crc kubenswrapper[4786]: I1209 09:10:47.872272 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/eac54454-15e7-413a-a38e-e893b7bc8426-openstack-edpm-ipam\") pod \"dnsmasq-dns-7c5bdddf6c-vk992\" (UID: \"eac54454-15e7-413a-a38e-e893b7bc8426\") " pod="openstack/dnsmasq-dns-7c5bdddf6c-vk992" Dec 09 09:10:47 crc kubenswrapper[4786]: I1209 09:10:47.872350 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/eac54454-15e7-413a-a38e-e893b7bc8426-ovsdbserver-sb\") pod \"dnsmasq-dns-7c5bdddf6c-vk992\" (UID: \"eac54454-15e7-413a-a38e-e893b7bc8426\") " pod="openstack/dnsmasq-dns-7c5bdddf6c-vk992" Dec 09 09:10:47 crc kubenswrapper[4786]: I1209 09:10:47.872379 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmnrq\" (UniqueName: \"kubernetes.io/projected/eac54454-15e7-413a-a38e-e893b7bc8426-kube-api-access-xmnrq\") pod \"dnsmasq-dns-7c5bdddf6c-vk992\" (UID: \"eac54454-15e7-413a-a38e-e893b7bc8426\") " pod="openstack/dnsmasq-dns-7c5bdddf6c-vk992" Dec 09 09:10:47 crc kubenswrapper[4786]: I1209 09:10:47.872402 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eac54454-15e7-413a-a38e-e893b7bc8426-dns-svc\") pod \"dnsmasq-dns-7c5bdddf6c-vk992\" (UID: \"eac54454-15e7-413a-a38e-e893b7bc8426\") " pod="openstack/dnsmasq-dns-7c5bdddf6c-vk992" Dec 09 09:10:47 crc kubenswrapper[4786]: I1209 09:10:47.872494 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eac54454-15e7-413a-a38e-e893b7bc8426-config\") pod \"dnsmasq-dns-7c5bdddf6c-vk992\" (UID: \"eac54454-15e7-413a-a38e-e893b7bc8426\") " pod="openstack/dnsmasq-dns-7c5bdddf6c-vk992" Dec 09 09:10:47 crc kubenswrapper[4786]: I1209 09:10:47.872570 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eac54454-15e7-413a-a38e-e893b7bc8426-ovsdbserver-nb\") pod \"dnsmasq-dns-7c5bdddf6c-vk992\" (UID: \"eac54454-15e7-413a-a38e-e893b7bc8426\") " pod="openstack/dnsmasq-dns-7c5bdddf6c-vk992" Dec 09 09:10:47 crc kubenswrapper[4786]: I1209 09:10:47.974613 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/eac54454-15e7-413a-a38e-e893b7bc8426-ovsdbserver-nb\") pod \"dnsmasq-dns-7c5bdddf6c-vk992\" (UID: \"eac54454-15e7-413a-a38e-e893b7bc8426\") " pod="openstack/dnsmasq-dns-7c5bdddf6c-vk992" Dec 09 09:10:47 crc kubenswrapper[4786]: I1209 09:10:47.974768 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eac54454-15e7-413a-a38e-e893b7bc8426-dns-swift-storage-0\") pod \"dnsmasq-dns-7c5bdddf6c-vk992\" (UID: \"eac54454-15e7-413a-a38e-e893b7bc8426\") " pod="openstack/dnsmasq-dns-7c5bdddf6c-vk992" Dec 09 09:10:47 crc kubenswrapper[4786]: I1209 09:10:47.974800 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/eac54454-15e7-413a-a38e-e893b7bc8426-openstack-edpm-ipam\") pod \"dnsmasq-dns-7c5bdddf6c-vk992\" (UID: \"eac54454-15e7-413a-a38e-e893b7bc8426\") " pod="openstack/dnsmasq-dns-7c5bdddf6c-vk992" Dec 09 09:10:47 crc kubenswrapper[4786]: I1209 09:10:47.974905 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eac54454-15e7-413a-a38e-e893b7bc8426-ovsdbserver-sb\") pod \"dnsmasq-dns-7c5bdddf6c-vk992\" (UID: \"eac54454-15e7-413a-a38e-e893b7bc8426\") " pod="openstack/dnsmasq-dns-7c5bdddf6c-vk992" Dec 09 09:10:47 crc kubenswrapper[4786]: I1209 09:10:47.974963 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmnrq\" (UniqueName: \"kubernetes.io/projected/eac54454-15e7-413a-a38e-e893b7bc8426-kube-api-access-xmnrq\") pod \"dnsmasq-dns-7c5bdddf6c-vk992\" (UID: \"eac54454-15e7-413a-a38e-e893b7bc8426\") " pod="openstack/dnsmasq-dns-7c5bdddf6c-vk992" Dec 09 09:10:47 crc kubenswrapper[4786]: I1209 09:10:47.975017 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/eac54454-15e7-413a-a38e-e893b7bc8426-dns-svc\") pod \"dnsmasq-dns-7c5bdddf6c-vk992\" (UID: \"eac54454-15e7-413a-a38e-e893b7bc8426\") " pod="openstack/dnsmasq-dns-7c5bdddf6c-vk992" Dec 09 09:10:47 crc kubenswrapper[4786]: I1209 09:10:47.975053 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eac54454-15e7-413a-a38e-e893b7bc8426-config\") pod \"dnsmasq-dns-7c5bdddf6c-vk992\" (UID: \"eac54454-15e7-413a-a38e-e893b7bc8426\") " pod="openstack/dnsmasq-dns-7c5bdddf6c-vk992" Dec 09 09:10:47 crc kubenswrapper[4786]: I1209 09:10:47.975997 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eac54454-15e7-413a-a38e-e893b7bc8426-ovsdbserver-nb\") pod \"dnsmasq-dns-7c5bdddf6c-vk992\" (UID: \"eac54454-15e7-413a-a38e-e893b7bc8426\") " pod="openstack/dnsmasq-dns-7c5bdddf6c-vk992" Dec 09 09:10:47 crc kubenswrapper[4786]: I1209 09:10:47.976706 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eac54454-15e7-413a-a38e-e893b7bc8426-ovsdbserver-sb\") pod \"dnsmasq-dns-7c5bdddf6c-vk992\" (UID: \"eac54454-15e7-413a-a38e-e893b7bc8426\") " pod="openstack/dnsmasq-dns-7c5bdddf6c-vk992" Dec 09 09:10:47 crc kubenswrapper[4786]: I1209 09:10:47.977046 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eac54454-15e7-413a-a38e-e893b7bc8426-config\") pod \"dnsmasq-dns-7c5bdddf6c-vk992\" (UID: \"eac54454-15e7-413a-a38e-e893b7bc8426\") " pod="openstack/dnsmasq-dns-7c5bdddf6c-vk992" Dec 09 09:10:47 crc kubenswrapper[4786]: I1209 09:10:47.977369 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eac54454-15e7-413a-a38e-e893b7bc8426-dns-swift-storage-0\") pod 
\"dnsmasq-dns-7c5bdddf6c-vk992\" (UID: \"eac54454-15e7-413a-a38e-e893b7bc8426\") " pod="openstack/dnsmasq-dns-7c5bdddf6c-vk992" Dec 09 09:10:47 crc kubenswrapper[4786]: I1209 09:10:47.978040 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/eac54454-15e7-413a-a38e-e893b7bc8426-openstack-edpm-ipam\") pod \"dnsmasq-dns-7c5bdddf6c-vk992\" (UID: \"eac54454-15e7-413a-a38e-e893b7bc8426\") " pod="openstack/dnsmasq-dns-7c5bdddf6c-vk992" Dec 09 09:10:47 crc kubenswrapper[4786]: I1209 09:10:47.978216 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eac54454-15e7-413a-a38e-e893b7bc8426-dns-svc\") pod \"dnsmasq-dns-7c5bdddf6c-vk992\" (UID: \"eac54454-15e7-413a-a38e-e893b7bc8426\") " pod="openstack/dnsmasq-dns-7c5bdddf6c-vk992" Dec 09 09:10:48 crc kubenswrapper[4786]: I1209 09:10:48.032326 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmnrq\" (UniqueName: \"kubernetes.io/projected/eac54454-15e7-413a-a38e-e893b7bc8426-kube-api-access-xmnrq\") pod \"dnsmasq-dns-7c5bdddf6c-vk992\" (UID: \"eac54454-15e7-413a-a38e-e893b7bc8426\") " pod="openstack/dnsmasq-dns-7c5bdddf6c-vk992" Dec 09 09:10:48 crc kubenswrapper[4786]: I1209 09:10:48.033185 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c5bdddf6c-vk992" Dec 09 09:10:48 crc kubenswrapper[4786]: I1209 09:10:48.696511 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c5bdddf6c-vk992"] Dec 09 09:10:48 crc kubenswrapper[4786]: I1209 09:10:48.893294 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c5bdddf6c-vk992" event={"ID":"eac54454-15e7-413a-a38e-e893b7bc8426","Type":"ContainerStarted","Data":"59c63ccb0a57e42e1d14065c874ea44aefe8ba88a3c5dd8dcf56ede6e24db6e4"} Dec 09 09:10:49 crc kubenswrapper[4786]: I1209 09:10:49.906248 4786 generic.go:334] "Generic (PLEG): container finished" podID="eac54454-15e7-413a-a38e-e893b7bc8426" containerID="bb238233942af0b6e743e3cbfd48840b5781f44a0b855ecdeaa785114510c847" exitCode=0 Dec 09 09:10:49 crc kubenswrapper[4786]: I1209 09:10:49.906750 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c5bdddf6c-vk992" event={"ID":"eac54454-15e7-413a-a38e-e893b7bc8426","Type":"ContainerDied","Data":"bb238233942af0b6e743e3cbfd48840b5781f44a0b855ecdeaa785114510c847"} Dec 09 09:10:50 crc kubenswrapper[4786]: I1209 09:10:50.920228 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c5bdddf6c-vk992" event={"ID":"eac54454-15e7-413a-a38e-e893b7bc8426","Type":"ContainerStarted","Data":"f1f9fe2a4bb20782db5745eb79970d7d8d043b67e05d4ff0bef1c911e0451763"} Dec 09 09:10:50 crc kubenswrapper[4786]: I1209 09:10:50.920659 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7c5bdddf6c-vk992" Dec 09 09:10:50 crc kubenswrapper[4786]: I1209 09:10:50.947728 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7c5bdddf6c-vk992" podStartSLOduration=3.9477093119999997 podStartE2EDuration="3.947709312s" podCreationTimestamp="2025-12-09 09:10:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:10:50.943880477 +0000 UTC m=+1616.827501703" watchObservedRunningTime="2025-12-09 09:10:50.947709312 +0000 UTC m=+1616.831330538" Dec 09 09:10:54 crc kubenswrapper[4786]: I1209 09:10:54.989500 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 09:10:54 crc kubenswrapper[4786]: I1209 09:10:54.990221 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 09:10:58 crc kubenswrapper[4786]: I1209 09:10:58.035419 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7c5bdddf6c-vk992" Dec 09 09:10:58 crc kubenswrapper[4786]: I1209 09:10:58.107453 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-854bf756b5-vh9h2"] Dec 09 09:10:58 crc kubenswrapper[4786]: I1209 09:10:58.107838 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-854bf756b5-vh9h2" podUID="013d5e9c-08ad-4c20-8feb-e57e6d3b91dc" containerName="dnsmasq-dns" containerID="cri-o://bfa6875fe227a6dedacb3ff11ab3cc0a8dcf9e1a8267e33926d7a38a1e7191d1" gracePeriod=10 Dec 09 09:10:58 crc kubenswrapper[4786]: I1209 09:10:58.282703 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-69b5dcdcbf-2dgn9"] Dec 09 09:10:58 crc kubenswrapper[4786]: I1209 09:10:58.289358 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-69b5dcdcbf-2dgn9" Dec 09 09:10:58 crc kubenswrapper[4786]: I1209 09:10:58.312713 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69b5dcdcbf-2dgn9"] Dec 09 09:10:58 crc kubenswrapper[4786]: I1209 09:10:58.479600 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/973019e1-5fa9-49b7-b291-fdd553108517-ovsdbserver-sb\") pod \"dnsmasq-dns-69b5dcdcbf-2dgn9\" (UID: \"973019e1-5fa9-49b7-b291-fdd553108517\") " pod="openstack/dnsmasq-dns-69b5dcdcbf-2dgn9" Dec 09 09:10:58 crc kubenswrapper[4786]: I1209 09:10:58.479681 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jhn6\" (UniqueName: \"kubernetes.io/projected/973019e1-5fa9-49b7-b291-fdd553108517-kube-api-access-9jhn6\") pod \"dnsmasq-dns-69b5dcdcbf-2dgn9\" (UID: \"973019e1-5fa9-49b7-b291-fdd553108517\") " pod="openstack/dnsmasq-dns-69b5dcdcbf-2dgn9" Dec 09 09:10:58 crc kubenswrapper[4786]: I1209 09:10:58.480373 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/973019e1-5fa9-49b7-b291-fdd553108517-openstack-edpm-ipam\") pod \"dnsmasq-dns-69b5dcdcbf-2dgn9\" (UID: \"973019e1-5fa9-49b7-b291-fdd553108517\") " pod="openstack/dnsmasq-dns-69b5dcdcbf-2dgn9" Dec 09 09:10:58 crc kubenswrapper[4786]: I1209 09:10:58.480503 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/973019e1-5fa9-49b7-b291-fdd553108517-config\") pod \"dnsmasq-dns-69b5dcdcbf-2dgn9\" (UID: \"973019e1-5fa9-49b7-b291-fdd553108517\") " pod="openstack/dnsmasq-dns-69b5dcdcbf-2dgn9" Dec 09 09:10:58 crc kubenswrapper[4786]: I1209 09:10:58.480616 4786 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/973019e1-5fa9-49b7-b291-fdd553108517-dns-swift-storage-0\") pod \"dnsmasq-dns-69b5dcdcbf-2dgn9\" (UID: \"973019e1-5fa9-49b7-b291-fdd553108517\") " pod="openstack/dnsmasq-dns-69b5dcdcbf-2dgn9" Dec 09 09:10:58 crc kubenswrapper[4786]: I1209 09:10:58.480655 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/973019e1-5fa9-49b7-b291-fdd553108517-ovsdbserver-nb\") pod \"dnsmasq-dns-69b5dcdcbf-2dgn9\" (UID: \"973019e1-5fa9-49b7-b291-fdd553108517\") " pod="openstack/dnsmasq-dns-69b5dcdcbf-2dgn9" Dec 09 09:10:58 crc kubenswrapper[4786]: I1209 09:10:58.480701 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/973019e1-5fa9-49b7-b291-fdd553108517-dns-svc\") pod \"dnsmasq-dns-69b5dcdcbf-2dgn9\" (UID: \"973019e1-5fa9-49b7-b291-fdd553108517\") " pod="openstack/dnsmasq-dns-69b5dcdcbf-2dgn9" Dec 09 09:10:58 crc kubenswrapper[4786]: I1209 09:10:58.583046 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/973019e1-5fa9-49b7-b291-fdd553108517-dns-svc\") pod \"dnsmasq-dns-69b5dcdcbf-2dgn9\" (UID: \"973019e1-5fa9-49b7-b291-fdd553108517\") " pod="openstack/dnsmasq-dns-69b5dcdcbf-2dgn9" Dec 09 09:10:58 crc kubenswrapper[4786]: I1209 09:10:58.583200 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/973019e1-5fa9-49b7-b291-fdd553108517-ovsdbserver-sb\") pod \"dnsmasq-dns-69b5dcdcbf-2dgn9\" (UID: \"973019e1-5fa9-49b7-b291-fdd553108517\") " pod="openstack/dnsmasq-dns-69b5dcdcbf-2dgn9" Dec 09 09:10:58 crc kubenswrapper[4786]: I1209 09:10:58.583263 4786 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-9jhn6\" (UniqueName: \"kubernetes.io/projected/973019e1-5fa9-49b7-b291-fdd553108517-kube-api-access-9jhn6\") pod \"dnsmasq-dns-69b5dcdcbf-2dgn9\" (UID: \"973019e1-5fa9-49b7-b291-fdd553108517\") " pod="openstack/dnsmasq-dns-69b5dcdcbf-2dgn9" Dec 09 09:10:58 crc kubenswrapper[4786]: I1209 09:10:58.583343 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/973019e1-5fa9-49b7-b291-fdd553108517-openstack-edpm-ipam\") pod \"dnsmasq-dns-69b5dcdcbf-2dgn9\" (UID: \"973019e1-5fa9-49b7-b291-fdd553108517\") " pod="openstack/dnsmasq-dns-69b5dcdcbf-2dgn9" Dec 09 09:10:58 crc kubenswrapper[4786]: I1209 09:10:58.583443 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/973019e1-5fa9-49b7-b291-fdd553108517-config\") pod \"dnsmasq-dns-69b5dcdcbf-2dgn9\" (UID: \"973019e1-5fa9-49b7-b291-fdd553108517\") " pod="openstack/dnsmasq-dns-69b5dcdcbf-2dgn9" Dec 09 09:10:58 crc kubenswrapper[4786]: I1209 09:10:58.583546 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/973019e1-5fa9-49b7-b291-fdd553108517-dns-swift-storage-0\") pod \"dnsmasq-dns-69b5dcdcbf-2dgn9\" (UID: \"973019e1-5fa9-49b7-b291-fdd553108517\") " pod="openstack/dnsmasq-dns-69b5dcdcbf-2dgn9" Dec 09 09:10:58 crc kubenswrapper[4786]: I1209 09:10:58.583575 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/973019e1-5fa9-49b7-b291-fdd553108517-ovsdbserver-nb\") pod \"dnsmasq-dns-69b5dcdcbf-2dgn9\" (UID: \"973019e1-5fa9-49b7-b291-fdd553108517\") " pod="openstack/dnsmasq-dns-69b5dcdcbf-2dgn9" Dec 09 09:10:58 crc kubenswrapper[4786]: I1209 09:10:58.586118 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/973019e1-5fa9-49b7-b291-fdd553108517-ovsdbserver-nb\") pod \"dnsmasq-dns-69b5dcdcbf-2dgn9\" (UID: \"973019e1-5fa9-49b7-b291-fdd553108517\") " pod="openstack/dnsmasq-dns-69b5dcdcbf-2dgn9" Dec 09 09:10:58 crc kubenswrapper[4786]: I1209 09:10:58.586928 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/973019e1-5fa9-49b7-b291-fdd553108517-dns-svc\") pod \"dnsmasq-dns-69b5dcdcbf-2dgn9\" (UID: \"973019e1-5fa9-49b7-b291-fdd553108517\") " pod="openstack/dnsmasq-dns-69b5dcdcbf-2dgn9" Dec 09 09:10:58 crc kubenswrapper[4786]: I1209 09:10:58.588505 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/973019e1-5fa9-49b7-b291-fdd553108517-ovsdbserver-sb\") pod \"dnsmasq-dns-69b5dcdcbf-2dgn9\" (UID: \"973019e1-5fa9-49b7-b291-fdd553108517\") " pod="openstack/dnsmasq-dns-69b5dcdcbf-2dgn9" Dec 09 09:10:58 crc kubenswrapper[4786]: I1209 09:10:58.590838 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/973019e1-5fa9-49b7-b291-fdd553108517-openstack-edpm-ipam\") pod \"dnsmasq-dns-69b5dcdcbf-2dgn9\" (UID: \"973019e1-5fa9-49b7-b291-fdd553108517\") " pod="openstack/dnsmasq-dns-69b5dcdcbf-2dgn9" Dec 09 09:10:58 crc kubenswrapper[4786]: I1209 09:10:58.591726 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/973019e1-5fa9-49b7-b291-fdd553108517-config\") pod \"dnsmasq-dns-69b5dcdcbf-2dgn9\" (UID: \"973019e1-5fa9-49b7-b291-fdd553108517\") " pod="openstack/dnsmasq-dns-69b5dcdcbf-2dgn9" Dec 09 09:10:58 crc kubenswrapper[4786]: I1209 09:10:58.592632 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/973019e1-5fa9-49b7-b291-fdd553108517-dns-swift-storage-0\") pod \"dnsmasq-dns-69b5dcdcbf-2dgn9\" (UID: \"973019e1-5fa9-49b7-b291-fdd553108517\") " pod="openstack/dnsmasq-dns-69b5dcdcbf-2dgn9" Dec 09 09:10:58 crc kubenswrapper[4786]: I1209 09:10:58.623848 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jhn6\" (UniqueName: \"kubernetes.io/projected/973019e1-5fa9-49b7-b291-fdd553108517-kube-api-access-9jhn6\") pod \"dnsmasq-dns-69b5dcdcbf-2dgn9\" (UID: \"973019e1-5fa9-49b7-b291-fdd553108517\") " pod="openstack/dnsmasq-dns-69b5dcdcbf-2dgn9" Dec 09 09:10:58 crc kubenswrapper[4786]: I1209 09:10:58.630662 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-69b5dcdcbf-2dgn9" Dec 09 09:10:58 crc kubenswrapper[4786]: I1209 09:10:58.765597 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-854bf756b5-vh9h2" Dec 09 09:10:58 crc kubenswrapper[4786]: I1209 09:10:58.892684 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/013d5e9c-08ad-4c20-8feb-e57e6d3b91dc-ovsdbserver-sb\") pod \"013d5e9c-08ad-4c20-8feb-e57e6d3b91dc\" (UID: \"013d5e9c-08ad-4c20-8feb-e57e6d3b91dc\") " Dec 09 09:10:58 crc kubenswrapper[4786]: I1209 09:10:58.893106 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brd6m\" (UniqueName: \"kubernetes.io/projected/013d5e9c-08ad-4c20-8feb-e57e6d3b91dc-kube-api-access-brd6m\") pod \"013d5e9c-08ad-4c20-8feb-e57e6d3b91dc\" (UID: \"013d5e9c-08ad-4c20-8feb-e57e6d3b91dc\") " Dec 09 09:10:58 crc kubenswrapper[4786]: I1209 09:10:58.893147 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/013d5e9c-08ad-4c20-8feb-e57e6d3b91dc-dns-svc\") pod \"013d5e9c-08ad-4c20-8feb-e57e6d3b91dc\" 
(UID: \"013d5e9c-08ad-4c20-8feb-e57e6d3b91dc\") " Dec 09 09:10:58 crc kubenswrapper[4786]: I1209 09:10:58.893180 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/013d5e9c-08ad-4c20-8feb-e57e6d3b91dc-dns-swift-storage-0\") pod \"013d5e9c-08ad-4c20-8feb-e57e6d3b91dc\" (UID: \"013d5e9c-08ad-4c20-8feb-e57e6d3b91dc\") " Dec 09 09:10:58 crc kubenswrapper[4786]: I1209 09:10:58.893223 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/013d5e9c-08ad-4c20-8feb-e57e6d3b91dc-ovsdbserver-nb\") pod \"013d5e9c-08ad-4c20-8feb-e57e6d3b91dc\" (UID: \"013d5e9c-08ad-4c20-8feb-e57e6d3b91dc\") " Dec 09 09:10:58 crc kubenswrapper[4786]: I1209 09:10:58.893297 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/013d5e9c-08ad-4c20-8feb-e57e6d3b91dc-config\") pod \"013d5e9c-08ad-4c20-8feb-e57e6d3b91dc\" (UID: \"013d5e9c-08ad-4c20-8feb-e57e6d3b91dc\") " Dec 09 09:10:59 crc kubenswrapper[4786]: I1209 09:10:59.131657 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/013d5e9c-08ad-4c20-8feb-e57e6d3b91dc-kube-api-access-brd6m" (OuterVolumeSpecName: "kube-api-access-brd6m") pod "013d5e9c-08ad-4c20-8feb-e57e6d3b91dc" (UID: "013d5e9c-08ad-4c20-8feb-e57e6d3b91dc"). InnerVolumeSpecName "kube-api-access-brd6m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:10:59 crc kubenswrapper[4786]: I1209 09:10:59.179456 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brd6m\" (UniqueName: \"kubernetes.io/projected/013d5e9c-08ad-4c20-8feb-e57e6d3b91dc-kube-api-access-brd6m\") on node \"crc\" DevicePath \"\"" Dec 09 09:10:59 crc kubenswrapper[4786]: I1209 09:10:59.221134 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/013d5e9c-08ad-4c20-8feb-e57e6d3b91dc-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "013d5e9c-08ad-4c20-8feb-e57e6d3b91dc" (UID: "013d5e9c-08ad-4c20-8feb-e57e6d3b91dc"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:10:59 crc kubenswrapper[4786]: I1209 09:10:59.230467 4786 generic.go:334] "Generic (PLEG): container finished" podID="013d5e9c-08ad-4c20-8feb-e57e6d3b91dc" containerID="bfa6875fe227a6dedacb3ff11ab3cc0a8dcf9e1a8267e33926d7a38a1e7191d1" exitCode=0 Dec 09 09:10:59 crc kubenswrapper[4786]: I1209 09:10:59.231067 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-854bf756b5-vh9h2" Dec 09 09:10:59 crc kubenswrapper[4786]: I1209 09:10:59.284643 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/013d5e9c-08ad-4c20-8feb-e57e6d3b91dc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "013d5e9c-08ad-4c20-8feb-e57e6d3b91dc" (UID: "013d5e9c-08ad-4c20-8feb-e57e6d3b91dc"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:10:59 crc kubenswrapper[4786]: I1209 09:10:59.286177 4786 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/013d5e9c-08ad-4c20-8feb-e57e6d3b91dc-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 09:10:59 crc kubenswrapper[4786]: I1209 09:10:59.286210 4786 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/013d5e9c-08ad-4c20-8feb-e57e6d3b91dc-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 09 09:10:59 crc kubenswrapper[4786]: I1209 09:10:59.286174 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/013d5e9c-08ad-4c20-8feb-e57e6d3b91dc-config" (OuterVolumeSpecName: "config") pod "013d5e9c-08ad-4c20-8feb-e57e6d3b91dc" (UID: "013d5e9c-08ad-4c20-8feb-e57e6d3b91dc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:10:59 crc kubenswrapper[4786]: I1209 09:10:59.329529 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/013d5e9c-08ad-4c20-8feb-e57e6d3b91dc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "013d5e9c-08ad-4c20-8feb-e57e6d3b91dc" (UID: "013d5e9c-08ad-4c20-8feb-e57e6d3b91dc"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:10:59 crc kubenswrapper[4786]: I1209 09:10:59.344940 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/013d5e9c-08ad-4c20-8feb-e57e6d3b91dc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "013d5e9c-08ad-4c20-8feb-e57e6d3b91dc" (UID: "013d5e9c-08ad-4c20-8feb-e57e6d3b91dc"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:10:59 crc kubenswrapper[4786]: I1209 09:10:59.388123 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/013d5e9c-08ad-4c20-8feb-e57e6d3b91dc-config\") on node \"crc\" DevicePath \"\"" Dec 09 09:10:59 crc kubenswrapper[4786]: I1209 09:10:59.388411 4786 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/013d5e9c-08ad-4c20-8feb-e57e6d3b91dc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 09:10:59 crc kubenswrapper[4786]: I1209 09:10:59.388521 4786 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/013d5e9c-08ad-4c20-8feb-e57e6d3b91dc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 09:10:59 crc kubenswrapper[4786]: I1209 09:10:59.479010 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-854bf756b5-vh9h2" event={"ID":"013d5e9c-08ad-4c20-8feb-e57e6d3b91dc","Type":"ContainerDied","Data":"bfa6875fe227a6dedacb3ff11ab3cc0a8dcf9e1a8267e33926d7a38a1e7191d1"} Dec 09 09:10:59 crc kubenswrapper[4786]: I1209 09:10:59.479288 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-854bf756b5-vh9h2" event={"ID":"013d5e9c-08ad-4c20-8feb-e57e6d3b91dc","Type":"ContainerDied","Data":"620f21b0a3a62127b565c4485547872d2456b4c220855d3add5a88b2093d1d73"} Dec 09 09:10:59 crc kubenswrapper[4786]: I1209 09:10:59.479340 4786 scope.go:117] "RemoveContainer" containerID="bfa6875fe227a6dedacb3ff11ab3cc0a8dcf9e1a8267e33926d7a38a1e7191d1" Dec 09 09:10:59 crc kubenswrapper[4786]: I1209 09:10:59.479358 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-69b5dcdcbf-2dgn9"] Dec 09 09:10:59 crc kubenswrapper[4786]: I1209 09:10:59.524750 4786 scope.go:117] "RemoveContainer" containerID="4f4dfe53e5ed82e386afd569d5659dd0267a941e932afd59eae70b4c0a54e8e5" Dec 09 
09:10:59 crc kubenswrapper[4786]: I1209 09:10:59.549964 4786 scope.go:117] "RemoveContainer" containerID="bfa6875fe227a6dedacb3ff11ab3cc0a8dcf9e1a8267e33926d7a38a1e7191d1" Dec 09 09:10:59 crc kubenswrapper[4786]: E1209 09:10:59.550514 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfa6875fe227a6dedacb3ff11ab3cc0a8dcf9e1a8267e33926d7a38a1e7191d1\": container with ID starting with bfa6875fe227a6dedacb3ff11ab3cc0a8dcf9e1a8267e33926d7a38a1e7191d1 not found: ID does not exist" containerID="bfa6875fe227a6dedacb3ff11ab3cc0a8dcf9e1a8267e33926d7a38a1e7191d1" Dec 09 09:10:59 crc kubenswrapper[4786]: I1209 09:10:59.550561 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfa6875fe227a6dedacb3ff11ab3cc0a8dcf9e1a8267e33926d7a38a1e7191d1"} err="failed to get container status \"bfa6875fe227a6dedacb3ff11ab3cc0a8dcf9e1a8267e33926d7a38a1e7191d1\": rpc error: code = NotFound desc = could not find container \"bfa6875fe227a6dedacb3ff11ab3cc0a8dcf9e1a8267e33926d7a38a1e7191d1\": container with ID starting with bfa6875fe227a6dedacb3ff11ab3cc0a8dcf9e1a8267e33926d7a38a1e7191d1 not found: ID does not exist" Dec 09 09:10:59 crc kubenswrapper[4786]: I1209 09:10:59.550590 4786 scope.go:117] "RemoveContainer" containerID="4f4dfe53e5ed82e386afd569d5659dd0267a941e932afd59eae70b4c0a54e8e5" Dec 09 09:10:59 crc kubenswrapper[4786]: E1209 09:10:59.551022 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f4dfe53e5ed82e386afd569d5659dd0267a941e932afd59eae70b4c0a54e8e5\": container with ID starting with 4f4dfe53e5ed82e386afd569d5659dd0267a941e932afd59eae70b4c0a54e8e5 not found: ID does not exist" containerID="4f4dfe53e5ed82e386afd569d5659dd0267a941e932afd59eae70b4c0a54e8e5" Dec 09 09:10:59 crc kubenswrapper[4786]: I1209 09:10:59.551051 4786 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"4f4dfe53e5ed82e386afd569d5659dd0267a941e932afd59eae70b4c0a54e8e5"} err="failed to get container status \"4f4dfe53e5ed82e386afd569d5659dd0267a941e932afd59eae70b4c0a54e8e5\": rpc error: code = NotFound desc = could not find container \"4f4dfe53e5ed82e386afd569d5659dd0267a941e932afd59eae70b4c0a54e8e5\": container with ID starting with 4f4dfe53e5ed82e386afd569d5659dd0267a941e932afd59eae70b4c0a54e8e5 not found: ID does not exist" Dec 09 09:10:59 crc kubenswrapper[4786]: I1209 09:10:59.629014 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-854bf756b5-vh9h2"] Dec 09 09:10:59 crc kubenswrapper[4786]: I1209 09:10:59.640293 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-854bf756b5-vh9h2"] Dec 09 09:11:00 crc kubenswrapper[4786]: I1209 09:11:00.242126 4786 generic.go:334] "Generic (PLEG): container finished" podID="973019e1-5fa9-49b7-b291-fdd553108517" containerID="3868f3a04f93090952c77a7fe60ffb4516c29e98c47dee65000d45d044cc6e83" exitCode=0 Dec 09 09:11:00 crc kubenswrapper[4786]: I1209 09:11:00.242336 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69b5dcdcbf-2dgn9" event={"ID":"973019e1-5fa9-49b7-b291-fdd553108517","Type":"ContainerDied","Data":"3868f3a04f93090952c77a7fe60ffb4516c29e98c47dee65000d45d044cc6e83"} Dec 09 09:11:00 crc kubenswrapper[4786]: I1209 09:11:00.243448 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69b5dcdcbf-2dgn9" event={"ID":"973019e1-5fa9-49b7-b291-fdd553108517","Type":"ContainerStarted","Data":"d531d8bba49929f14371028e8030d0a70f7d4e6acc21116c31e981965d5f3a42"} Dec 09 09:11:01 crc kubenswrapper[4786]: I1209 09:11:01.203392 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="013d5e9c-08ad-4c20-8feb-e57e6d3b91dc" path="/var/lib/kubelet/pods/013d5e9c-08ad-4c20-8feb-e57e6d3b91dc/volumes" Dec 09 09:11:01 crc kubenswrapper[4786]: I1209 09:11:01.242480 4786 
scope.go:117] "RemoveContainer" containerID="7a33c297eb318071eec7c91c8d62a9d40ac60dbb6223966d28a5508d876c643d" Dec 09 09:11:01 crc kubenswrapper[4786]: I1209 09:11:01.260714 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-69b5dcdcbf-2dgn9" event={"ID":"973019e1-5fa9-49b7-b291-fdd553108517","Type":"ContainerStarted","Data":"cf5e584143b568709e43ea7cfbe1b3b4ee2081962387d26e8485bafeb368730a"} Dec 09 09:11:01 crc kubenswrapper[4786]: I1209 09:11:01.260915 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-69b5dcdcbf-2dgn9" Dec 09 09:11:01 crc kubenswrapper[4786]: I1209 09:11:01.298409 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-69b5dcdcbf-2dgn9" podStartSLOduration=3.298388593 podStartE2EDuration="3.298388593s" podCreationTimestamp="2025-12-09 09:10:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:11:01.285166924 +0000 UTC m=+1627.168788170" watchObservedRunningTime="2025-12-09 09:11:01.298388593 +0000 UTC m=+1627.182009829" Dec 09 09:11:08 crc kubenswrapper[4786]: I1209 09:11:08.631712 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-69b5dcdcbf-2dgn9" Dec 09 09:11:08 crc kubenswrapper[4786]: I1209 09:11:08.720534 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c5bdddf6c-vk992"] Dec 09 09:11:08 crc kubenswrapper[4786]: I1209 09:11:08.720834 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7c5bdddf6c-vk992" podUID="eac54454-15e7-413a-a38e-e893b7bc8426" containerName="dnsmasq-dns" containerID="cri-o://f1f9fe2a4bb20782db5745eb79970d7d8d043b67e05d4ff0bef1c911e0451763" gracePeriod=10 Dec 09 09:11:09 crc kubenswrapper[4786]: I1209 09:11:09.361104 4786 generic.go:334] "Generic (PLEG): container finished" 
podID="eac54454-15e7-413a-a38e-e893b7bc8426" containerID="f1f9fe2a4bb20782db5745eb79970d7d8d043b67e05d4ff0bef1c911e0451763" exitCode=0 Dec 09 09:11:09 crc kubenswrapper[4786]: I1209 09:11:09.361192 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c5bdddf6c-vk992" event={"ID":"eac54454-15e7-413a-a38e-e893b7bc8426","Type":"ContainerDied","Data":"f1f9fe2a4bb20782db5745eb79970d7d8d043b67e05d4ff0bef1c911e0451763"} Dec 09 09:11:09 crc kubenswrapper[4786]: I1209 09:11:09.468929 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c5bdddf6c-vk992" Dec 09 09:11:09 crc kubenswrapper[4786]: I1209 09:11:09.531001 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eac54454-15e7-413a-a38e-e893b7bc8426-config\") pod \"eac54454-15e7-413a-a38e-e893b7bc8426\" (UID: \"eac54454-15e7-413a-a38e-e893b7bc8426\") " Dec 09 09:11:09 crc kubenswrapper[4786]: I1209 09:11:09.531192 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eac54454-15e7-413a-a38e-e893b7bc8426-dns-swift-storage-0\") pod \"eac54454-15e7-413a-a38e-e893b7bc8426\" (UID: \"eac54454-15e7-413a-a38e-e893b7bc8426\") " Dec 09 09:11:09 crc kubenswrapper[4786]: I1209 09:11:09.531210 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eac54454-15e7-413a-a38e-e893b7bc8426-ovsdbserver-nb\") pod \"eac54454-15e7-413a-a38e-e893b7bc8426\" (UID: \"eac54454-15e7-413a-a38e-e893b7bc8426\") " Dec 09 09:11:09 crc kubenswrapper[4786]: I1209 09:11:09.531263 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmnrq\" (UniqueName: \"kubernetes.io/projected/eac54454-15e7-413a-a38e-e893b7bc8426-kube-api-access-xmnrq\") pod 
\"eac54454-15e7-413a-a38e-e893b7bc8426\" (UID: \"eac54454-15e7-413a-a38e-e893b7bc8426\") " Dec 09 09:11:09 crc kubenswrapper[4786]: I1209 09:11:09.531383 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/eac54454-15e7-413a-a38e-e893b7bc8426-openstack-edpm-ipam\") pod \"eac54454-15e7-413a-a38e-e893b7bc8426\" (UID: \"eac54454-15e7-413a-a38e-e893b7bc8426\") " Dec 09 09:11:09 crc kubenswrapper[4786]: I1209 09:11:09.531445 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eac54454-15e7-413a-a38e-e893b7bc8426-dns-svc\") pod \"eac54454-15e7-413a-a38e-e893b7bc8426\" (UID: \"eac54454-15e7-413a-a38e-e893b7bc8426\") " Dec 09 09:11:09 crc kubenswrapper[4786]: I1209 09:11:09.531490 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eac54454-15e7-413a-a38e-e893b7bc8426-ovsdbserver-sb\") pod \"eac54454-15e7-413a-a38e-e893b7bc8426\" (UID: \"eac54454-15e7-413a-a38e-e893b7bc8426\") " Dec 09 09:11:09 crc kubenswrapper[4786]: I1209 09:11:09.543795 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eac54454-15e7-413a-a38e-e893b7bc8426-kube-api-access-xmnrq" (OuterVolumeSpecName: "kube-api-access-xmnrq") pod "eac54454-15e7-413a-a38e-e893b7bc8426" (UID: "eac54454-15e7-413a-a38e-e893b7bc8426"). InnerVolumeSpecName "kube-api-access-xmnrq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:11:09 crc kubenswrapper[4786]: I1209 09:11:09.595920 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eac54454-15e7-413a-a38e-e893b7bc8426-config" (OuterVolumeSpecName: "config") pod "eac54454-15e7-413a-a38e-e893b7bc8426" (UID: "eac54454-15e7-413a-a38e-e893b7bc8426"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:11:09 crc kubenswrapper[4786]: I1209 09:11:09.595929 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eac54454-15e7-413a-a38e-e893b7bc8426-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "eac54454-15e7-413a-a38e-e893b7bc8426" (UID: "eac54454-15e7-413a-a38e-e893b7bc8426"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:11:09 crc kubenswrapper[4786]: I1209 09:11:09.597872 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eac54454-15e7-413a-a38e-e893b7bc8426-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "eac54454-15e7-413a-a38e-e893b7bc8426" (UID: "eac54454-15e7-413a-a38e-e893b7bc8426"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:11:09 crc kubenswrapper[4786]: I1209 09:11:09.600495 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eac54454-15e7-413a-a38e-e893b7bc8426-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eac54454-15e7-413a-a38e-e893b7bc8426" (UID: "eac54454-15e7-413a-a38e-e893b7bc8426"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:11:09 crc kubenswrapper[4786]: I1209 09:11:09.604745 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eac54454-15e7-413a-a38e-e893b7bc8426-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "eac54454-15e7-413a-a38e-e893b7bc8426" (UID: "eac54454-15e7-413a-a38e-e893b7bc8426"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:11:09 crc kubenswrapper[4786]: I1209 09:11:09.610747 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eac54454-15e7-413a-a38e-e893b7bc8426-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "eac54454-15e7-413a-a38e-e893b7bc8426" (UID: "eac54454-15e7-413a-a38e-e893b7bc8426"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:11:09 crc kubenswrapper[4786]: I1209 09:11:09.637017 4786 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/eac54454-15e7-413a-a38e-e893b7bc8426-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 09 09:11:09 crc kubenswrapper[4786]: I1209 09:11:09.637070 4786 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eac54454-15e7-413a-a38e-e893b7bc8426-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 09 09:11:09 crc kubenswrapper[4786]: I1209 09:11:09.637084 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmnrq\" (UniqueName: \"kubernetes.io/projected/eac54454-15e7-413a-a38e-e893b7bc8426-kube-api-access-xmnrq\") on node \"crc\" DevicePath \"\"" Dec 09 09:11:09 crc kubenswrapper[4786]: I1209 09:11:09.637145 4786 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/eac54454-15e7-413a-a38e-e893b7bc8426-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 09 09:11:09 crc kubenswrapper[4786]: I1209 09:11:09.637160 4786 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eac54454-15e7-413a-a38e-e893b7bc8426-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 09 09:11:09 crc kubenswrapper[4786]: I1209 09:11:09.637172 4786 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/eac54454-15e7-413a-a38e-e893b7bc8426-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 09 09:11:09 crc kubenswrapper[4786]: I1209 09:11:09.637352 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eac54454-15e7-413a-a38e-e893b7bc8426-config\") on node \"crc\" DevicePath \"\"" Dec 09 09:11:10 crc kubenswrapper[4786]: I1209 09:11:10.377354 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c5bdddf6c-vk992" event={"ID":"eac54454-15e7-413a-a38e-e893b7bc8426","Type":"ContainerDied","Data":"59c63ccb0a57e42e1d14065c874ea44aefe8ba88a3c5dd8dcf56ede6e24db6e4"} Dec 09 09:11:10 crc kubenswrapper[4786]: I1209 09:11:10.377744 4786 scope.go:117] "RemoveContainer" containerID="f1f9fe2a4bb20782db5745eb79970d7d8d043b67e05d4ff0bef1c911e0451763" Dec 09 09:11:10 crc kubenswrapper[4786]: I1209 09:11:10.377438 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c5bdddf6c-vk992" Dec 09 09:11:10 crc kubenswrapper[4786]: I1209 09:11:10.414724 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c5bdddf6c-vk992"] Dec 09 09:11:10 crc kubenswrapper[4786]: I1209 09:11:10.415240 4786 scope.go:117] "RemoveContainer" containerID="bb238233942af0b6e743e3cbfd48840b5781f44a0b855ecdeaa785114510c847" Dec 09 09:11:10 crc kubenswrapper[4786]: I1209 09:11:10.425255 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c5bdddf6c-vk992"] Dec 09 09:11:11 crc kubenswrapper[4786]: I1209 09:11:11.211287 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eac54454-15e7-413a-a38e-e893b7bc8426" path="/var/lib/kubelet/pods/eac54454-15e7-413a-a38e-e893b7bc8426/volumes" Dec 09 09:11:13 crc kubenswrapper[4786]: I1209 09:11:13.422761 4786 generic.go:334] "Generic (PLEG): container finished" podID="72675e09-5efb-4dc9-bc17-25b93ecf7537" 
containerID="a8da683ff09c01b6c5bbc5a5a4d2c6441485208882a06a3cd6d1db1e6e958696" exitCode=0 Dec 09 09:11:13 crc kubenswrapper[4786]: I1209 09:11:13.422986 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"72675e09-5efb-4dc9-bc17-25b93ecf7537","Type":"ContainerDied","Data":"a8da683ff09c01b6c5bbc5a5a4d2c6441485208882a06a3cd6d1db1e6e958696"} Dec 09 09:11:14 crc kubenswrapper[4786]: I1209 09:11:14.439144 4786 generic.go:334] "Generic (PLEG): container finished" podID="f44944dd-abf7-402f-a3d4-93e17d0a760b" containerID="fc9d4379b7dfe24cce65afaa8776db9150ac3c0029039a69829c4fd85110032d" exitCode=0 Dec 09 09:11:14 crc kubenswrapper[4786]: I1209 09:11:14.439209 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f44944dd-abf7-402f-a3d4-93e17d0a760b","Type":"ContainerDied","Data":"fc9d4379b7dfe24cce65afaa8776db9150ac3c0029039a69829c4fd85110032d"} Dec 09 09:11:14 crc kubenswrapper[4786]: I1209 09:11:14.445125 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"72675e09-5efb-4dc9-bc17-25b93ecf7537","Type":"ContainerStarted","Data":"011ea20b8930b033dfebfa26019ce4fc97bee02648e8424861595e699306dbdb"} Dec 09 09:11:14 crc kubenswrapper[4786]: I1209 09:11:14.445493 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 09 09:11:14 crc kubenswrapper[4786]: I1209 09:11:14.669061 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.669040704 podStartE2EDuration="37.669040704s" podCreationTimestamp="2025-12-09 09:10:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:11:14.664994403 +0000 UTC m=+1640.548615639" watchObservedRunningTime="2025-12-09 09:11:14.669040704 +0000 UTC m=+1640.552661930" Dec 09 
09:11:15 crc kubenswrapper[4786]: I1209 09:11:15.456000 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f44944dd-abf7-402f-a3d4-93e17d0a760b","Type":"ContainerStarted","Data":"f3b58e36c255ebbea4aea7afbc98b015839b21ddad2c91cbba7b59cacafc6a9a"} Dec 09 09:11:15 crc kubenswrapper[4786]: I1209 09:11:15.456626 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:11:15 crc kubenswrapper[4786]: I1209 09:11:15.489067 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.489039422 podStartE2EDuration="37.489039422s" podCreationTimestamp="2025-12-09 09:10:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:11:15.485062983 +0000 UTC m=+1641.368684209" watchObservedRunningTime="2025-12-09 09:11:15.489039422 +0000 UTC m=+1641.372660668" Dec 09 09:11:24 crc kubenswrapper[4786]: I1209 09:11:24.990983 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 09:11:24 crc kubenswrapper[4786]: I1209 09:11:24.991628 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 09:11:27 crc kubenswrapper[4786]: I1209 09:11:27.141544 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7gpqm"] Dec 09 09:11:27 crc 
kubenswrapper[4786]: E1209 09:11:27.153805 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="013d5e9c-08ad-4c20-8feb-e57e6d3b91dc" containerName="dnsmasq-dns" Dec 09 09:11:27 crc kubenswrapper[4786]: I1209 09:11:27.153911 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="013d5e9c-08ad-4c20-8feb-e57e6d3b91dc" containerName="dnsmasq-dns" Dec 09 09:11:27 crc kubenswrapper[4786]: E1209 09:11:27.153984 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="013d5e9c-08ad-4c20-8feb-e57e6d3b91dc" containerName="init" Dec 09 09:11:27 crc kubenswrapper[4786]: I1209 09:11:27.154040 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="013d5e9c-08ad-4c20-8feb-e57e6d3b91dc" containerName="init" Dec 09 09:11:27 crc kubenswrapper[4786]: E1209 09:11:27.154132 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eac54454-15e7-413a-a38e-e893b7bc8426" containerName="init" Dec 09 09:11:27 crc kubenswrapper[4786]: I1209 09:11:27.154191 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="eac54454-15e7-413a-a38e-e893b7bc8426" containerName="init" Dec 09 09:11:27 crc kubenswrapper[4786]: E1209 09:11:27.154253 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eac54454-15e7-413a-a38e-e893b7bc8426" containerName="dnsmasq-dns" Dec 09 09:11:27 crc kubenswrapper[4786]: I1209 09:11:27.154308 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="eac54454-15e7-413a-a38e-e893b7bc8426" containerName="dnsmasq-dns" Dec 09 09:11:27 crc kubenswrapper[4786]: I1209 09:11:27.154620 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="eac54454-15e7-413a-a38e-e893b7bc8426" containerName="dnsmasq-dns" Dec 09 09:11:27 crc kubenswrapper[4786]: I1209 09:11:27.154714 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="013d5e9c-08ad-4c20-8feb-e57e6d3b91dc" containerName="dnsmasq-dns" Dec 09 09:11:27 crc kubenswrapper[4786]: I1209 09:11:27.155597 4786 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7gpqm" Dec 09 09:11:27 crc kubenswrapper[4786]: I1209 09:11:27.165810 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 09 09:11:27 crc kubenswrapper[4786]: I1209 09:11:27.165809 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 09 09:11:27 crc kubenswrapper[4786]: I1209 09:11:27.166018 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 09 09:11:27 crc kubenswrapper[4786]: I1209 09:11:27.168164 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7gpqm"] Dec 09 09:11:27 crc kubenswrapper[4786]: I1209 09:11:27.169378 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-247ss" Dec 09 09:11:27 crc kubenswrapper[4786]: I1209 09:11:27.291371 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c88eeae2-e339-4f55-a0ee-c5fa8e611253-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7gpqm\" (UID: \"c88eeae2-e339-4f55-a0ee-c5fa8e611253\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7gpqm" Dec 09 09:11:27 crc kubenswrapper[4786]: I1209 09:11:27.291480 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj7w7\" (UniqueName: \"kubernetes.io/projected/c88eeae2-e339-4f55-a0ee-c5fa8e611253-kube-api-access-rj7w7\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7gpqm\" (UID: \"c88eeae2-e339-4f55-a0ee-c5fa8e611253\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7gpqm" Dec 09 09:11:27 crc kubenswrapper[4786]: I1209 09:11:27.291616 4786 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c88eeae2-e339-4f55-a0ee-c5fa8e611253-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7gpqm\" (UID: \"c88eeae2-e339-4f55-a0ee-c5fa8e611253\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7gpqm" Dec 09 09:11:27 crc kubenswrapper[4786]: I1209 09:11:27.292762 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c88eeae2-e339-4f55-a0ee-c5fa8e611253-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7gpqm\" (UID: \"c88eeae2-e339-4f55-a0ee-c5fa8e611253\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7gpqm" Dec 09 09:11:27 crc kubenswrapper[4786]: I1209 09:11:27.395230 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c88eeae2-e339-4f55-a0ee-c5fa8e611253-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7gpqm\" (UID: \"c88eeae2-e339-4f55-a0ee-c5fa8e611253\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7gpqm" Dec 09 09:11:27 crc kubenswrapper[4786]: I1209 09:11:27.395298 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c88eeae2-e339-4f55-a0ee-c5fa8e611253-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7gpqm\" (UID: \"c88eeae2-e339-4f55-a0ee-c5fa8e611253\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7gpqm" Dec 09 09:11:27 crc kubenswrapper[4786]: I1209 09:11:27.395324 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj7w7\" (UniqueName: \"kubernetes.io/projected/c88eeae2-e339-4f55-a0ee-c5fa8e611253-kube-api-access-rj7w7\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-7gpqm\" (UID: \"c88eeae2-e339-4f55-a0ee-c5fa8e611253\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7gpqm" Dec 09 09:11:27 crc kubenswrapper[4786]: I1209 09:11:27.395375 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c88eeae2-e339-4f55-a0ee-c5fa8e611253-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7gpqm\" (UID: \"c88eeae2-e339-4f55-a0ee-c5fa8e611253\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7gpqm" Dec 09 09:11:27 crc kubenswrapper[4786]: I1209 09:11:27.404245 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c88eeae2-e339-4f55-a0ee-c5fa8e611253-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7gpqm\" (UID: \"c88eeae2-e339-4f55-a0ee-c5fa8e611253\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7gpqm" Dec 09 09:11:27 crc kubenswrapper[4786]: I1209 09:11:27.409175 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c88eeae2-e339-4f55-a0ee-c5fa8e611253-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7gpqm\" (UID: \"c88eeae2-e339-4f55-a0ee-c5fa8e611253\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7gpqm" Dec 09 09:11:27 crc kubenswrapper[4786]: I1209 09:11:27.416751 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c88eeae2-e339-4f55-a0ee-c5fa8e611253-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7gpqm\" (UID: \"c88eeae2-e339-4f55-a0ee-c5fa8e611253\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7gpqm" Dec 09 09:11:27 crc kubenswrapper[4786]: I1209 09:11:27.419392 4786 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj7w7\" (UniqueName: \"kubernetes.io/projected/c88eeae2-e339-4f55-a0ee-c5fa8e611253-kube-api-access-rj7w7\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-7gpqm\" (UID: \"c88eeae2-e339-4f55-a0ee-c5fa8e611253\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7gpqm" Dec 09 09:11:27 crc kubenswrapper[4786]: I1209 09:11:27.500573 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7gpqm" Dec 09 09:11:28 crc kubenswrapper[4786]: I1209 09:11:28.078665 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="72675e09-5efb-4dc9-bc17-25b93ecf7537" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.227:5671: connect: connection refused" Dec 09 09:11:28 crc kubenswrapper[4786]: I1209 09:11:28.125997 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7gpqm"] Dec 09 09:11:28 crc kubenswrapper[4786]: W1209 09:11:28.126362 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc88eeae2_e339_4f55_a0ee_c5fa8e611253.slice/crio-08c99ea6a524aa0ed6c21af6c1b39e3885a38f95b68989c8f31cd82ea8af783c WatchSource:0}: Error finding container 08c99ea6a524aa0ed6c21af6c1b39e3885a38f95b68989c8f31cd82ea8af783c: Status 404 returned error can't find the container with id 08c99ea6a524aa0ed6c21af6c1b39e3885a38f95b68989c8f31cd82ea8af783c Dec 09 09:11:28 crc kubenswrapper[4786]: I1209 09:11:28.130212 4786 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 09:11:28 crc kubenswrapper[4786]: I1209 09:11:28.644534 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7gpqm" 
event={"ID":"c88eeae2-e339-4f55-a0ee-c5fa8e611253","Type":"ContainerStarted","Data":"08c99ea6a524aa0ed6c21af6c1b39e3885a38f95b68989c8f31cd82ea8af783c"} Dec 09 09:11:29 crc kubenswrapper[4786]: I1209 09:11:29.188578 4786 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="f44944dd-abf7-402f-a3d4-93e17d0a760b" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.228:5671: connect: connection refused" Dec 09 09:11:38 crc kubenswrapper[4786]: I1209 09:11:38.088686 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 09 09:11:39 crc kubenswrapper[4786]: I1209 09:11:39.229361 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 09 09:11:40 crc kubenswrapper[4786]: I1209 09:11:40.838720 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7gpqm" event={"ID":"c88eeae2-e339-4f55-a0ee-c5fa8e611253","Type":"ContainerStarted","Data":"09c9107f338dd880d75a1d32dcda25241952f30f3f3b270f209c4d8ab105ace6"} Dec 09 09:11:40 crc kubenswrapper[4786]: I1209 09:11:40.860412 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7gpqm" podStartSLOduration=2.434766576 podStartE2EDuration="13.860384131s" podCreationTimestamp="2025-12-09 09:11:27 +0000 UTC" firstStartedPulling="2025-12-09 09:11:28.129888048 +0000 UTC m=+1654.013509274" lastFinishedPulling="2025-12-09 09:11:39.555505603 +0000 UTC m=+1665.439126829" observedRunningTime="2025-12-09 09:11:40.858377722 +0000 UTC m=+1666.741998948" watchObservedRunningTime="2025-12-09 09:11:40.860384131 +0000 UTC m=+1666.744005357" Dec 09 09:11:51 crc kubenswrapper[4786]: I1209 09:11:51.981447 4786 generic.go:334] "Generic (PLEG): container finished" podID="c88eeae2-e339-4f55-a0ee-c5fa8e611253" 
containerID="09c9107f338dd880d75a1d32dcda25241952f30f3f3b270f209c4d8ab105ace6" exitCode=0 Dec 09 09:11:51 crc kubenswrapper[4786]: I1209 09:11:51.981563 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7gpqm" event={"ID":"c88eeae2-e339-4f55-a0ee-c5fa8e611253","Type":"ContainerDied","Data":"09c9107f338dd880d75a1d32dcda25241952f30f3f3b270f209c4d8ab105ace6"} Dec 09 09:11:53 crc kubenswrapper[4786]: I1209 09:11:53.530583 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7gpqm" Dec 09 09:11:53 crc kubenswrapper[4786]: I1209 09:11:53.555100 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c88eeae2-e339-4f55-a0ee-c5fa8e611253-ssh-key\") pod \"c88eeae2-e339-4f55-a0ee-c5fa8e611253\" (UID: \"c88eeae2-e339-4f55-a0ee-c5fa8e611253\") " Dec 09 09:11:53 crc kubenswrapper[4786]: I1209 09:11:53.555359 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rj7w7\" (UniqueName: \"kubernetes.io/projected/c88eeae2-e339-4f55-a0ee-c5fa8e611253-kube-api-access-rj7w7\") pod \"c88eeae2-e339-4f55-a0ee-c5fa8e611253\" (UID: \"c88eeae2-e339-4f55-a0ee-c5fa8e611253\") " Dec 09 09:11:53 crc kubenswrapper[4786]: I1209 09:11:53.555590 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c88eeae2-e339-4f55-a0ee-c5fa8e611253-repo-setup-combined-ca-bundle\") pod \"c88eeae2-e339-4f55-a0ee-c5fa8e611253\" (UID: \"c88eeae2-e339-4f55-a0ee-c5fa8e611253\") " Dec 09 09:11:53 crc kubenswrapper[4786]: I1209 09:11:53.555634 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c88eeae2-e339-4f55-a0ee-c5fa8e611253-inventory\") pod 
\"c88eeae2-e339-4f55-a0ee-c5fa8e611253\" (UID: \"c88eeae2-e339-4f55-a0ee-c5fa8e611253\") " Dec 09 09:11:53 crc kubenswrapper[4786]: I1209 09:11:53.563094 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c88eeae2-e339-4f55-a0ee-c5fa8e611253-kube-api-access-rj7w7" (OuterVolumeSpecName: "kube-api-access-rj7w7") pod "c88eeae2-e339-4f55-a0ee-c5fa8e611253" (UID: "c88eeae2-e339-4f55-a0ee-c5fa8e611253"). InnerVolumeSpecName "kube-api-access-rj7w7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:11:53 crc kubenswrapper[4786]: I1209 09:11:53.581092 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c88eeae2-e339-4f55-a0ee-c5fa8e611253-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "c88eeae2-e339-4f55-a0ee-c5fa8e611253" (UID: "c88eeae2-e339-4f55-a0ee-c5fa8e611253"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:11:53 crc kubenswrapper[4786]: I1209 09:11:53.591444 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c88eeae2-e339-4f55-a0ee-c5fa8e611253-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c88eeae2-e339-4f55-a0ee-c5fa8e611253" (UID: "c88eeae2-e339-4f55-a0ee-c5fa8e611253"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:11:53 crc kubenswrapper[4786]: I1209 09:11:53.598534 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c88eeae2-e339-4f55-a0ee-c5fa8e611253-inventory" (OuterVolumeSpecName: "inventory") pod "c88eeae2-e339-4f55-a0ee-c5fa8e611253" (UID: "c88eeae2-e339-4f55-a0ee-c5fa8e611253"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:11:53 crc kubenswrapper[4786]: I1209 09:11:53.659086 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rj7w7\" (UniqueName: \"kubernetes.io/projected/c88eeae2-e339-4f55-a0ee-c5fa8e611253-kube-api-access-rj7w7\") on node \"crc\" DevicePath \"\"" Dec 09 09:11:53 crc kubenswrapper[4786]: I1209 09:11:53.659243 4786 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c88eeae2-e339-4f55-a0ee-c5fa8e611253-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 09:11:53 crc kubenswrapper[4786]: I1209 09:11:53.659342 4786 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c88eeae2-e339-4f55-a0ee-c5fa8e611253-inventory\") on node \"crc\" DevicePath \"\"" Dec 09 09:11:53 crc kubenswrapper[4786]: I1209 09:11:53.659401 4786 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c88eeae2-e339-4f55-a0ee-c5fa8e611253-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 09 09:11:54 crc kubenswrapper[4786]: I1209 09:11:54.007630 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7gpqm" event={"ID":"c88eeae2-e339-4f55-a0ee-c5fa8e611253","Type":"ContainerDied","Data":"08c99ea6a524aa0ed6c21af6c1b39e3885a38f95b68989c8f31cd82ea8af783c"} Dec 09 09:11:54 crc kubenswrapper[4786]: I1209 09:11:54.008467 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08c99ea6a524aa0ed6c21af6c1b39e3885a38f95b68989c8f31cd82ea8af783c" Dec 09 09:11:54 crc kubenswrapper[4786]: I1209 09:11:54.007910 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-7gpqm" Dec 09 09:11:54 crc kubenswrapper[4786]: I1209 09:11:54.161010 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-hmb7n"] Dec 09 09:11:54 crc kubenswrapper[4786]: E1209 09:11:54.162447 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c88eeae2-e339-4f55-a0ee-c5fa8e611253" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 09 09:11:54 crc kubenswrapper[4786]: I1209 09:11:54.162552 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="c88eeae2-e339-4f55-a0ee-c5fa8e611253" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 09 09:11:54 crc kubenswrapper[4786]: I1209 09:11:54.163002 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="c88eeae2-e339-4f55-a0ee-c5fa8e611253" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 09 09:11:54 crc kubenswrapper[4786]: I1209 09:11:54.164149 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hmb7n" Dec 09 09:11:54 crc kubenswrapper[4786]: I1209 09:11:54.168060 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 09 09:11:54 crc kubenswrapper[4786]: I1209 09:11:54.168396 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 09 09:11:54 crc kubenswrapper[4786]: I1209 09:11:54.171042 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-247ss" Dec 09 09:11:54 crc kubenswrapper[4786]: I1209 09:11:54.175211 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 09 09:11:54 crc kubenswrapper[4786]: I1209 09:11:54.185757 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-hmb7n"] Dec 09 09:11:54 crc kubenswrapper[4786]: I1209 09:11:54.273598 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hw2g\" (UniqueName: \"kubernetes.io/projected/eb43b8bf-02ae-4d5d-82f6-3262125035f1-kube-api-access-5hw2g\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-hmb7n\" (UID: \"eb43b8bf-02ae-4d5d-82f6-3262125035f1\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hmb7n" Dec 09 09:11:54 crc kubenswrapper[4786]: I1209 09:11:54.273688 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb43b8bf-02ae-4d5d-82f6-3262125035f1-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-hmb7n\" (UID: \"eb43b8bf-02ae-4d5d-82f6-3262125035f1\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hmb7n" Dec 09 09:11:54 crc kubenswrapper[4786]: I1209 09:11:54.273722 4786 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb43b8bf-02ae-4d5d-82f6-3262125035f1-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-hmb7n\" (UID: \"eb43b8bf-02ae-4d5d-82f6-3262125035f1\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hmb7n" Dec 09 09:11:54 crc kubenswrapper[4786]: I1209 09:11:54.376128 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hw2g\" (UniqueName: \"kubernetes.io/projected/eb43b8bf-02ae-4d5d-82f6-3262125035f1-kube-api-access-5hw2g\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-hmb7n\" (UID: \"eb43b8bf-02ae-4d5d-82f6-3262125035f1\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hmb7n" Dec 09 09:11:54 crc kubenswrapper[4786]: I1209 09:11:54.376232 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb43b8bf-02ae-4d5d-82f6-3262125035f1-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-hmb7n\" (UID: \"eb43b8bf-02ae-4d5d-82f6-3262125035f1\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hmb7n" Dec 09 09:11:54 crc kubenswrapper[4786]: I1209 09:11:54.376274 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb43b8bf-02ae-4d5d-82f6-3262125035f1-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-hmb7n\" (UID: \"eb43b8bf-02ae-4d5d-82f6-3262125035f1\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hmb7n" Dec 09 09:11:54 crc kubenswrapper[4786]: I1209 09:11:54.380488 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb43b8bf-02ae-4d5d-82f6-3262125035f1-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-hmb7n\" (UID: \"eb43b8bf-02ae-4d5d-82f6-3262125035f1\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hmb7n" Dec 09 09:11:54 crc kubenswrapper[4786]: I1209 09:11:54.384319 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb43b8bf-02ae-4d5d-82f6-3262125035f1-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-hmb7n\" (UID: \"eb43b8bf-02ae-4d5d-82f6-3262125035f1\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hmb7n" Dec 09 09:11:54 crc kubenswrapper[4786]: I1209 09:11:54.394890 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hw2g\" (UniqueName: \"kubernetes.io/projected/eb43b8bf-02ae-4d5d-82f6-3262125035f1-kube-api-access-5hw2g\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-hmb7n\" (UID: \"eb43b8bf-02ae-4d5d-82f6-3262125035f1\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hmb7n" Dec 09 09:11:54 crc kubenswrapper[4786]: I1209 09:11:54.489992 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hmb7n" Dec 09 09:11:54 crc kubenswrapper[4786]: I1209 09:11:54.990605 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 09:11:54 crc kubenswrapper[4786]: I1209 09:11:54.990845 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 09:11:54 crc kubenswrapper[4786]: I1209 09:11:54.990895 4786 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" Dec 09 09:11:54 crc kubenswrapper[4786]: I1209 09:11:54.991815 4786 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b616eba668b7cb1fd5bcff5f3a2dbd4ee63df5f21169255fbb63b6a628b5d197"} pod="openshift-machine-config-operator/machine-config-daemon-86k5n" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 09:11:54 crc kubenswrapper[4786]: I1209 09:11:54.991887 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" containerID="cri-o://b616eba668b7cb1fd5bcff5f3a2dbd4ee63df5f21169255fbb63b6a628b5d197" gracePeriod=600 Dec 09 09:11:55 crc kubenswrapper[4786]: I1209 09:11:55.067027 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-hmb7n"] Dec 09 09:11:55 crc kubenswrapper[4786]: E1209 09:11:55.132556 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:11:55 crc kubenswrapper[4786]: I1209 09:11:55.673977 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 09 09:11:56 crc kubenswrapper[4786]: I1209 09:11:56.039019 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hmb7n" event={"ID":"eb43b8bf-02ae-4d5d-82f6-3262125035f1","Type":"ContainerStarted","Data":"61bdbf72e823038aba56283017e41b2fc0846668537b81abe0605e63eb9b927c"} Dec 09 09:11:56 crc kubenswrapper[4786]: I1209 09:11:56.039085 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hmb7n" event={"ID":"eb43b8bf-02ae-4d5d-82f6-3262125035f1","Type":"ContainerStarted","Data":"17c75fe39e53b0b0f63879078734fa00475122a8475ae866981ef4b5c9bb57d4"} Dec 09 09:11:56 crc kubenswrapper[4786]: I1209 09:11:56.042455 4786 generic.go:334] "Generic (PLEG): container finished" podID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerID="b616eba668b7cb1fd5bcff5f3a2dbd4ee63df5f21169255fbb63b6a628b5d197" exitCode=0 Dec 09 09:11:56 crc kubenswrapper[4786]: I1209 09:11:56.042511 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" event={"ID":"60c502d4-7f9e-4d39-a197-fa70dc4a56d1","Type":"ContainerDied","Data":"b616eba668b7cb1fd5bcff5f3a2dbd4ee63df5f21169255fbb63b6a628b5d197"} Dec 
09 09:11:56 crc kubenswrapper[4786]: I1209 09:11:56.042555 4786 scope.go:117] "RemoveContainer" containerID="6861e8a37988a9e19bbc4cef34d4d5e8b2d44819ea8091141fe025d3c9cd2383" Dec 09 09:11:56 crc kubenswrapper[4786]: I1209 09:11:56.043320 4786 scope.go:117] "RemoveContainer" containerID="b616eba668b7cb1fd5bcff5f3a2dbd4ee63df5f21169255fbb63b6a628b5d197" Dec 09 09:11:56 crc kubenswrapper[4786]: E1209 09:11:56.043592 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:11:56 crc kubenswrapper[4786]: I1209 09:11:56.085537 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hmb7n" podStartSLOduration=1.492051273 podStartE2EDuration="2.085511671s" podCreationTimestamp="2025-12-09 09:11:54 +0000 UTC" firstStartedPulling="2025-12-09 09:11:55.076825483 +0000 UTC m=+1680.960446709" lastFinishedPulling="2025-12-09 09:11:55.670285871 +0000 UTC m=+1681.553907107" observedRunningTime="2025-12-09 09:11:56.068406456 +0000 UTC m=+1681.952027682" watchObservedRunningTime="2025-12-09 09:11:56.085511671 +0000 UTC m=+1681.969132897" Dec 09 09:11:59 crc kubenswrapper[4786]: I1209 09:11:59.087005 4786 generic.go:334] "Generic (PLEG): container finished" podID="eb43b8bf-02ae-4d5d-82f6-3262125035f1" containerID="61bdbf72e823038aba56283017e41b2fc0846668537b81abe0605e63eb9b927c" exitCode=0 Dec 09 09:11:59 crc kubenswrapper[4786]: I1209 09:11:59.087108 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hmb7n" 
event={"ID":"eb43b8bf-02ae-4d5d-82f6-3262125035f1","Type":"ContainerDied","Data":"61bdbf72e823038aba56283017e41b2fc0846668537b81abe0605e63eb9b927c"} Dec 09 09:12:00 crc kubenswrapper[4786]: I1209 09:12:00.585035 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hmb7n" Dec 09 09:12:00 crc kubenswrapper[4786]: I1209 09:12:00.729064 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb43b8bf-02ae-4d5d-82f6-3262125035f1-inventory\") pod \"eb43b8bf-02ae-4d5d-82f6-3262125035f1\" (UID: \"eb43b8bf-02ae-4d5d-82f6-3262125035f1\") " Dec 09 09:12:00 crc kubenswrapper[4786]: I1209 09:12:00.729203 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hw2g\" (UniqueName: \"kubernetes.io/projected/eb43b8bf-02ae-4d5d-82f6-3262125035f1-kube-api-access-5hw2g\") pod \"eb43b8bf-02ae-4d5d-82f6-3262125035f1\" (UID: \"eb43b8bf-02ae-4d5d-82f6-3262125035f1\") " Dec 09 09:12:00 crc kubenswrapper[4786]: I1209 09:12:00.729583 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb43b8bf-02ae-4d5d-82f6-3262125035f1-ssh-key\") pod \"eb43b8bf-02ae-4d5d-82f6-3262125035f1\" (UID: \"eb43b8bf-02ae-4d5d-82f6-3262125035f1\") " Dec 09 09:12:00 crc kubenswrapper[4786]: I1209 09:12:00.737364 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb43b8bf-02ae-4d5d-82f6-3262125035f1-kube-api-access-5hw2g" (OuterVolumeSpecName: "kube-api-access-5hw2g") pod "eb43b8bf-02ae-4d5d-82f6-3262125035f1" (UID: "eb43b8bf-02ae-4d5d-82f6-3262125035f1"). InnerVolumeSpecName "kube-api-access-5hw2g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:12:00 crc kubenswrapper[4786]: I1209 09:12:00.767851 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb43b8bf-02ae-4d5d-82f6-3262125035f1-inventory" (OuterVolumeSpecName: "inventory") pod "eb43b8bf-02ae-4d5d-82f6-3262125035f1" (UID: "eb43b8bf-02ae-4d5d-82f6-3262125035f1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:12:00 crc kubenswrapper[4786]: I1209 09:12:00.773757 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb43b8bf-02ae-4d5d-82f6-3262125035f1-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "eb43b8bf-02ae-4d5d-82f6-3262125035f1" (UID: "eb43b8bf-02ae-4d5d-82f6-3262125035f1"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:12:00 crc kubenswrapper[4786]: I1209 09:12:00.832497 4786 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb43b8bf-02ae-4d5d-82f6-3262125035f1-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 09 09:12:00 crc kubenswrapper[4786]: I1209 09:12:00.832530 4786 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eb43b8bf-02ae-4d5d-82f6-3262125035f1-inventory\") on node \"crc\" DevicePath \"\"" Dec 09 09:12:00 crc kubenswrapper[4786]: I1209 09:12:00.832541 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hw2g\" (UniqueName: \"kubernetes.io/projected/eb43b8bf-02ae-4d5d-82f6-3262125035f1-kube-api-access-5hw2g\") on node \"crc\" DevicePath \"\"" Dec 09 09:12:01 crc kubenswrapper[4786]: I1209 09:12:01.112348 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hmb7n" 
event={"ID":"eb43b8bf-02ae-4d5d-82f6-3262125035f1","Type":"ContainerDied","Data":"17c75fe39e53b0b0f63879078734fa00475122a8475ae866981ef4b5c9bb57d4"} Dec 09 09:12:01 crc kubenswrapper[4786]: I1209 09:12:01.112405 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17c75fe39e53b0b0f63879078734fa00475122a8475ae866981ef4b5c9bb57d4" Dec 09 09:12:01 crc kubenswrapper[4786]: I1209 09:12:01.112460 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-hmb7n" Dec 09 09:12:01 crc kubenswrapper[4786]: I1209 09:12:01.210332 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-twhbz"] Dec 09 09:12:01 crc kubenswrapper[4786]: E1209 09:12:01.215097 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb43b8bf-02ae-4d5d-82f6-3262125035f1" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 09 09:12:01 crc kubenswrapper[4786]: I1209 09:12:01.215137 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb43b8bf-02ae-4d5d-82f6-3262125035f1" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 09 09:12:01 crc kubenswrapper[4786]: I1209 09:12:01.215512 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb43b8bf-02ae-4d5d-82f6-3262125035f1" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Dec 09 09:12:01 crc kubenswrapper[4786]: I1209 09:12:01.216553 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-twhbz" Dec 09 09:12:01 crc kubenswrapper[4786]: I1209 09:12:01.218891 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 09 09:12:01 crc kubenswrapper[4786]: I1209 09:12:01.218942 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-247ss" Dec 09 09:12:01 crc kubenswrapper[4786]: I1209 09:12:01.218976 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 09 09:12:01 crc kubenswrapper[4786]: I1209 09:12:01.218891 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 09 09:12:01 crc kubenswrapper[4786]: I1209 09:12:01.222236 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-twhbz"] Dec 09 09:12:01 crc kubenswrapper[4786]: I1209 09:12:01.343863 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebb0da1f-f03a-4091-9057-2d250dd6bc07-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-twhbz\" (UID: \"ebb0da1f-f03a-4091-9057-2d250dd6bc07\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-twhbz" Dec 09 09:12:01 crc kubenswrapper[4786]: I1209 09:12:01.345471 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ebb0da1f-f03a-4091-9057-2d250dd6bc07-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-twhbz\" (UID: \"ebb0da1f-f03a-4091-9057-2d250dd6bc07\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-twhbz" Dec 09 09:12:01 crc kubenswrapper[4786]: I1209 09:12:01.346160 4786 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rfr6\" (UniqueName: \"kubernetes.io/projected/ebb0da1f-f03a-4091-9057-2d250dd6bc07-kube-api-access-8rfr6\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-twhbz\" (UID: \"ebb0da1f-f03a-4091-9057-2d250dd6bc07\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-twhbz" Dec 09 09:12:01 crc kubenswrapper[4786]: I1209 09:12:01.346725 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ebb0da1f-f03a-4091-9057-2d250dd6bc07-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-twhbz\" (UID: \"ebb0da1f-f03a-4091-9057-2d250dd6bc07\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-twhbz" Dec 09 09:12:01 crc kubenswrapper[4786]: I1209 09:12:01.423865 4786 scope.go:117] "RemoveContainer" containerID="e9045f5ac311801162c4a289491344af5209e8edb69b17eb036785f4844ad0df" Dec 09 09:12:01 crc kubenswrapper[4786]: I1209 09:12:01.445793 4786 scope.go:117] "RemoveContainer" containerID="937b9f4d291af6d40a7832aa4dbfc0fd57b1258b504f6ad5aea827fab7ad3b54" Dec 09 09:12:01 crc kubenswrapper[4786]: I1209 09:12:01.451680 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rfr6\" (UniqueName: \"kubernetes.io/projected/ebb0da1f-f03a-4091-9057-2d250dd6bc07-kube-api-access-8rfr6\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-twhbz\" (UID: \"ebb0da1f-f03a-4091-9057-2d250dd6bc07\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-twhbz" Dec 09 09:12:01 crc kubenswrapper[4786]: I1209 09:12:01.451832 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ebb0da1f-f03a-4091-9057-2d250dd6bc07-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-twhbz\" (UID: \"ebb0da1f-f03a-4091-9057-2d250dd6bc07\") 
" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-twhbz" Dec 09 09:12:01 crc kubenswrapper[4786]: I1209 09:12:01.452047 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebb0da1f-f03a-4091-9057-2d250dd6bc07-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-twhbz\" (UID: \"ebb0da1f-f03a-4091-9057-2d250dd6bc07\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-twhbz" Dec 09 09:12:01 crc kubenswrapper[4786]: I1209 09:12:01.452153 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ebb0da1f-f03a-4091-9057-2d250dd6bc07-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-twhbz\" (UID: \"ebb0da1f-f03a-4091-9057-2d250dd6bc07\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-twhbz" Dec 09 09:12:01 crc kubenswrapper[4786]: I1209 09:12:01.458885 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ebb0da1f-f03a-4091-9057-2d250dd6bc07-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-twhbz\" (UID: \"ebb0da1f-f03a-4091-9057-2d250dd6bc07\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-twhbz" Dec 09 09:12:01 crc kubenswrapper[4786]: I1209 09:12:01.459668 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ebb0da1f-f03a-4091-9057-2d250dd6bc07-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-twhbz\" (UID: \"ebb0da1f-f03a-4091-9057-2d250dd6bc07\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-twhbz" Dec 09 09:12:01 crc kubenswrapper[4786]: I1209 09:12:01.460859 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ebb0da1f-f03a-4091-9057-2d250dd6bc07-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-twhbz\" (UID: \"ebb0da1f-f03a-4091-9057-2d250dd6bc07\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-twhbz" Dec 09 09:12:01 crc kubenswrapper[4786]: I1209 09:12:01.473745 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rfr6\" (UniqueName: \"kubernetes.io/projected/ebb0da1f-f03a-4091-9057-2d250dd6bc07-kube-api-access-8rfr6\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-twhbz\" (UID: \"ebb0da1f-f03a-4091-9057-2d250dd6bc07\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-twhbz" Dec 09 09:12:01 crc kubenswrapper[4786]: I1209 09:12:01.479196 4786 scope.go:117] "RemoveContainer" containerID="00d414414fa82fdc80f7f1deb244a6a0c2ec6c083db69e22bceffe4abaf9dab2" Dec 09 09:12:01 crc kubenswrapper[4786]: I1209 09:12:01.537108 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-twhbz" Dec 09 09:12:02 crc kubenswrapper[4786]: I1209 09:12:02.188827 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-twhbz"] Dec 09 09:12:03 crc kubenswrapper[4786]: I1209 09:12:03.135858 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-twhbz" event={"ID":"ebb0da1f-f03a-4091-9057-2d250dd6bc07","Type":"ContainerStarted","Data":"036bbccac8f9bca29c2e27a3f0947992f11dffba30d5204ee50815bfe30df19b"} Dec 09 09:12:03 crc kubenswrapper[4786]: I1209 09:12:03.136285 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-twhbz" event={"ID":"ebb0da1f-f03a-4091-9057-2d250dd6bc07","Type":"ContainerStarted","Data":"73601fd89ed990fedf93793daa5e170e5e8bdb4982c6b6fcecee16be62aed12f"} Dec 09 09:12:03 crc kubenswrapper[4786]: I1209 09:12:03.158741 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-twhbz" podStartSLOduration=1.6895881689999999 podStartE2EDuration="2.15871341s" podCreationTimestamp="2025-12-09 09:12:01 +0000 UTC" firstStartedPulling="2025-12-09 09:12:02.193386266 +0000 UTC m=+1688.077007492" lastFinishedPulling="2025-12-09 09:12:02.662511507 +0000 UTC m=+1688.546132733" observedRunningTime="2025-12-09 09:12:03.155103511 +0000 UTC m=+1689.038724737" watchObservedRunningTime="2025-12-09 09:12:03.15871341 +0000 UTC m=+1689.042334696" Dec 09 09:12:09 crc kubenswrapper[4786]: I1209 09:12:09.187862 4786 scope.go:117] "RemoveContainer" containerID="b616eba668b7cb1fd5bcff5f3a2dbd4ee63df5f21169255fbb63b6a628b5d197" Dec 09 09:12:09 crc kubenswrapper[4786]: E1209 09:12:09.188379 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:12:22 crc kubenswrapper[4786]: I1209 09:12:22.189492 4786 scope.go:117] "RemoveContainer" containerID="b616eba668b7cb1fd5bcff5f3a2dbd4ee63df5f21169255fbb63b6a628b5d197" Dec 09 09:12:22 crc kubenswrapper[4786]: E1209 09:12:22.191883 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:12:33 crc kubenswrapper[4786]: I1209 09:12:33.188582 4786 scope.go:117] "RemoveContainer" containerID="b616eba668b7cb1fd5bcff5f3a2dbd4ee63df5f21169255fbb63b6a628b5d197" Dec 09 09:12:33 crc kubenswrapper[4786]: E1209 09:12:33.189382 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:12:44 crc kubenswrapper[4786]: I1209 09:12:44.188583 4786 scope.go:117] "RemoveContainer" containerID="b616eba668b7cb1fd5bcff5f3a2dbd4ee63df5f21169255fbb63b6a628b5d197" Dec 09 09:12:44 crc kubenswrapper[4786]: E1209 09:12:44.189702 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:12:58 crc kubenswrapper[4786]: I1209 09:12:58.188017 4786 scope.go:117] "RemoveContainer" containerID="b616eba668b7cb1fd5bcff5f3a2dbd4ee63df5f21169255fbb63b6a628b5d197" Dec 09 09:12:58 crc kubenswrapper[4786]: E1209 09:12:58.188899 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:13:11 crc kubenswrapper[4786]: I1209 09:13:11.189749 4786 scope.go:117] "RemoveContainer" containerID="b616eba668b7cb1fd5bcff5f3a2dbd4ee63df5f21169255fbb63b6a628b5d197" Dec 09 09:13:11 crc kubenswrapper[4786]: E1209 09:13:11.191562 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:13:26 crc kubenswrapper[4786]: I1209 09:13:26.188850 4786 scope.go:117] "RemoveContainer" containerID="b616eba668b7cb1fd5bcff5f3a2dbd4ee63df5f21169255fbb63b6a628b5d197" Dec 09 09:13:26 crc kubenswrapper[4786]: E1209 09:13:26.189643 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:13:40 crc kubenswrapper[4786]: I1209 09:13:40.188460 4786 scope.go:117] "RemoveContainer" containerID="b616eba668b7cb1fd5bcff5f3a2dbd4ee63df5f21169255fbb63b6a628b5d197" Dec 09 09:13:40 crc kubenswrapper[4786]: E1209 09:13:40.189695 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:13:52 crc kubenswrapper[4786]: I1209 09:13:52.189186 4786 scope.go:117] "RemoveContainer" containerID="b616eba668b7cb1fd5bcff5f3a2dbd4ee63df5f21169255fbb63b6a628b5d197" Dec 09 09:13:52 crc kubenswrapper[4786]: E1209 09:13:52.189937 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:14:01 crc kubenswrapper[4786]: I1209 09:14:01.705782 4786 scope.go:117] "RemoveContainer" containerID="52bbfbd410bb33df616342360075e577b9c18a627484ef75577a23a4347a7ebd" Dec 09 09:14:01 crc kubenswrapper[4786]: I1209 09:14:01.744572 4786 scope.go:117] "RemoveContainer" 
containerID="3e67b9c8da3783a6be6596bf5465e26fbef0a2c25e297dee79a95dade6bb6811" Dec 09 09:14:07 crc kubenswrapper[4786]: I1209 09:14:07.189563 4786 scope.go:117] "RemoveContainer" containerID="b616eba668b7cb1fd5bcff5f3a2dbd4ee63df5f21169255fbb63b6a628b5d197" Dec 09 09:14:07 crc kubenswrapper[4786]: E1209 09:14:07.190862 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:14:19 crc kubenswrapper[4786]: I1209 09:14:19.188520 4786 scope.go:117] "RemoveContainer" containerID="b616eba668b7cb1fd5bcff5f3a2dbd4ee63df5f21169255fbb63b6a628b5d197" Dec 09 09:14:19 crc kubenswrapper[4786]: E1209 09:14:19.189506 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:14:30 crc kubenswrapper[4786]: I1209 09:14:30.188082 4786 scope.go:117] "RemoveContainer" containerID="b616eba668b7cb1fd5bcff5f3a2dbd4ee63df5f21169255fbb63b6a628b5d197" Dec 09 09:14:30 crc kubenswrapper[4786]: E1209 09:14:30.189059 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:14:44 crc kubenswrapper[4786]: I1209 09:14:44.188520 4786 scope.go:117] "RemoveContainer" containerID="b616eba668b7cb1fd5bcff5f3a2dbd4ee63df5f21169255fbb63b6a628b5d197" Dec 09 09:14:44 crc kubenswrapper[4786]: E1209 09:14:44.189306 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:14:56 crc kubenswrapper[4786]: I1209 09:14:56.050149 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-w76vw"] Dec 09 09:14:56 crc kubenswrapper[4786]: I1209 09:14:56.062637 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-ncqnq"] Dec 09 09:14:56 crc kubenswrapper[4786]: I1209 09:14:56.072151 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-w76vw"] Dec 09 09:14:56 crc kubenswrapper[4786]: I1209 09:14:56.081986 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-ncqnq"] Dec 09 09:14:57 crc kubenswrapper[4786]: I1209 09:14:57.035678 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-create-4pxhc"] Dec 09 09:14:57 crc kubenswrapper[4786]: I1209 09:14:57.048448 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-create-4pxhc"] Dec 09 09:14:57 crc kubenswrapper[4786]: I1209 09:14:57.202909 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d297c4a-7ce4-4c61-9e72-d4503808c184" path="/var/lib/kubelet/pods/3d297c4a-7ce4-4c61-9e72-d4503808c184/volumes" Dec 
09 09:14:57 crc kubenswrapper[4786]: I1209 09:14:57.203768 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4ab18c7-d1f2-4819-a35d-4e268535e616" path="/var/lib/kubelet/pods/e4ab18c7-d1f2-4819-a35d-4e268535e616/volumes" Dec 09 09:14:57 crc kubenswrapper[4786]: I1209 09:14:57.204678 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdb0b7af-fae9-45a5-924b-a3507c046c28" path="/var/lib/kubelet/pods/fdb0b7af-fae9-45a5-924b-a3507c046c28/volumes" Dec 09 09:14:59 crc kubenswrapper[4786]: I1209 09:14:59.188996 4786 scope.go:117] "RemoveContainer" containerID="b616eba668b7cb1fd5bcff5f3a2dbd4ee63df5f21169255fbb63b6a628b5d197" Dec 09 09:14:59 crc kubenswrapper[4786]: E1209 09:14:59.190256 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:15:00 crc kubenswrapper[4786]: I1209 09:15:00.164346 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421195-pjqqm"] Dec 09 09:15:00 crc kubenswrapper[4786]: I1209 09:15:00.167178 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421195-pjqqm" Dec 09 09:15:00 crc kubenswrapper[4786]: I1209 09:15:00.174514 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 09 09:15:00 crc kubenswrapper[4786]: I1209 09:15:00.185392 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 09 09:15:00 crc kubenswrapper[4786]: I1209 09:15:00.186923 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421195-pjqqm"] Dec 09 09:15:00 crc kubenswrapper[4786]: I1209 09:15:00.263039 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/af2591ab-76a1-4771-b492-51ecf18d10c2-config-volume\") pod \"collect-profiles-29421195-pjqqm\" (UID: \"af2591ab-76a1-4771-b492-51ecf18d10c2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421195-pjqqm" Dec 09 09:15:00 crc kubenswrapper[4786]: I1209 09:15:00.263895 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/af2591ab-76a1-4771-b492-51ecf18d10c2-secret-volume\") pod \"collect-profiles-29421195-pjqqm\" (UID: \"af2591ab-76a1-4771-b492-51ecf18d10c2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421195-pjqqm" Dec 09 09:15:00 crc kubenswrapper[4786]: I1209 09:15:00.264086 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6vcr\" (UniqueName: \"kubernetes.io/projected/af2591ab-76a1-4771-b492-51ecf18d10c2-kube-api-access-k6vcr\") pod \"collect-profiles-29421195-pjqqm\" (UID: \"af2591ab-76a1-4771-b492-51ecf18d10c2\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29421195-pjqqm" Dec 09 09:15:00 crc kubenswrapper[4786]: I1209 09:15:00.367295 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/af2591ab-76a1-4771-b492-51ecf18d10c2-config-volume\") pod \"collect-profiles-29421195-pjqqm\" (UID: \"af2591ab-76a1-4771-b492-51ecf18d10c2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421195-pjqqm" Dec 09 09:15:00 crc kubenswrapper[4786]: I1209 09:15:00.367371 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/af2591ab-76a1-4771-b492-51ecf18d10c2-secret-volume\") pod \"collect-profiles-29421195-pjqqm\" (UID: \"af2591ab-76a1-4771-b492-51ecf18d10c2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421195-pjqqm" Dec 09 09:15:00 crc kubenswrapper[4786]: I1209 09:15:00.367454 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6vcr\" (UniqueName: \"kubernetes.io/projected/af2591ab-76a1-4771-b492-51ecf18d10c2-kube-api-access-k6vcr\") pod \"collect-profiles-29421195-pjqqm\" (UID: \"af2591ab-76a1-4771-b492-51ecf18d10c2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421195-pjqqm" Dec 09 09:15:00 crc kubenswrapper[4786]: I1209 09:15:00.368379 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/af2591ab-76a1-4771-b492-51ecf18d10c2-config-volume\") pod \"collect-profiles-29421195-pjqqm\" (UID: \"af2591ab-76a1-4771-b492-51ecf18d10c2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421195-pjqqm" Dec 09 09:15:00 crc kubenswrapper[4786]: I1209 09:15:00.376230 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/af2591ab-76a1-4771-b492-51ecf18d10c2-secret-volume\") pod \"collect-profiles-29421195-pjqqm\" (UID: \"af2591ab-76a1-4771-b492-51ecf18d10c2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421195-pjqqm" Dec 09 09:15:00 crc kubenswrapper[4786]: I1209 09:15:00.388241 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6vcr\" (UniqueName: \"kubernetes.io/projected/af2591ab-76a1-4771-b492-51ecf18d10c2-kube-api-access-k6vcr\") pod \"collect-profiles-29421195-pjqqm\" (UID: \"af2591ab-76a1-4771-b492-51ecf18d10c2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421195-pjqqm" Dec 09 09:15:00 crc kubenswrapper[4786]: I1209 09:15:00.497052 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421195-pjqqm" Dec 09 09:15:00 crc kubenswrapper[4786]: I1209 09:15:00.988566 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421195-pjqqm"] Dec 09 09:15:01 crc kubenswrapper[4786]: I1209 09:15:01.226984 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421195-pjqqm" event={"ID":"af2591ab-76a1-4771-b492-51ecf18d10c2","Type":"ContainerStarted","Data":"dd6fb488008132751b6ca8d12537856f308b0e4286d217027c632398bdd11702"} Dec 09 09:15:01 crc kubenswrapper[4786]: I1209 09:15:01.829730 4786 scope.go:117] "RemoveContainer" containerID="30f6998317760e7b163f42a1f80d6446132f2ea5dea186095670273af5b7770d" Dec 09 09:15:01 crc kubenswrapper[4786]: I1209 09:15:01.860933 4786 scope.go:117] "RemoveContainer" containerID="d2c2e20b1347eb5d6e31645fefa42e09577123db609cca223caddfd24a79f4f6" Dec 09 09:15:01 crc kubenswrapper[4786]: I1209 09:15:01.910875 4786 scope.go:117] "RemoveContainer" containerID="edfe652a450bcb713cd635c780b3593e0729e03e312059058d3d01c33f473c2a" Dec 09 09:15:02 crc 
kubenswrapper[4786]: I1209 09:15:02.239853 4786 generic.go:334] "Generic (PLEG): container finished" podID="af2591ab-76a1-4771-b492-51ecf18d10c2" containerID="906d507b4881a7780cae22e6a19ee0376383c551845c372f5b7b073852803214" exitCode=0 Dec 09 09:15:02 crc kubenswrapper[4786]: I1209 09:15:02.239906 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421195-pjqqm" event={"ID":"af2591ab-76a1-4771-b492-51ecf18d10c2","Type":"ContainerDied","Data":"906d507b4881a7780cae22e6a19ee0376383c551845c372f5b7b073852803214"} Dec 09 09:15:03 crc kubenswrapper[4786]: I1209 09:15:03.651782 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421195-pjqqm" Dec 09 09:15:03 crc kubenswrapper[4786]: I1209 09:15:03.842230 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/af2591ab-76a1-4771-b492-51ecf18d10c2-config-volume\") pod \"af2591ab-76a1-4771-b492-51ecf18d10c2\" (UID: \"af2591ab-76a1-4771-b492-51ecf18d10c2\") " Dec 09 09:15:03 crc kubenswrapper[4786]: I1209 09:15:03.842320 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6vcr\" (UniqueName: \"kubernetes.io/projected/af2591ab-76a1-4771-b492-51ecf18d10c2-kube-api-access-k6vcr\") pod \"af2591ab-76a1-4771-b492-51ecf18d10c2\" (UID: \"af2591ab-76a1-4771-b492-51ecf18d10c2\") " Dec 09 09:15:03 crc kubenswrapper[4786]: I1209 09:15:03.842582 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/af2591ab-76a1-4771-b492-51ecf18d10c2-secret-volume\") pod \"af2591ab-76a1-4771-b492-51ecf18d10c2\" (UID: \"af2591ab-76a1-4771-b492-51ecf18d10c2\") " Dec 09 09:15:03 crc kubenswrapper[4786]: I1209 09:15:03.843197 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/af2591ab-76a1-4771-b492-51ecf18d10c2-config-volume" (OuterVolumeSpecName: "config-volume") pod "af2591ab-76a1-4771-b492-51ecf18d10c2" (UID: "af2591ab-76a1-4771-b492-51ecf18d10c2"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:15:03 crc kubenswrapper[4786]: I1209 09:15:03.848672 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af2591ab-76a1-4771-b492-51ecf18d10c2-kube-api-access-k6vcr" (OuterVolumeSpecName: "kube-api-access-k6vcr") pod "af2591ab-76a1-4771-b492-51ecf18d10c2" (UID: "af2591ab-76a1-4771-b492-51ecf18d10c2"). InnerVolumeSpecName "kube-api-access-k6vcr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:15:03 crc kubenswrapper[4786]: I1209 09:15:03.854784 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af2591ab-76a1-4771-b492-51ecf18d10c2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "af2591ab-76a1-4771-b492-51ecf18d10c2" (UID: "af2591ab-76a1-4771-b492-51ecf18d10c2"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:15:03 crc kubenswrapper[4786]: I1209 09:15:03.944750 4786 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/af2591ab-76a1-4771-b492-51ecf18d10c2-config-volume\") on node \"crc\" DevicePath \"\"" Dec 09 09:15:03 crc kubenswrapper[4786]: I1209 09:15:03.944789 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6vcr\" (UniqueName: \"kubernetes.io/projected/af2591ab-76a1-4771-b492-51ecf18d10c2-kube-api-access-k6vcr\") on node \"crc\" DevicePath \"\"" Dec 09 09:15:03 crc kubenswrapper[4786]: I1209 09:15:03.944801 4786 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/af2591ab-76a1-4771-b492-51ecf18d10c2-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 09 09:15:04 crc kubenswrapper[4786]: I1209 09:15:04.261808 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421195-pjqqm" event={"ID":"af2591ab-76a1-4771-b492-51ecf18d10c2","Type":"ContainerDied","Data":"dd6fb488008132751b6ca8d12537856f308b0e4286d217027c632398bdd11702"} Dec 09 09:15:04 crc kubenswrapper[4786]: I1209 09:15:04.261862 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd6fb488008132751b6ca8d12537856f308b0e4286d217027c632398bdd11702" Dec 09 09:15:04 crc kubenswrapper[4786]: I1209 09:15:04.261895 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421195-pjqqm" Dec 09 09:15:07 crc kubenswrapper[4786]: I1209 09:15:07.226926 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-7bd4-account-create-kl59b"] Dec 09 09:15:07 crc kubenswrapper[4786]: I1209 09:15:07.233263 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-7bd4-account-create-kl59b"] Dec 09 09:15:08 crc kubenswrapper[4786]: I1209 09:15:08.030871 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-c1d9-account-create-x7vwp"] Dec 09 09:15:08 crc kubenswrapper[4786]: I1209 09:15:08.040926 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-c1d9-account-create-x7vwp"] Dec 09 09:15:09 crc kubenswrapper[4786]: I1209 09:15:09.206047 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36ed95ac-bc8e-4195-b0a1-a66bb48df015" path="/var/lib/kubelet/pods/36ed95ac-bc8e-4195-b0a1-a66bb48df015/volumes" Dec 09 09:15:09 crc kubenswrapper[4786]: I1209 09:15:09.207687 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6654fc3-1c4f-49ea-9dc4-ca16f842bead" path="/var/lib/kubelet/pods/a6654fc3-1c4f-49ea-9dc4-ca16f842bead/volumes" Dec 09 09:15:12 crc kubenswrapper[4786]: I1209 09:15:12.036558 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-4hdd8"] Dec 09 09:15:12 crc kubenswrapper[4786]: I1209 09:15:12.048467 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-4hdd8"] Dec 09 09:15:13 crc kubenswrapper[4786]: I1209 09:15:13.100783 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-84bdh"] Dec 09 09:15:13 crc kubenswrapper[4786]: I1209 09:15:13.111822 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-84bdh"] Dec 09 09:15:13 crc kubenswrapper[4786]: I1209 09:15:13.202650 4786 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19dcdb84-6784-4719-ba48-a8b0ec1f9af2" path="/var/lib/kubelet/pods/19dcdb84-6784-4719-ba48-a8b0ec1f9af2/volumes" Dec 09 09:15:13 crc kubenswrapper[4786]: I1209 09:15:13.203640 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80996405-8157-449e-b510-652bbe2f3fb7" path="/var/lib/kubelet/pods/80996405-8157-449e-b510-652bbe2f3fb7/volumes" Dec 09 09:15:14 crc kubenswrapper[4786]: I1209 09:15:14.189757 4786 scope.go:117] "RemoveContainer" containerID="b616eba668b7cb1fd5bcff5f3a2dbd4ee63df5f21169255fbb63b6a628b5d197" Dec 09 09:15:14 crc kubenswrapper[4786]: E1209 09:15:14.192774 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:15:19 crc kubenswrapper[4786]: I1209 09:15:19.042259 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6af3-account-create-bx46s"] Dec 09 09:15:19 crc kubenswrapper[4786]: I1209 09:15:19.053978 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-6af3-account-create-bx46s"] Dec 09 09:15:19 crc kubenswrapper[4786]: I1209 09:15:19.200881 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2c6015d-f922-417e-8b64-6d2544fd8d32" path="/var/lib/kubelet/pods/a2c6015d-f922-417e-8b64-6d2544fd8d32/volumes" Dec 09 09:15:25 crc kubenswrapper[4786]: I1209 09:15:25.038370 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-54njt"] Dec 09 09:15:25 crc kubenswrapper[4786]: I1209 09:15:25.050832 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/neutron-db-create-ndgjl"] Dec 09 09:15:25 crc kubenswrapper[4786]: I1209 09:15:25.060613 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-54njt"] Dec 09 09:15:25 crc kubenswrapper[4786]: I1209 09:15:25.070243 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-ndgjl"] Dec 09 09:15:25 crc kubenswrapper[4786]: I1209 09:15:25.202981 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cfed5d9-4bc8-4632-80b7-727dc329bcf9" path="/var/lib/kubelet/pods/3cfed5d9-4bc8-4632-80b7-727dc329bcf9/volumes" Dec 09 09:15:25 crc kubenswrapper[4786]: I1209 09:15:25.204585 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="816aabda-3e11-440c-89af-3cf36c86c997" path="/var/lib/kubelet/pods/816aabda-3e11-440c-89af-3cf36c86c997/volumes" Dec 09 09:15:25 crc kubenswrapper[4786]: I1209 09:15:25.529679 4786 generic.go:334] "Generic (PLEG): container finished" podID="ebb0da1f-f03a-4091-9057-2d250dd6bc07" containerID="036bbccac8f9bca29c2e27a3f0947992f11dffba30d5204ee50815bfe30df19b" exitCode=0 Dec 09 09:15:25 crc kubenswrapper[4786]: I1209 09:15:25.529749 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-twhbz" event={"ID":"ebb0da1f-f03a-4091-9057-2d250dd6bc07","Type":"ContainerDied","Data":"036bbccac8f9bca29c2e27a3f0947992f11dffba30d5204ee50815bfe30df19b"} Dec 09 09:15:26 crc kubenswrapper[4786]: I1209 09:15:26.976932 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-twhbz" Dec 09 09:15:27 crc kubenswrapper[4786]: I1209 09:15:27.065920 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ebb0da1f-f03a-4091-9057-2d250dd6bc07-inventory\") pod \"ebb0da1f-f03a-4091-9057-2d250dd6bc07\" (UID: \"ebb0da1f-f03a-4091-9057-2d250dd6bc07\") " Dec 09 09:15:27 crc kubenswrapper[4786]: I1209 09:15:27.066098 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebb0da1f-f03a-4091-9057-2d250dd6bc07-bootstrap-combined-ca-bundle\") pod \"ebb0da1f-f03a-4091-9057-2d250dd6bc07\" (UID: \"ebb0da1f-f03a-4091-9057-2d250dd6bc07\") " Dec 09 09:15:27 crc kubenswrapper[4786]: I1209 09:15:27.066185 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rfr6\" (UniqueName: \"kubernetes.io/projected/ebb0da1f-f03a-4091-9057-2d250dd6bc07-kube-api-access-8rfr6\") pod \"ebb0da1f-f03a-4091-9057-2d250dd6bc07\" (UID: \"ebb0da1f-f03a-4091-9057-2d250dd6bc07\") " Dec 09 09:15:27 crc kubenswrapper[4786]: I1209 09:15:27.066218 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ebb0da1f-f03a-4091-9057-2d250dd6bc07-ssh-key\") pod \"ebb0da1f-f03a-4091-9057-2d250dd6bc07\" (UID: \"ebb0da1f-f03a-4091-9057-2d250dd6bc07\") " Dec 09 09:15:27 crc kubenswrapper[4786]: I1209 09:15:27.071975 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebb0da1f-f03a-4091-9057-2d250dd6bc07-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "ebb0da1f-f03a-4091-9057-2d250dd6bc07" (UID: "ebb0da1f-f03a-4091-9057-2d250dd6bc07"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:15:27 crc kubenswrapper[4786]: I1209 09:15:27.072110 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebb0da1f-f03a-4091-9057-2d250dd6bc07-kube-api-access-8rfr6" (OuterVolumeSpecName: "kube-api-access-8rfr6") pod "ebb0da1f-f03a-4091-9057-2d250dd6bc07" (UID: "ebb0da1f-f03a-4091-9057-2d250dd6bc07"). InnerVolumeSpecName "kube-api-access-8rfr6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:15:27 crc kubenswrapper[4786]: I1209 09:15:27.097193 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebb0da1f-f03a-4091-9057-2d250dd6bc07-inventory" (OuterVolumeSpecName: "inventory") pod "ebb0da1f-f03a-4091-9057-2d250dd6bc07" (UID: "ebb0da1f-f03a-4091-9057-2d250dd6bc07"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:15:27 crc kubenswrapper[4786]: I1209 09:15:27.098079 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebb0da1f-f03a-4091-9057-2d250dd6bc07-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ebb0da1f-f03a-4091-9057-2d250dd6bc07" (UID: "ebb0da1f-f03a-4091-9057-2d250dd6bc07"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:15:27 crc kubenswrapper[4786]: I1209 09:15:27.169565 4786 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ebb0da1f-f03a-4091-9057-2d250dd6bc07-inventory\") on node \"crc\" DevicePath \"\"" Dec 09 09:15:27 crc kubenswrapper[4786]: I1209 09:15:27.169819 4786 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ebb0da1f-f03a-4091-9057-2d250dd6bc07-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 09:15:27 crc kubenswrapper[4786]: I1209 09:15:27.169921 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rfr6\" (UniqueName: \"kubernetes.io/projected/ebb0da1f-f03a-4091-9057-2d250dd6bc07-kube-api-access-8rfr6\") on node \"crc\" DevicePath \"\"" Dec 09 09:15:27 crc kubenswrapper[4786]: I1209 09:15:27.170135 4786 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ebb0da1f-f03a-4091-9057-2d250dd6bc07-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 09 09:15:27 crc kubenswrapper[4786]: I1209 09:15:27.553282 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-twhbz" event={"ID":"ebb0da1f-f03a-4091-9057-2d250dd6bc07","Type":"ContainerDied","Data":"73601fd89ed990fedf93793daa5e170e5e8bdb4982c6b6fcecee16be62aed12f"} Dec 09 09:15:27 crc kubenswrapper[4786]: I1209 09:15:27.553340 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73601fd89ed990fedf93793daa5e170e5e8bdb4982c6b6fcecee16be62aed12f" Dec 09 09:15:27 crc kubenswrapper[4786]: I1209 09:15:27.553463 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-twhbz" Dec 09 09:15:27 crc kubenswrapper[4786]: I1209 09:15:27.686717 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kn7qm"] Dec 09 09:15:27 crc kubenswrapper[4786]: E1209 09:15:27.689650 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebb0da1f-f03a-4091-9057-2d250dd6bc07" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 09 09:15:27 crc kubenswrapper[4786]: I1209 09:15:27.689803 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebb0da1f-f03a-4091-9057-2d250dd6bc07" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 09 09:15:27 crc kubenswrapper[4786]: E1209 09:15:27.689907 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af2591ab-76a1-4771-b492-51ecf18d10c2" containerName="collect-profiles" Dec 09 09:15:27 crc kubenswrapper[4786]: I1209 09:15:27.689969 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="af2591ab-76a1-4771-b492-51ecf18d10c2" containerName="collect-profiles" Dec 09 09:15:27 crc kubenswrapper[4786]: I1209 09:15:27.690290 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebb0da1f-f03a-4091-9057-2d250dd6bc07" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 09 09:15:27 crc kubenswrapper[4786]: I1209 09:15:27.690391 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="af2591ab-76a1-4771-b492-51ecf18d10c2" containerName="collect-profiles" Dec 09 09:15:27 crc kubenswrapper[4786]: I1209 09:15:27.691626 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kn7qm" Dec 09 09:15:27 crc kubenswrapper[4786]: I1209 09:15:27.696813 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 09 09:15:27 crc kubenswrapper[4786]: I1209 09:15:27.697312 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 09 09:15:27 crc kubenswrapper[4786]: I1209 09:15:27.697707 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 09 09:15:27 crc kubenswrapper[4786]: I1209 09:15:27.698200 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-247ss" Dec 09 09:15:27 crc kubenswrapper[4786]: I1209 09:15:27.710523 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kn7qm"] Dec 09 09:15:27 crc kubenswrapper[4786]: I1209 09:15:27.812168 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c31240e0-f612-4759-b933-3c2d89a10da3-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kn7qm\" (UID: \"c31240e0-f612-4759-b933-3c2d89a10da3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kn7qm" Dec 09 09:15:27 crc kubenswrapper[4786]: I1209 09:15:27.812300 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c31240e0-f612-4759-b933-3c2d89a10da3-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kn7qm\" (UID: \"c31240e0-f612-4759-b933-3c2d89a10da3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kn7qm" Dec 09 09:15:27 crc kubenswrapper[4786]: I1209 09:15:27.812331 4786 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6tks\" (UniqueName: \"kubernetes.io/projected/c31240e0-f612-4759-b933-3c2d89a10da3-kube-api-access-f6tks\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kn7qm\" (UID: \"c31240e0-f612-4759-b933-3c2d89a10da3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kn7qm" Dec 09 09:15:27 crc kubenswrapper[4786]: I1209 09:15:27.915957 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c31240e0-f612-4759-b933-3c2d89a10da3-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kn7qm\" (UID: \"c31240e0-f612-4759-b933-3c2d89a10da3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kn7qm" Dec 09 09:15:27 crc kubenswrapper[4786]: I1209 09:15:27.916403 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c31240e0-f612-4759-b933-3c2d89a10da3-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kn7qm\" (UID: \"c31240e0-f612-4759-b933-3c2d89a10da3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kn7qm" Dec 09 09:15:27 crc kubenswrapper[4786]: I1209 09:15:27.917374 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6tks\" (UniqueName: \"kubernetes.io/projected/c31240e0-f612-4759-b933-3c2d89a10da3-kube-api-access-f6tks\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kn7qm\" (UID: \"c31240e0-f612-4759-b933-3c2d89a10da3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kn7qm" Dec 09 09:15:27 crc kubenswrapper[4786]: I1209 09:15:27.922362 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c31240e0-f612-4759-b933-3c2d89a10da3-ssh-key\") pod 
\"download-cache-edpm-deployment-openstack-edpm-ipam-kn7qm\" (UID: \"c31240e0-f612-4759-b933-3c2d89a10da3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kn7qm" Dec 09 09:15:27 crc kubenswrapper[4786]: I1209 09:15:27.922396 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c31240e0-f612-4759-b933-3c2d89a10da3-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kn7qm\" (UID: \"c31240e0-f612-4759-b933-3c2d89a10da3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kn7qm" Dec 09 09:15:27 crc kubenswrapper[4786]: I1209 09:15:27.936243 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6tks\" (UniqueName: \"kubernetes.io/projected/c31240e0-f612-4759-b933-3c2d89a10da3-kube-api-access-f6tks\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kn7qm\" (UID: \"c31240e0-f612-4759-b933-3c2d89a10da3\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kn7qm" Dec 09 09:15:28 crc kubenswrapper[4786]: I1209 09:15:28.051092 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kn7qm" Dec 09 09:15:28 crc kubenswrapper[4786]: I1209 09:15:28.189233 4786 scope.go:117] "RemoveContainer" containerID="b616eba668b7cb1fd5bcff5f3a2dbd4ee63df5f21169255fbb63b6a628b5d197" Dec 09 09:15:28 crc kubenswrapper[4786]: E1209 09:15:28.189554 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:15:28 crc kubenswrapper[4786]: I1209 09:15:28.586256 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kn7qm"] Dec 09 09:15:29 crc kubenswrapper[4786]: I1209 09:15:29.572076 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kn7qm" event={"ID":"c31240e0-f612-4759-b933-3c2d89a10da3","Type":"ContainerStarted","Data":"6382886ce41c34528ce3b6443bb198ca1cec66cb7c570e0150d928173a2483cd"} Dec 09 09:15:29 crc kubenswrapper[4786]: I1209 09:15:29.573241 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kn7qm" event={"ID":"c31240e0-f612-4759-b933-3c2d89a10da3","Type":"ContainerStarted","Data":"1f4222ee124d76ef21f16f9f0e9737f316517dcf0797320d6964d25d46cbdb97"} Dec 09 09:15:29 crc kubenswrapper[4786]: I1209 09:15:29.602028 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kn7qm" podStartSLOduration=2.049518942 podStartE2EDuration="2.602007354s" podCreationTimestamp="2025-12-09 09:15:27 +0000 UTC" 
firstStartedPulling="2025-12-09 09:15:28.592219045 +0000 UTC m=+1894.475840271" lastFinishedPulling="2025-12-09 09:15:29.144707457 +0000 UTC m=+1895.028328683" observedRunningTime="2025-12-09 09:15:29.589964094 +0000 UTC m=+1895.473585320" watchObservedRunningTime="2025-12-09 09:15:29.602007354 +0000 UTC m=+1895.485628580" Dec 09 09:15:33 crc kubenswrapper[4786]: I1209 09:15:33.052608 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-lmgzp"] Dec 09 09:15:33 crc kubenswrapper[4786]: I1209 09:15:33.063859 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-lmgzp"] Dec 09 09:15:33 crc kubenswrapper[4786]: I1209 09:15:33.203398 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da3d31ea-e187-4e92-9553-10fcc78ce65c" path="/var/lib/kubelet/pods/da3d31ea-e187-4e92-9553-10fcc78ce65c/volumes" Dec 09 09:15:39 crc kubenswrapper[4786]: I1209 09:15:39.061660 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-b672-account-create-lpjbb"] Dec 09 09:15:39 crc kubenswrapper[4786]: I1209 09:15:39.123508 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-7011-account-create-8bwd5"] Dec 09 09:15:39 crc kubenswrapper[4786]: I1209 09:15:39.147858 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-b672-account-create-lpjbb"] Dec 09 09:15:39 crc kubenswrapper[4786]: I1209 09:15:39.171319 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-7011-account-create-8bwd5"] Dec 09 09:15:39 crc kubenswrapper[4786]: I1209 09:15:39.208488 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40435517-5196-4dbf-bb22-534c3fcb374f" path="/var/lib/kubelet/pods/40435517-5196-4dbf-bb22-534c3fcb374f/volumes" Dec 09 09:15:39 crc kubenswrapper[4786]: I1209 09:15:39.209646 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e6e672a-2488-41e8-8236-982d18b86886" 
path="/var/lib/kubelet/pods/9e6e672a-2488-41e8-8236-982d18b86886/volumes" Dec 09 09:15:39 crc kubenswrapper[4786]: I1209 09:15:39.210376 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-f3f8-account-create-9pmhn"] Dec 09 09:15:39 crc kubenswrapper[4786]: I1209 09:15:39.216173 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-f3f8-account-create-9pmhn"] Dec 09 09:15:41 crc kubenswrapper[4786]: I1209 09:15:41.209736 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b61e55b-a701-42f7-b2c4-de979b971c9b" path="/var/lib/kubelet/pods/9b61e55b-a701-42f7-b2c4-de979b971c9b/volumes" Dec 09 09:15:42 crc kubenswrapper[4786]: I1209 09:15:42.189203 4786 scope.go:117] "RemoveContainer" containerID="b616eba668b7cb1fd5bcff5f3a2dbd4ee63df5f21169255fbb63b6a628b5d197" Dec 09 09:15:42 crc kubenswrapper[4786]: E1209 09:15:42.189616 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:15:54 crc kubenswrapper[4786]: I1209 09:15:54.045988 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-sync-n2tp7"] Dec 09 09:15:54 crc kubenswrapper[4786]: I1209 09:15:54.063198 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-sync-n2tp7"] Dec 09 09:15:54 crc kubenswrapper[4786]: I1209 09:15:54.194858 4786 scope.go:117] "RemoveContainer" containerID="b616eba668b7cb1fd5bcff5f3a2dbd4ee63df5f21169255fbb63b6a628b5d197" Dec 09 09:15:54 crc kubenswrapper[4786]: E1209 09:15:54.195222 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:15:55 crc kubenswrapper[4786]: I1209 09:15:55.203053 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67234e4f-66db-4e88-9da8-18841f39d886" path="/var/lib/kubelet/pods/67234e4f-66db-4e88-9da8-18841f39d886/volumes" Dec 09 09:15:59 crc kubenswrapper[4786]: I1209 09:15:59.070624 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-e5b7-account-create-4mwmr"] Dec 09 09:15:59 crc kubenswrapper[4786]: I1209 09:15:59.082838 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-e5b7-account-create-4mwmr"] Dec 09 09:15:59 crc kubenswrapper[4786]: I1209 09:15:59.203868 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e040fc5c-c169-4e21-9158-fee40fcb1f6e" path="/var/lib/kubelet/pods/e040fc5c-c169-4e21-9158-fee40fcb1f6e/volumes" Dec 09 09:16:02 crc kubenswrapper[4786]: I1209 09:16:02.024944 4786 scope.go:117] "RemoveContainer" containerID="4453fe889bd3b9ce911e0b36145312749b54e269a6f5f7685efbe3ecbafc23f5" Dec 09 09:16:02 crc kubenswrapper[4786]: I1209 09:16:02.056557 4786 scope.go:117] "RemoveContainer" containerID="67711bcaf37edb9c076df4e51de42f8d3c1bc3b83ffd81de6991808506e32e8f" Dec 09 09:16:02 crc kubenswrapper[4786]: I1209 09:16:02.109888 4786 scope.go:117] "RemoveContainer" containerID="97222087f56ad7df9af83f27397db11b72a8fae0f0474b3458204135a25c40f0" Dec 09 09:16:02 crc kubenswrapper[4786]: I1209 09:16:02.167055 4786 scope.go:117] "RemoveContainer" containerID="822247343e7df46c3f3dff718c838cb0c3fc2f3a020c137eab61f6b66d087e63" Dec 09 09:16:02 crc kubenswrapper[4786]: I1209 09:16:02.217614 4786 scope.go:117] "RemoveContainer" 
containerID="4c670c06e3d9510ec33202311d950769125eb9b70d3ad267d2e283127de5c33c" Dec 09 09:16:02 crc kubenswrapper[4786]: I1209 09:16:02.273509 4786 scope.go:117] "RemoveContainer" containerID="87bddfb1511d4583dcb2ca91339ed9dec648aa69b077d761fa9436c10bffe3a9" Dec 09 09:16:02 crc kubenswrapper[4786]: I1209 09:16:02.317392 4786 scope.go:117] "RemoveContainer" containerID="6fb596f8627a2c2d4fe871796ee91ee4f9e3b062f354291fcd015f66bd0b0bdd" Dec 09 09:16:02 crc kubenswrapper[4786]: I1209 09:16:02.350538 4786 scope.go:117] "RemoveContainer" containerID="81d583c8e9403335c9d7cfa2c9f39bed4ec771d58373955f06951c04dbf067bd" Dec 09 09:16:02 crc kubenswrapper[4786]: I1209 09:16:02.399220 4786 scope.go:117] "RemoveContainer" containerID="e16042c8da5be967c7d8fd868116ec939d2109a19e619a353ba8a6ae2a4d09e8" Dec 09 09:16:02 crc kubenswrapper[4786]: I1209 09:16:02.445120 4786 scope.go:117] "RemoveContainer" containerID="eecf2db073ed9371ff64d7ba74f2ca4c48472488f89d85c16888bd817638af7c" Dec 09 09:16:02 crc kubenswrapper[4786]: I1209 09:16:02.477371 4786 scope.go:117] "RemoveContainer" containerID="518343404b352c42c8f6c73f07f9cda18ab3af20c1b383b8d0163c5c31c4ae76" Dec 09 09:16:02 crc kubenswrapper[4786]: I1209 09:16:02.510013 4786 scope.go:117] "RemoveContainer" containerID="641bcab2263e2e18181f26b2f7370a0c53ff5f0cfc9aedb042eec7b63d9ba76b" Dec 09 09:16:02 crc kubenswrapper[4786]: I1209 09:16:02.536843 4786 scope.go:117] "RemoveContainer" containerID="2b7cd80ea41defb641d14baac19fb4e26904cbb3618d1e8606e8f6210b4843c2" Dec 09 09:16:09 crc kubenswrapper[4786]: I1209 09:16:09.188245 4786 scope.go:117] "RemoveContainer" containerID="b616eba668b7cb1fd5bcff5f3a2dbd4ee63df5f21169255fbb63b6a628b5d197" Dec 09 09:16:09 crc kubenswrapper[4786]: E1209 09:16:09.189123 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:16:22 crc kubenswrapper[4786]: I1209 09:16:22.188338 4786 scope.go:117] "RemoveContainer" containerID="b616eba668b7cb1fd5bcff5f3a2dbd4ee63df5f21169255fbb63b6a628b5d197" Dec 09 09:16:22 crc kubenswrapper[4786]: E1209 09:16:22.189512 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:16:33 crc kubenswrapper[4786]: I1209 09:16:33.188709 4786 scope.go:117] "RemoveContainer" containerID="b616eba668b7cb1fd5bcff5f3a2dbd4ee63df5f21169255fbb63b6a628b5d197" Dec 09 09:16:33 crc kubenswrapper[4786]: E1209 09:16:33.189731 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:16:38 crc kubenswrapper[4786]: I1209 09:16:38.053817 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-frjzv"] Dec 09 09:16:38 crc kubenswrapper[4786]: I1209 09:16:38.066146 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-frjzv"] Dec 09 09:16:39 crc kubenswrapper[4786]: I1209 09:16:39.201179 4786 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="36cfdfac-acea-4bea-95f3-7221ffc3d94b" path="/var/lib/kubelet/pods/36cfdfac-acea-4bea-95f3-7221ffc3d94b/volumes" Dec 09 09:16:45 crc kubenswrapper[4786]: I1209 09:16:45.194656 4786 scope.go:117] "RemoveContainer" containerID="b616eba668b7cb1fd5bcff5f3a2dbd4ee63df5f21169255fbb63b6a628b5d197" Dec 09 09:16:45 crc kubenswrapper[4786]: E1209 09:16:45.195422 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:16:56 crc kubenswrapper[4786]: I1209 09:16:56.187711 4786 scope.go:117] "RemoveContainer" containerID="b616eba668b7cb1fd5bcff5f3a2dbd4ee63df5f21169255fbb63b6a628b5d197" Dec 09 09:16:57 crc kubenswrapper[4786]: I1209 09:16:57.001951 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" event={"ID":"60c502d4-7f9e-4d39-a197-fa70dc4a56d1","Type":"ContainerStarted","Data":"fc569af6322bbbb8d4af41759fe3e9f8f89e3ce11ef3ade02346217ef0910593"} Dec 09 09:16:58 crc kubenswrapper[4786]: I1209 09:16:58.040983 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-jsl5j"] Dec 09 09:16:58 crc kubenswrapper[4786]: I1209 09:16:58.053340 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-w9zct"] Dec 09 09:16:58 crc kubenswrapper[4786]: I1209 09:16:58.064595 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-jsl5j"] Dec 09 09:16:58 crc kubenswrapper[4786]: I1209 09:16:58.075281 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-w9zct"] Dec 09 09:16:59 crc 
kubenswrapper[4786]: I1209 09:16:59.204222 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39bcb4fa-81ad-4ec5-8168-efcb8da5ab61" path="/var/lib/kubelet/pods/39bcb4fa-81ad-4ec5-8168-efcb8da5ab61/volumes" Dec 09 09:16:59 crc kubenswrapper[4786]: I1209 09:16:59.206045 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3272921-6cce-4156-bed5-758d1a8a38f5" path="/var/lib/kubelet/pods/e3272921-6cce-4156-bed5-758d1a8a38f5/volumes" Dec 09 09:17:02 crc kubenswrapper[4786]: I1209 09:17:02.833636 4786 scope.go:117] "RemoveContainer" containerID="1499ce1ebbfcf214c8fe60281c634979c38386982715dfd174d486060d76bad5" Dec 09 09:17:02 crc kubenswrapper[4786]: I1209 09:17:02.873920 4786 scope.go:117] "RemoveContainer" containerID="3eb95625856726ad6f930e9ecbff06e8aee8033ad5525bd79883b1c025bff226" Dec 09 09:17:02 crc kubenswrapper[4786]: I1209 09:17:02.935368 4786 scope.go:117] "RemoveContainer" containerID="9adc39d9005af4ca652a8f746f3029e57881da15a65377308b58a4f5d65d568c" Dec 09 09:17:25 crc kubenswrapper[4786]: I1209 09:17:25.087895 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-9ddvk"] Dec 09 09:17:25 crc kubenswrapper[4786]: I1209 09:17:25.103557 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-9ddvk"] Dec 09 09:17:25 crc kubenswrapper[4786]: I1209 09:17:25.202124 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c28b549-bfff-47f7-b262-c3203bd88cb1" path="/var/lib/kubelet/pods/8c28b549-bfff-47f7-b262-c3203bd88cb1/volumes" Dec 09 09:17:32 crc kubenswrapper[4786]: I1209 09:17:32.404147 4786 generic.go:334] "Generic (PLEG): container finished" podID="c31240e0-f612-4759-b933-3c2d89a10da3" containerID="6382886ce41c34528ce3b6443bb198ca1cec66cb7c570e0150d928173a2483cd" exitCode=0 Dec 09 09:17:32 crc kubenswrapper[4786]: I1209 09:17:32.404240 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kn7qm" event={"ID":"c31240e0-f612-4759-b933-3c2d89a10da3","Type":"ContainerDied","Data":"6382886ce41c34528ce3b6443bb198ca1cec66cb7c570e0150d928173a2483cd"} Dec 09 09:17:33 crc kubenswrapper[4786]: I1209 09:17:33.902488 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kn7qm" Dec 09 09:17:34 crc kubenswrapper[4786]: I1209 09:17:34.017911 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c31240e0-f612-4759-b933-3c2d89a10da3-ssh-key\") pod \"c31240e0-f612-4759-b933-3c2d89a10da3\" (UID: \"c31240e0-f612-4759-b933-3c2d89a10da3\") " Dec 09 09:17:34 crc kubenswrapper[4786]: I1209 09:17:34.018082 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c31240e0-f612-4759-b933-3c2d89a10da3-inventory\") pod \"c31240e0-f612-4759-b933-3c2d89a10da3\" (UID: \"c31240e0-f612-4759-b933-3c2d89a10da3\") " Dec 09 09:17:34 crc kubenswrapper[4786]: I1209 09:17:34.018142 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6tks\" (UniqueName: \"kubernetes.io/projected/c31240e0-f612-4759-b933-3c2d89a10da3-kube-api-access-f6tks\") pod \"c31240e0-f612-4759-b933-3c2d89a10da3\" (UID: \"c31240e0-f612-4759-b933-3c2d89a10da3\") " Dec 09 09:17:34 crc kubenswrapper[4786]: I1209 09:17:34.025420 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c31240e0-f612-4759-b933-3c2d89a10da3-kube-api-access-f6tks" (OuterVolumeSpecName: "kube-api-access-f6tks") pod "c31240e0-f612-4759-b933-3c2d89a10da3" (UID: "c31240e0-f612-4759-b933-3c2d89a10da3"). InnerVolumeSpecName "kube-api-access-f6tks". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:17:34 crc kubenswrapper[4786]: I1209 09:17:34.058853 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c31240e0-f612-4759-b933-3c2d89a10da3-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c31240e0-f612-4759-b933-3c2d89a10da3" (UID: "c31240e0-f612-4759-b933-3c2d89a10da3"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:17:34 crc kubenswrapper[4786]: I1209 09:17:34.063457 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c31240e0-f612-4759-b933-3c2d89a10da3-inventory" (OuterVolumeSpecName: "inventory") pod "c31240e0-f612-4759-b933-3c2d89a10da3" (UID: "c31240e0-f612-4759-b933-3c2d89a10da3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:17:34 crc kubenswrapper[4786]: I1209 09:17:34.121386 4786 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c31240e0-f612-4759-b933-3c2d89a10da3-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 09 09:17:34 crc kubenswrapper[4786]: I1209 09:17:34.121430 4786 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c31240e0-f612-4759-b933-3c2d89a10da3-inventory\") on node \"crc\" DevicePath \"\"" Dec 09 09:17:34 crc kubenswrapper[4786]: I1209 09:17:34.121454 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6tks\" (UniqueName: \"kubernetes.io/projected/c31240e0-f612-4759-b933-3c2d89a10da3-kube-api-access-f6tks\") on node \"crc\" DevicePath \"\"" Dec 09 09:17:34 crc kubenswrapper[4786]: I1209 09:17:34.427748 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kn7qm" 
event={"ID":"c31240e0-f612-4759-b933-3c2d89a10da3","Type":"ContainerDied","Data":"1f4222ee124d76ef21f16f9f0e9737f316517dcf0797320d6964d25d46cbdb97"} Dec 09 09:17:34 crc kubenswrapper[4786]: I1209 09:17:34.428105 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f4222ee124d76ef21f16f9f0e9737f316517dcf0797320d6964d25d46cbdb97" Dec 09 09:17:34 crc kubenswrapper[4786]: I1209 09:17:34.427815 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kn7qm" Dec 09 09:17:34 crc kubenswrapper[4786]: I1209 09:17:34.523748 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tx29k"] Dec 09 09:17:34 crc kubenswrapper[4786]: E1209 09:17:34.524295 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c31240e0-f612-4759-b933-3c2d89a10da3" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 09:17:34 crc kubenswrapper[4786]: I1209 09:17:34.524325 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="c31240e0-f612-4759-b933-3c2d89a10da3" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 09:17:34 crc kubenswrapper[4786]: I1209 09:17:34.524612 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="c31240e0-f612-4759-b933-3c2d89a10da3" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 09 09:17:34 crc kubenswrapper[4786]: I1209 09:17:34.526489 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tx29k" Dec 09 09:17:34 crc kubenswrapper[4786]: I1209 09:17:34.529570 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 09 09:17:34 crc kubenswrapper[4786]: I1209 09:17:34.530288 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 09 09:17:34 crc kubenswrapper[4786]: I1209 09:17:34.530401 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 09 09:17:34 crc kubenswrapper[4786]: I1209 09:17:34.536968 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-247ss" Dec 09 09:17:34 crc kubenswrapper[4786]: I1209 09:17:34.551810 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tx29k"] Dec 09 09:17:34 crc kubenswrapper[4786]: I1209 09:17:34.633965 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e2ad540b-313c-4600-bf54-c14c9a6a2969-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-tx29k\" (UID: \"e2ad540b-313c-4600-bf54-c14c9a6a2969\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tx29k" Dec 09 09:17:34 crc kubenswrapper[4786]: I1209 09:17:34.634042 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e2ad540b-313c-4600-bf54-c14c9a6a2969-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-tx29k\" (UID: \"e2ad540b-313c-4600-bf54-c14c9a6a2969\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tx29k" Dec 09 09:17:34 crc kubenswrapper[4786]: I1209 09:17:34.634201 4786 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5mbv\" (UniqueName: \"kubernetes.io/projected/e2ad540b-313c-4600-bf54-c14c9a6a2969-kube-api-access-p5mbv\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-tx29k\" (UID: \"e2ad540b-313c-4600-bf54-c14c9a6a2969\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tx29k" Dec 09 09:17:34 crc kubenswrapper[4786]: I1209 09:17:34.736088 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5mbv\" (UniqueName: \"kubernetes.io/projected/e2ad540b-313c-4600-bf54-c14c9a6a2969-kube-api-access-p5mbv\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-tx29k\" (UID: \"e2ad540b-313c-4600-bf54-c14c9a6a2969\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tx29k" Dec 09 09:17:34 crc kubenswrapper[4786]: I1209 09:17:34.736190 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e2ad540b-313c-4600-bf54-c14c9a6a2969-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-tx29k\" (UID: \"e2ad540b-313c-4600-bf54-c14c9a6a2969\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tx29k" Dec 09 09:17:34 crc kubenswrapper[4786]: I1209 09:17:34.736231 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e2ad540b-313c-4600-bf54-c14c9a6a2969-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-tx29k\" (UID: \"e2ad540b-313c-4600-bf54-c14c9a6a2969\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tx29k" Dec 09 09:17:34 crc kubenswrapper[4786]: I1209 09:17:34.741130 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e2ad540b-313c-4600-bf54-c14c9a6a2969-ssh-key\") pod 
\"configure-network-edpm-deployment-openstack-edpm-ipam-tx29k\" (UID: \"e2ad540b-313c-4600-bf54-c14c9a6a2969\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tx29k" Dec 09 09:17:34 crc kubenswrapper[4786]: I1209 09:17:34.759571 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e2ad540b-313c-4600-bf54-c14c9a6a2969-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-tx29k\" (UID: \"e2ad540b-313c-4600-bf54-c14c9a6a2969\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tx29k" Dec 09 09:17:34 crc kubenswrapper[4786]: I1209 09:17:34.761984 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5mbv\" (UniqueName: \"kubernetes.io/projected/e2ad540b-313c-4600-bf54-c14c9a6a2969-kube-api-access-p5mbv\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-tx29k\" (UID: \"e2ad540b-313c-4600-bf54-c14c9a6a2969\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tx29k" Dec 09 09:17:34 crc kubenswrapper[4786]: I1209 09:17:34.852922 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tx29k" Dec 09 09:17:35 crc kubenswrapper[4786]: I1209 09:17:35.442751 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tx29k"] Dec 09 09:17:35 crc kubenswrapper[4786]: I1209 09:17:35.448841 4786 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 09:17:36 crc kubenswrapper[4786]: I1209 09:17:36.455072 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tx29k" event={"ID":"e2ad540b-313c-4600-bf54-c14c9a6a2969","Type":"ContainerStarted","Data":"86fee433762990fece7e6038d650fbecf8aec8c599ce7bdd660ac61ba3e1b46b"} Dec 09 09:17:36 crc kubenswrapper[4786]: I1209 09:17:36.455878 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tx29k" event={"ID":"e2ad540b-313c-4600-bf54-c14c9a6a2969","Type":"ContainerStarted","Data":"2210b319b91fc4e1cf5d785ea295ad00520e147aed7b9d997e1baf7e2b416461"} Dec 09 09:17:36 crc kubenswrapper[4786]: I1209 09:17:36.474709 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tx29k" podStartSLOduration=2.007752847 podStartE2EDuration="2.474686985s" podCreationTimestamp="2025-12-09 09:17:34 +0000 UTC" firstStartedPulling="2025-12-09 09:17:35.448118622 +0000 UTC m=+2021.331739848" lastFinishedPulling="2025-12-09 09:17:35.91505275 +0000 UTC m=+2021.798673986" observedRunningTime="2025-12-09 09:17:36.469869856 +0000 UTC m=+2022.353491112" watchObservedRunningTime="2025-12-09 09:17:36.474686985 +0000 UTC m=+2022.358308211" Dec 09 09:17:46 crc kubenswrapper[4786]: I1209 09:17:46.043368 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-hzbbl"] Dec 09 09:17:46 crc 
kubenswrapper[4786]: I1209 09:17:46.053848 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-hzbbl"] Dec 09 09:17:47 crc kubenswrapper[4786]: I1209 09:17:47.034548 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-6m994"] Dec 09 09:17:47 crc kubenswrapper[4786]: I1209 09:17:47.046169 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-6m994"] Dec 09 09:17:47 crc kubenswrapper[4786]: I1209 09:17:47.200310 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fe0b12b-06a5-45ae-8a51-073fb093cd54" path="/var/lib/kubelet/pods/1fe0b12b-06a5-45ae-8a51-073fb093cd54/volumes" Dec 09 09:17:47 crc kubenswrapper[4786]: I1209 09:17:47.201753 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c" path="/var/lib/kubelet/pods/4ee0616a-d77d-4a68-b0d4-6fe0f1a6df1c/volumes" Dec 09 09:17:52 crc kubenswrapper[4786]: I1209 09:17:52.035248 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-mzjbq"] Dec 09 09:17:52 crc kubenswrapper[4786]: I1209 09:17:52.046180 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-mzjbq"] Dec 09 09:17:53 crc kubenswrapper[4786]: I1209 09:17:53.029925 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-mxxjn"] Dec 09 09:17:53 crc kubenswrapper[4786]: I1209 09:17:53.041834 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-bvc2g"] Dec 09 09:17:53 crc kubenswrapper[4786]: I1209 09:17:53.051305 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-mxxjn"] Dec 09 09:17:53 crc kubenswrapper[4786]: I1209 09:17:53.059199 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-bvc2g"] Dec 09 09:17:53 crc kubenswrapper[4786]: I1209 09:17:53.200335 4786 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6046a22f-fd23-407b-a9ae-0d02d4b41170" path="/var/lib/kubelet/pods/6046a22f-fd23-407b-a9ae-0d02d4b41170/volumes" Dec 09 09:17:53 crc kubenswrapper[4786]: I1209 09:17:53.201333 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87bf6602-f74b-49e8-874d-1ea9bbf6ec7b" path="/var/lib/kubelet/pods/87bf6602-f74b-49e8-874d-1ea9bbf6ec7b/volumes" Dec 09 09:17:53 crc kubenswrapper[4786]: I1209 09:17:53.202042 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f678c744-2a47-4f18-8e01-6f438a6e46e5" path="/var/lib/kubelet/pods/f678c744-2a47-4f18-8e01-6f438a6e46e5/volumes" Dec 09 09:18:03 crc kubenswrapper[4786]: I1209 09:18:03.114494 4786 scope.go:117] "RemoveContainer" containerID="3fca5a3c4b48fce183f2478c014be97407825de1a0b45566a66a5e88a41e032f" Dec 09 09:18:03 crc kubenswrapper[4786]: I1209 09:18:03.145925 4786 scope.go:117] "RemoveContainer" containerID="f9276b12c55a2e685ffdba9d25335ebeecbd4fe0e9092bd2d5e27acc02fe6a70" Dec 09 09:18:03 crc kubenswrapper[4786]: I1209 09:18:03.224449 4786 scope.go:117] "RemoveContainer" containerID="b688abd27ed49f3e9e0ed18ab7b5301653b7cb77b1cf9523c49e0e295b883d6d" Dec 09 09:18:03 crc kubenswrapper[4786]: I1209 09:18:03.249287 4786 scope.go:117] "RemoveContainer" containerID="078da1a366c6f949783c1769fdafd18365b94f3cca28caf9b0b6b300905306da" Dec 09 09:18:03 crc kubenswrapper[4786]: I1209 09:18:03.318805 4786 scope.go:117] "RemoveContainer" containerID="1376f5bcc4ba1485a13800089dca49bd3790af9966b24e34250fdedf59967039" Dec 09 09:18:03 crc kubenswrapper[4786]: I1209 09:18:03.365528 4786 scope.go:117] "RemoveContainer" containerID="d9a2cc35960764204adb4db837e952174a5c44528e376a8416466db2145ebe1c" Dec 09 09:18:05 crc kubenswrapper[4786]: I1209 09:18:05.047749 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-e5da-account-create-9krl4"] Dec 09 09:18:05 crc kubenswrapper[4786]: I1209 09:18:05.059392 4786 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-be56-account-create-w8ntw"] Dec 09 09:18:05 crc kubenswrapper[4786]: I1209 09:18:05.070984 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cfbc-account-create-tmsln"] Dec 09 09:18:05 crc kubenswrapper[4786]: I1209 09:18:05.079851 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cfbc-account-create-tmsln"] Dec 09 09:18:05 crc kubenswrapper[4786]: I1209 09:18:05.088931 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-be56-account-create-w8ntw"] Dec 09 09:18:05 crc kubenswrapper[4786]: I1209 09:18:05.102405 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-e5da-account-create-9krl4"] Dec 09 09:18:05 crc kubenswrapper[4786]: I1209 09:18:05.210635 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0461c86e-7dbb-490a-b72b-cd19765acb95" path="/var/lib/kubelet/pods/0461c86e-7dbb-490a-b72b-cd19765acb95/volumes" Dec 09 09:18:05 crc kubenswrapper[4786]: I1209 09:18:05.211227 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7734cf63-1abe-49b9-a5f9-d67192a52bad" path="/var/lib/kubelet/pods/7734cf63-1abe-49b9-a5f9-d67192a52bad/volumes" Dec 09 09:18:05 crc kubenswrapper[4786]: I1209 09:18:05.211840 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9d4fc2b-79a8-426f-8540-f5187065225a" path="/var/lib/kubelet/pods/c9d4fc2b-79a8-426f-8540-f5187065225a/volumes" Dec 09 09:18:05 crc kubenswrapper[4786]: I1209 09:18:05.277903 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-f2cpl"] Dec 09 09:18:05 crc kubenswrapper[4786]: I1209 09:18:05.280158 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f2cpl" Dec 09 09:18:05 crc kubenswrapper[4786]: I1209 09:18:05.291186 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f2cpl"] Dec 09 09:18:05 crc kubenswrapper[4786]: I1209 09:18:05.470453 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fkt8\" (UniqueName: \"kubernetes.io/projected/cb5b640e-4d60-4654-b97c-15df67676fb8-kube-api-access-6fkt8\") pod \"redhat-operators-f2cpl\" (UID: \"cb5b640e-4d60-4654-b97c-15df67676fb8\") " pod="openshift-marketplace/redhat-operators-f2cpl" Dec 09 09:18:05 crc kubenswrapper[4786]: I1209 09:18:05.470524 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb5b640e-4d60-4654-b97c-15df67676fb8-catalog-content\") pod \"redhat-operators-f2cpl\" (UID: \"cb5b640e-4d60-4654-b97c-15df67676fb8\") " pod="openshift-marketplace/redhat-operators-f2cpl" Dec 09 09:18:05 crc kubenswrapper[4786]: I1209 09:18:05.470585 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb5b640e-4d60-4654-b97c-15df67676fb8-utilities\") pod \"redhat-operators-f2cpl\" (UID: \"cb5b640e-4d60-4654-b97c-15df67676fb8\") " pod="openshift-marketplace/redhat-operators-f2cpl" Dec 09 09:18:05 crc kubenswrapper[4786]: I1209 09:18:05.573586 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb5b640e-4d60-4654-b97c-15df67676fb8-catalog-content\") pod \"redhat-operators-f2cpl\" (UID: \"cb5b640e-4d60-4654-b97c-15df67676fb8\") " pod="openshift-marketplace/redhat-operators-f2cpl" Dec 09 09:18:05 crc kubenswrapper[4786]: I1209 09:18:05.573672 4786 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb5b640e-4d60-4654-b97c-15df67676fb8-utilities\") pod \"redhat-operators-f2cpl\" (UID: \"cb5b640e-4d60-4654-b97c-15df67676fb8\") " pod="openshift-marketplace/redhat-operators-f2cpl" Dec 09 09:18:05 crc kubenswrapper[4786]: I1209 09:18:05.574025 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fkt8\" (UniqueName: \"kubernetes.io/projected/cb5b640e-4d60-4654-b97c-15df67676fb8-kube-api-access-6fkt8\") pod \"redhat-operators-f2cpl\" (UID: \"cb5b640e-4d60-4654-b97c-15df67676fb8\") " pod="openshift-marketplace/redhat-operators-f2cpl" Dec 09 09:18:05 crc kubenswrapper[4786]: I1209 09:18:05.574483 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb5b640e-4d60-4654-b97c-15df67676fb8-utilities\") pod \"redhat-operators-f2cpl\" (UID: \"cb5b640e-4d60-4654-b97c-15df67676fb8\") " pod="openshift-marketplace/redhat-operators-f2cpl" Dec 09 09:18:05 crc kubenswrapper[4786]: I1209 09:18:05.574512 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb5b640e-4d60-4654-b97c-15df67676fb8-catalog-content\") pod \"redhat-operators-f2cpl\" (UID: \"cb5b640e-4d60-4654-b97c-15df67676fb8\") " pod="openshift-marketplace/redhat-operators-f2cpl" Dec 09 09:18:05 crc kubenswrapper[4786]: I1209 09:18:05.607690 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fkt8\" (UniqueName: \"kubernetes.io/projected/cb5b640e-4d60-4654-b97c-15df67676fb8-kube-api-access-6fkt8\") pod \"redhat-operators-f2cpl\" (UID: \"cb5b640e-4d60-4654-b97c-15df67676fb8\") " pod="openshift-marketplace/redhat-operators-f2cpl" Dec 09 09:18:05 crc kubenswrapper[4786]: I1209 09:18:05.902915 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f2cpl" Dec 09 09:18:06 crc kubenswrapper[4786]: I1209 09:18:06.295851 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-f2cpl"] Dec 09 09:18:06 crc kubenswrapper[4786]: I1209 09:18:06.763451 4786 generic.go:334] "Generic (PLEG): container finished" podID="cb5b640e-4d60-4654-b97c-15df67676fb8" containerID="a9858aa72107337d02f9878be72f642718e93c0395ee2a983547b78969007e05" exitCode=0 Dec 09 09:18:06 crc kubenswrapper[4786]: I1209 09:18:06.763554 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f2cpl" event={"ID":"cb5b640e-4d60-4654-b97c-15df67676fb8","Type":"ContainerDied","Data":"a9858aa72107337d02f9878be72f642718e93c0395ee2a983547b78969007e05"} Dec 09 09:18:06 crc kubenswrapper[4786]: I1209 09:18:06.763897 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f2cpl" event={"ID":"cb5b640e-4d60-4654-b97c-15df67676fb8","Type":"ContainerStarted","Data":"41bc16961a60eb7db856457104945f6d5b3f73dc7c3d6b133c4f127fc350ab23"} Dec 09 09:18:08 crc kubenswrapper[4786]: I1209 09:18:08.788234 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f2cpl" event={"ID":"cb5b640e-4d60-4654-b97c-15df67676fb8","Type":"ContainerStarted","Data":"46f3ced617ab129f30500909dffb802e62be95b211a808432d7f271de685a889"} Dec 09 09:18:10 crc kubenswrapper[4786]: I1209 09:18:10.808729 4786 generic.go:334] "Generic (PLEG): container finished" podID="cb5b640e-4d60-4654-b97c-15df67676fb8" containerID="46f3ced617ab129f30500909dffb802e62be95b211a808432d7f271de685a889" exitCode=0 Dec 09 09:18:10 crc kubenswrapper[4786]: I1209 09:18:10.808802 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f2cpl" 
event={"ID":"cb5b640e-4d60-4654-b97c-15df67676fb8","Type":"ContainerDied","Data":"46f3ced617ab129f30500909dffb802e62be95b211a808432d7f271de685a889"} Dec 09 09:18:11 crc kubenswrapper[4786]: I1209 09:18:11.820362 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f2cpl" event={"ID":"cb5b640e-4d60-4654-b97c-15df67676fb8","Type":"ContainerStarted","Data":"bb331c9aa7b2560dd2e64af669dabfb4a897e423cf6078e539128067b8f65ae7"} Dec 09 09:18:11 crc kubenswrapper[4786]: I1209 09:18:11.848308 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-f2cpl" podStartSLOduration=2.263706167 podStartE2EDuration="6.84827027s" podCreationTimestamp="2025-12-09 09:18:05 +0000 UTC" firstStartedPulling="2025-12-09 09:18:06.765968603 +0000 UTC m=+2052.649589829" lastFinishedPulling="2025-12-09 09:18:11.350532706 +0000 UTC m=+2057.234153932" observedRunningTime="2025-12-09 09:18:11.841102102 +0000 UTC m=+2057.724723358" watchObservedRunningTime="2025-12-09 09:18:11.84827027 +0000 UTC m=+2057.731891496" Dec 09 09:18:15 crc kubenswrapper[4786]: I1209 09:18:15.903307 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-f2cpl" Dec 09 09:18:15 crc kubenswrapper[4786]: I1209 09:18:15.903994 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-f2cpl" Dec 09 09:18:16 crc kubenswrapper[4786]: I1209 09:18:16.949494 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-f2cpl" podUID="cb5b640e-4d60-4654-b97c-15df67676fb8" containerName="registry-server" probeResult="failure" output=< Dec 09 09:18:16 crc kubenswrapper[4786]: timeout: failed to connect service ":50051" within 1s Dec 09 09:18:16 crc kubenswrapper[4786]: > Dec 09 09:18:25 crc kubenswrapper[4786]: I1209 09:18:25.953504 4786 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-operators-f2cpl" Dec 09 09:18:26 crc kubenswrapper[4786]: I1209 09:18:26.009001 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-f2cpl" Dec 09 09:18:26 crc kubenswrapper[4786]: I1209 09:18:26.198781 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f2cpl"] Dec 09 09:18:26 crc kubenswrapper[4786]: I1209 09:18:26.980293 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-f2cpl" podUID="cb5b640e-4d60-4654-b97c-15df67676fb8" containerName="registry-server" containerID="cri-o://bb331c9aa7b2560dd2e64af669dabfb4a897e423cf6078e539128067b8f65ae7" gracePeriod=2 Dec 09 09:18:27 crc kubenswrapper[4786]: I1209 09:18:27.555862 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-f2cpl" Dec 09 09:18:27 crc kubenswrapper[4786]: I1209 09:18:27.727110 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6fkt8\" (UniqueName: \"kubernetes.io/projected/cb5b640e-4d60-4654-b97c-15df67676fb8-kube-api-access-6fkt8\") pod \"cb5b640e-4d60-4654-b97c-15df67676fb8\" (UID: \"cb5b640e-4d60-4654-b97c-15df67676fb8\") " Dec 09 09:18:27 crc kubenswrapper[4786]: I1209 09:18:27.727623 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb5b640e-4d60-4654-b97c-15df67676fb8-utilities\") pod \"cb5b640e-4d60-4654-b97c-15df67676fb8\" (UID: \"cb5b640e-4d60-4654-b97c-15df67676fb8\") " Dec 09 09:18:27 crc kubenswrapper[4786]: I1209 09:18:27.727647 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb5b640e-4d60-4654-b97c-15df67676fb8-catalog-content\") pod 
\"cb5b640e-4d60-4654-b97c-15df67676fb8\" (UID: \"cb5b640e-4d60-4654-b97c-15df67676fb8\") " Dec 09 09:18:27 crc kubenswrapper[4786]: I1209 09:18:27.728330 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb5b640e-4d60-4654-b97c-15df67676fb8-utilities" (OuterVolumeSpecName: "utilities") pod "cb5b640e-4d60-4654-b97c-15df67676fb8" (UID: "cb5b640e-4d60-4654-b97c-15df67676fb8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:18:27 crc kubenswrapper[4786]: I1209 09:18:27.732906 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb5b640e-4d60-4654-b97c-15df67676fb8-kube-api-access-6fkt8" (OuterVolumeSpecName: "kube-api-access-6fkt8") pod "cb5b640e-4d60-4654-b97c-15df67676fb8" (UID: "cb5b640e-4d60-4654-b97c-15df67676fb8"). InnerVolumeSpecName "kube-api-access-6fkt8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:18:27 crc kubenswrapper[4786]: I1209 09:18:27.831350 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cb5b640e-4d60-4654-b97c-15df67676fb8-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 09:18:27 crc kubenswrapper[4786]: I1209 09:18:27.831381 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6fkt8\" (UniqueName: \"kubernetes.io/projected/cb5b640e-4d60-4654-b97c-15df67676fb8-kube-api-access-6fkt8\") on node \"crc\" DevicePath \"\"" Dec 09 09:18:27 crc kubenswrapper[4786]: I1209 09:18:27.844067 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb5b640e-4d60-4654-b97c-15df67676fb8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cb5b640e-4d60-4654-b97c-15df67676fb8" (UID: "cb5b640e-4d60-4654-b97c-15df67676fb8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:18:27 crc kubenswrapper[4786]: I1209 09:18:27.932575 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cb5b640e-4d60-4654-b97c-15df67676fb8-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 09:18:27 crc kubenswrapper[4786]: I1209 09:18:27.993761 4786 generic.go:334] "Generic (PLEG): container finished" podID="cb5b640e-4d60-4654-b97c-15df67676fb8" containerID="bb331c9aa7b2560dd2e64af669dabfb4a897e423cf6078e539128067b8f65ae7" exitCode=0 Dec 09 09:18:27 crc kubenswrapper[4786]: I1209 09:18:27.993819 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f2cpl" event={"ID":"cb5b640e-4d60-4654-b97c-15df67676fb8","Type":"ContainerDied","Data":"bb331c9aa7b2560dd2e64af669dabfb4a897e423cf6078e539128067b8f65ae7"} Dec 09 09:18:27 crc kubenswrapper[4786]: I1209 09:18:27.993847 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-f2cpl" Dec 09 09:18:27 crc kubenswrapper[4786]: I1209 09:18:27.993871 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-f2cpl" event={"ID":"cb5b640e-4d60-4654-b97c-15df67676fb8","Type":"ContainerDied","Data":"41bc16961a60eb7db856457104945f6d5b3f73dc7c3d6b133c4f127fc350ab23"} Dec 09 09:18:27 crc kubenswrapper[4786]: I1209 09:18:27.993908 4786 scope.go:117] "RemoveContainer" containerID="bb331c9aa7b2560dd2e64af669dabfb4a897e423cf6078e539128067b8f65ae7" Dec 09 09:18:28 crc kubenswrapper[4786]: I1209 09:18:28.027206 4786 scope.go:117] "RemoveContainer" containerID="46f3ced617ab129f30500909dffb802e62be95b211a808432d7f271de685a889" Dec 09 09:18:28 crc kubenswrapper[4786]: I1209 09:18:28.028484 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-f2cpl"] Dec 09 09:18:28 crc kubenswrapper[4786]: I1209 09:18:28.038400 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-f2cpl"] Dec 09 09:18:28 crc kubenswrapper[4786]: I1209 09:18:28.050314 4786 scope.go:117] "RemoveContainer" containerID="a9858aa72107337d02f9878be72f642718e93c0395ee2a983547b78969007e05" Dec 09 09:18:28 crc kubenswrapper[4786]: I1209 09:18:28.096458 4786 scope.go:117] "RemoveContainer" containerID="bb331c9aa7b2560dd2e64af669dabfb4a897e423cf6078e539128067b8f65ae7" Dec 09 09:18:28 crc kubenswrapper[4786]: E1209 09:18:28.097067 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb331c9aa7b2560dd2e64af669dabfb4a897e423cf6078e539128067b8f65ae7\": container with ID starting with bb331c9aa7b2560dd2e64af669dabfb4a897e423cf6078e539128067b8f65ae7 not found: ID does not exist" containerID="bb331c9aa7b2560dd2e64af669dabfb4a897e423cf6078e539128067b8f65ae7" Dec 09 09:18:28 crc kubenswrapper[4786]: I1209 09:18:28.097125 4786 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb331c9aa7b2560dd2e64af669dabfb4a897e423cf6078e539128067b8f65ae7"} err="failed to get container status \"bb331c9aa7b2560dd2e64af669dabfb4a897e423cf6078e539128067b8f65ae7\": rpc error: code = NotFound desc = could not find container \"bb331c9aa7b2560dd2e64af669dabfb4a897e423cf6078e539128067b8f65ae7\": container with ID starting with bb331c9aa7b2560dd2e64af669dabfb4a897e423cf6078e539128067b8f65ae7 not found: ID does not exist" Dec 09 09:18:28 crc kubenswrapper[4786]: I1209 09:18:28.097166 4786 scope.go:117] "RemoveContainer" containerID="46f3ced617ab129f30500909dffb802e62be95b211a808432d7f271de685a889" Dec 09 09:18:28 crc kubenswrapper[4786]: E1209 09:18:28.097621 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46f3ced617ab129f30500909dffb802e62be95b211a808432d7f271de685a889\": container with ID starting with 46f3ced617ab129f30500909dffb802e62be95b211a808432d7f271de685a889 not found: ID does not exist" containerID="46f3ced617ab129f30500909dffb802e62be95b211a808432d7f271de685a889" Dec 09 09:18:28 crc kubenswrapper[4786]: I1209 09:18:28.097741 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46f3ced617ab129f30500909dffb802e62be95b211a808432d7f271de685a889"} err="failed to get container status \"46f3ced617ab129f30500909dffb802e62be95b211a808432d7f271de685a889\": rpc error: code = NotFound desc = could not find container \"46f3ced617ab129f30500909dffb802e62be95b211a808432d7f271de685a889\": container with ID starting with 46f3ced617ab129f30500909dffb802e62be95b211a808432d7f271de685a889 not found: ID does not exist" Dec 09 09:18:28 crc kubenswrapper[4786]: I1209 09:18:28.097823 4786 scope.go:117] "RemoveContainer" containerID="a9858aa72107337d02f9878be72f642718e93c0395ee2a983547b78969007e05" Dec 09 09:18:28 crc kubenswrapper[4786]: E1209 
09:18:28.098276 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9858aa72107337d02f9878be72f642718e93c0395ee2a983547b78969007e05\": container with ID starting with a9858aa72107337d02f9878be72f642718e93c0395ee2a983547b78969007e05 not found: ID does not exist" containerID="a9858aa72107337d02f9878be72f642718e93c0395ee2a983547b78969007e05" Dec 09 09:18:28 crc kubenswrapper[4786]: I1209 09:18:28.098373 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9858aa72107337d02f9878be72f642718e93c0395ee2a983547b78969007e05"} err="failed to get container status \"a9858aa72107337d02f9878be72f642718e93c0395ee2a983547b78969007e05\": rpc error: code = NotFound desc = could not find container \"a9858aa72107337d02f9878be72f642718e93c0395ee2a983547b78969007e05\": container with ID starting with a9858aa72107337d02f9878be72f642718e93c0395ee2a983547b78969007e05 not found: ID does not exist" Dec 09 09:18:29 crc kubenswrapper[4786]: I1209 09:18:29.201450 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb5b640e-4d60-4654-b97c-15df67676fb8" path="/var/lib/kubelet/pods/cb5b640e-4d60-4654-b97c-15df67676fb8/volumes" Dec 09 09:18:46 crc kubenswrapper[4786]: I1209 09:18:46.044148 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-q4p7n"] Dec 09 09:18:46 crc kubenswrapper[4786]: I1209 09:18:46.054390 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-q4p7n"] Dec 09 09:18:47 crc kubenswrapper[4786]: I1209 09:18:47.207035 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5beb8b6-6d1a-45cd-94cf-81754c4db040" path="/var/lib/kubelet/pods/f5beb8b6-6d1a-45cd-94cf-81754c4db040/volumes" Dec 09 09:18:52 crc kubenswrapper[4786]: I1209 09:18:52.253498 4786 generic.go:334] "Generic (PLEG): container finished" 
podID="e2ad540b-313c-4600-bf54-c14c9a6a2969" containerID="86fee433762990fece7e6038d650fbecf8aec8c599ce7bdd660ac61ba3e1b46b" exitCode=0 Dec 09 09:18:52 crc kubenswrapper[4786]: I1209 09:18:52.253588 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tx29k" event={"ID":"e2ad540b-313c-4600-bf54-c14c9a6a2969","Type":"ContainerDied","Data":"86fee433762990fece7e6038d650fbecf8aec8c599ce7bdd660ac61ba3e1b46b"} Dec 09 09:18:53 crc kubenswrapper[4786]: I1209 09:18:53.711129 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tx29k" Dec 09 09:18:53 crc kubenswrapper[4786]: I1209 09:18:53.801607 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e2ad540b-313c-4600-bf54-c14c9a6a2969-ssh-key\") pod \"e2ad540b-313c-4600-bf54-c14c9a6a2969\" (UID: \"e2ad540b-313c-4600-bf54-c14c9a6a2969\") " Dec 09 09:18:53 crc kubenswrapper[4786]: I1209 09:18:53.801856 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5mbv\" (UniqueName: \"kubernetes.io/projected/e2ad540b-313c-4600-bf54-c14c9a6a2969-kube-api-access-p5mbv\") pod \"e2ad540b-313c-4600-bf54-c14c9a6a2969\" (UID: \"e2ad540b-313c-4600-bf54-c14c9a6a2969\") " Dec 09 09:18:53 crc kubenswrapper[4786]: I1209 09:18:53.801935 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e2ad540b-313c-4600-bf54-c14c9a6a2969-inventory\") pod \"e2ad540b-313c-4600-bf54-c14c9a6a2969\" (UID: \"e2ad540b-313c-4600-bf54-c14c9a6a2969\") " Dec 09 09:18:53 crc kubenswrapper[4786]: I1209 09:18:53.818669 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2ad540b-313c-4600-bf54-c14c9a6a2969-kube-api-access-p5mbv" (OuterVolumeSpecName: 
"kube-api-access-p5mbv") pod "e2ad540b-313c-4600-bf54-c14c9a6a2969" (UID: "e2ad540b-313c-4600-bf54-c14c9a6a2969"). InnerVolumeSpecName "kube-api-access-p5mbv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:18:53 crc kubenswrapper[4786]: I1209 09:18:53.852587 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2ad540b-313c-4600-bf54-c14c9a6a2969-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e2ad540b-313c-4600-bf54-c14c9a6a2969" (UID: "e2ad540b-313c-4600-bf54-c14c9a6a2969"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:18:53 crc kubenswrapper[4786]: I1209 09:18:53.884721 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2ad540b-313c-4600-bf54-c14c9a6a2969-inventory" (OuterVolumeSpecName: "inventory") pod "e2ad540b-313c-4600-bf54-c14c9a6a2969" (UID: "e2ad540b-313c-4600-bf54-c14c9a6a2969"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:18:53 crc kubenswrapper[4786]: I1209 09:18:53.912837 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5mbv\" (UniqueName: \"kubernetes.io/projected/e2ad540b-313c-4600-bf54-c14c9a6a2969-kube-api-access-p5mbv\") on node \"crc\" DevicePath \"\"" Dec 09 09:18:53 crc kubenswrapper[4786]: I1209 09:18:53.912877 4786 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e2ad540b-313c-4600-bf54-c14c9a6a2969-inventory\") on node \"crc\" DevicePath \"\"" Dec 09 09:18:53 crc kubenswrapper[4786]: I1209 09:18:53.912886 4786 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e2ad540b-313c-4600-bf54-c14c9a6a2969-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 09 09:18:54 crc kubenswrapper[4786]: I1209 09:18:54.281632 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tx29k" event={"ID":"e2ad540b-313c-4600-bf54-c14c9a6a2969","Type":"ContainerDied","Data":"2210b319b91fc4e1cf5d785ea295ad00520e147aed7b9d997e1baf7e2b416461"} Dec 09 09:18:54 crc kubenswrapper[4786]: I1209 09:18:54.284642 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2210b319b91fc4e1cf5d785ea295ad00520e147aed7b9d997e1baf7e2b416461" Dec 09 09:18:54 crc kubenswrapper[4786]: I1209 09:18:54.281990 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-tx29k" Dec 09 09:18:54 crc kubenswrapper[4786]: I1209 09:18:54.381810 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kg2wg"] Dec 09 09:18:54 crc kubenswrapper[4786]: E1209 09:18:54.382471 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2ad540b-313c-4600-bf54-c14c9a6a2969" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 09 09:18:54 crc kubenswrapper[4786]: I1209 09:18:54.382497 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2ad540b-313c-4600-bf54-c14c9a6a2969" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 09 09:18:54 crc kubenswrapper[4786]: E1209 09:18:54.382521 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb5b640e-4d60-4654-b97c-15df67676fb8" containerName="registry-server" Dec 09 09:18:54 crc kubenswrapper[4786]: I1209 09:18:54.382531 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb5b640e-4d60-4654-b97c-15df67676fb8" containerName="registry-server" Dec 09 09:18:54 crc kubenswrapper[4786]: E1209 09:18:54.382549 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb5b640e-4d60-4654-b97c-15df67676fb8" containerName="extract-content" Dec 09 09:18:54 crc kubenswrapper[4786]: I1209 09:18:54.382557 4786 
state_mem.go:107] "Deleted CPUSet assignment" podUID="cb5b640e-4d60-4654-b97c-15df67676fb8" containerName="extract-content" Dec 09 09:18:54 crc kubenswrapper[4786]: E1209 09:18:54.382593 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb5b640e-4d60-4654-b97c-15df67676fb8" containerName="extract-utilities" Dec 09 09:18:54 crc kubenswrapper[4786]: I1209 09:18:54.382604 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb5b640e-4d60-4654-b97c-15df67676fb8" containerName="extract-utilities" Dec 09 09:18:54 crc kubenswrapper[4786]: I1209 09:18:54.382885 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb5b640e-4d60-4654-b97c-15df67676fb8" containerName="registry-server" Dec 09 09:18:54 crc kubenswrapper[4786]: I1209 09:18:54.382916 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2ad540b-313c-4600-bf54-c14c9a6a2969" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 09 09:18:54 crc kubenswrapper[4786]: I1209 09:18:54.383895 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kg2wg" Dec 09 09:18:54 crc kubenswrapper[4786]: I1209 09:18:54.387894 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-247ss" Dec 09 09:18:54 crc kubenswrapper[4786]: I1209 09:18:54.388262 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 09 09:18:54 crc kubenswrapper[4786]: I1209 09:18:54.388948 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 09 09:18:54 crc kubenswrapper[4786]: I1209 09:18:54.390329 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 09 09:18:54 crc kubenswrapper[4786]: I1209 09:18:54.394007 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kg2wg"] Dec 09 09:18:54 crc kubenswrapper[4786]: I1209 09:18:54.529142 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/32d7a244-874b-45f3-844a-402e668af86d-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-kg2wg\" (UID: \"32d7a244-874b-45f3-844a-402e668af86d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kg2wg" Dec 09 09:18:54 crc kubenswrapper[4786]: I1209 09:18:54.529246 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/32d7a244-874b-45f3-844a-402e668af86d-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-kg2wg\" (UID: \"32d7a244-874b-45f3-844a-402e668af86d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kg2wg" Dec 09 09:18:54 crc kubenswrapper[4786]: I1209 09:18:54.529729 4786 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvqd5\" (UniqueName: \"kubernetes.io/projected/32d7a244-874b-45f3-844a-402e668af86d-kube-api-access-tvqd5\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-kg2wg\" (UID: \"32d7a244-874b-45f3-844a-402e668af86d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kg2wg" Dec 09 09:18:54 crc kubenswrapper[4786]: I1209 09:18:54.632562 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/32d7a244-874b-45f3-844a-402e668af86d-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-kg2wg\" (UID: \"32d7a244-874b-45f3-844a-402e668af86d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kg2wg" Dec 09 09:18:54 crc kubenswrapper[4786]: I1209 09:18:54.632711 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvqd5\" (UniqueName: \"kubernetes.io/projected/32d7a244-874b-45f3-844a-402e668af86d-kube-api-access-tvqd5\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-kg2wg\" (UID: \"32d7a244-874b-45f3-844a-402e668af86d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kg2wg" Dec 09 09:18:54 crc kubenswrapper[4786]: I1209 09:18:54.632857 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/32d7a244-874b-45f3-844a-402e668af86d-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-kg2wg\" (UID: \"32d7a244-874b-45f3-844a-402e668af86d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kg2wg" Dec 09 09:18:54 crc kubenswrapper[4786]: I1209 09:18:54.637348 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/32d7a244-874b-45f3-844a-402e668af86d-ssh-key\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-kg2wg\" (UID: \"32d7a244-874b-45f3-844a-402e668af86d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kg2wg" Dec 09 09:18:54 crc kubenswrapper[4786]: I1209 09:18:54.638240 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/32d7a244-874b-45f3-844a-402e668af86d-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-kg2wg\" (UID: \"32d7a244-874b-45f3-844a-402e668af86d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kg2wg" Dec 09 09:18:54 crc kubenswrapper[4786]: I1209 09:18:54.651650 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvqd5\" (UniqueName: \"kubernetes.io/projected/32d7a244-874b-45f3-844a-402e668af86d-kube-api-access-tvqd5\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-kg2wg\" (UID: \"32d7a244-874b-45f3-844a-402e668af86d\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kg2wg" Dec 09 09:18:54 crc kubenswrapper[4786]: I1209 09:18:54.710454 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kg2wg" Dec 09 09:18:55 crc kubenswrapper[4786]: I1209 09:18:55.256156 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kg2wg"] Dec 09 09:18:55 crc kubenswrapper[4786]: I1209 09:18:55.293315 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kg2wg" event={"ID":"32d7a244-874b-45f3-844a-402e668af86d","Type":"ContainerStarted","Data":"b34fa5b45c70bc1efd01848eafe8a439053f0359ebe168cfc8698a5bc5a71e76"} Dec 09 09:18:56 crc kubenswrapper[4786]: I1209 09:18:56.815835 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 09 09:18:57 crc kubenswrapper[4786]: I1209 09:18:57.323711 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kg2wg" event={"ID":"32d7a244-874b-45f3-844a-402e668af86d","Type":"ContainerStarted","Data":"c820d6c8ce5b9b2b8ccf56473074e852ceb702ed1868d0288290db7c6266bd10"} Dec 09 09:18:57 crc kubenswrapper[4786]: I1209 09:18:57.353507 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kg2wg" podStartSLOduration=1.801156496 podStartE2EDuration="3.353479971s" podCreationTimestamp="2025-12-09 09:18:54 +0000 UTC" firstStartedPulling="2025-12-09 09:18:55.258093434 +0000 UTC m=+2101.141714650" lastFinishedPulling="2025-12-09 09:18:56.810416899 +0000 UTC m=+2102.694038125" observedRunningTime="2025-12-09 09:18:57.342936598 +0000 UTC m=+2103.226557824" watchObservedRunningTime="2025-12-09 09:18:57.353479971 +0000 UTC m=+2103.237101207" Dec 09 09:19:02 crc kubenswrapper[4786]: I1209 09:19:02.376581 4786 generic.go:334] "Generic (PLEG): container finished" podID="32d7a244-874b-45f3-844a-402e668af86d" 
containerID="c820d6c8ce5b9b2b8ccf56473074e852ceb702ed1868d0288290db7c6266bd10" exitCode=0 Dec 09 09:19:02 crc kubenswrapper[4786]: I1209 09:19:02.376665 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kg2wg" event={"ID":"32d7a244-874b-45f3-844a-402e668af86d","Type":"ContainerDied","Data":"c820d6c8ce5b9b2b8ccf56473074e852ceb702ed1868d0288290db7c6266bd10"} Dec 09 09:19:03 crc kubenswrapper[4786]: I1209 09:19:03.516234 4786 scope.go:117] "RemoveContainer" containerID="77f883f7ee2b8202a0f5cb4617338a919461146cafed5b328914116baf68b604" Dec 09 09:19:03 crc kubenswrapper[4786]: I1209 09:19:03.565501 4786 scope.go:117] "RemoveContainer" containerID="af9f0903c3b496ebeae722c9cdd02088c81456be610a0156dec9a49210679cbe" Dec 09 09:19:03 crc kubenswrapper[4786]: I1209 09:19:03.679699 4786 scope.go:117] "RemoveContainer" containerID="2443ef687656a794f9c1575acbd80dea1ee787245cffce4652c8184096da4c87" Dec 09 09:19:03 crc kubenswrapper[4786]: I1209 09:19:03.722795 4786 scope.go:117] "RemoveContainer" containerID="d0e5af582f2527fa44cbc360beb6e1ed261a97e02adde2d5bb32a39990a0d1af" Dec 09 09:19:03 crc kubenswrapper[4786]: I1209 09:19:03.935602 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kg2wg" Dec 09 09:19:04 crc kubenswrapper[4786]: I1209 09:19:04.068118 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvqd5\" (UniqueName: \"kubernetes.io/projected/32d7a244-874b-45f3-844a-402e668af86d-kube-api-access-tvqd5\") pod \"32d7a244-874b-45f3-844a-402e668af86d\" (UID: \"32d7a244-874b-45f3-844a-402e668af86d\") " Dec 09 09:19:04 crc kubenswrapper[4786]: I1209 09:19:04.068242 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/32d7a244-874b-45f3-844a-402e668af86d-ssh-key\") pod \"32d7a244-874b-45f3-844a-402e668af86d\" (UID: \"32d7a244-874b-45f3-844a-402e668af86d\") " Dec 09 09:19:04 crc kubenswrapper[4786]: I1209 09:19:04.068310 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/32d7a244-874b-45f3-844a-402e668af86d-inventory\") pod \"32d7a244-874b-45f3-844a-402e668af86d\" (UID: \"32d7a244-874b-45f3-844a-402e668af86d\") " Dec 09 09:19:04 crc kubenswrapper[4786]: I1209 09:19:04.075413 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32d7a244-874b-45f3-844a-402e668af86d-kube-api-access-tvqd5" (OuterVolumeSpecName: "kube-api-access-tvqd5") pod "32d7a244-874b-45f3-844a-402e668af86d" (UID: "32d7a244-874b-45f3-844a-402e668af86d"). InnerVolumeSpecName "kube-api-access-tvqd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:19:04 crc kubenswrapper[4786]: I1209 09:19:04.096401 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32d7a244-874b-45f3-844a-402e668af86d-inventory" (OuterVolumeSpecName: "inventory") pod "32d7a244-874b-45f3-844a-402e668af86d" (UID: "32d7a244-874b-45f3-844a-402e668af86d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:19:04 crc kubenswrapper[4786]: I1209 09:19:04.103656 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32d7a244-874b-45f3-844a-402e668af86d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "32d7a244-874b-45f3-844a-402e668af86d" (UID: "32d7a244-874b-45f3-844a-402e668af86d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:19:04 crc kubenswrapper[4786]: I1209 09:19:04.171214 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvqd5\" (UniqueName: \"kubernetes.io/projected/32d7a244-874b-45f3-844a-402e668af86d-kube-api-access-tvqd5\") on node \"crc\" DevicePath \"\"" Dec 09 09:19:04 crc kubenswrapper[4786]: I1209 09:19:04.171256 4786 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/32d7a244-874b-45f3-844a-402e668af86d-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 09 09:19:04 crc kubenswrapper[4786]: I1209 09:19:04.171271 4786 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/32d7a244-874b-45f3-844a-402e668af86d-inventory\") on node \"crc\" DevicePath \"\"" Dec 09 09:19:04 crc kubenswrapper[4786]: I1209 09:19:04.407011 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kg2wg" event={"ID":"32d7a244-874b-45f3-844a-402e668af86d","Type":"ContainerDied","Data":"b34fa5b45c70bc1efd01848eafe8a439053f0359ebe168cfc8698a5bc5a71e76"} Dec 09 09:19:04 crc kubenswrapper[4786]: I1209 09:19:04.407063 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b34fa5b45c70bc1efd01848eafe8a439053f0359ebe168cfc8698a5bc5a71e76" Dec 09 09:19:04 crc kubenswrapper[4786]: I1209 09:19:04.407143 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-kg2wg" Dec 09 09:19:04 crc kubenswrapper[4786]: I1209 09:19:04.493596 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-bnlh7"] Dec 09 09:19:04 crc kubenswrapper[4786]: E1209 09:19:04.494551 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32d7a244-874b-45f3-844a-402e668af86d" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 09 09:19:04 crc kubenswrapper[4786]: I1209 09:19:04.494583 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="32d7a244-874b-45f3-844a-402e668af86d" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 09 09:19:04 crc kubenswrapper[4786]: I1209 09:19:04.495076 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="32d7a244-874b-45f3-844a-402e668af86d" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 09 09:19:04 crc kubenswrapper[4786]: I1209 09:19:04.496304 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bnlh7" Dec 09 09:19:04 crc kubenswrapper[4786]: I1209 09:19:04.501629 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 09 09:19:04 crc kubenswrapper[4786]: I1209 09:19:04.501845 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 09 09:19:04 crc kubenswrapper[4786]: I1209 09:19:04.502445 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-247ss" Dec 09 09:19:04 crc kubenswrapper[4786]: I1209 09:19:04.503083 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 09 09:19:04 crc kubenswrapper[4786]: I1209 09:19:04.508667 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-bnlh7"] Dec 09 09:19:04 crc kubenswrapper[4786]: I1209 09:19:04.685758 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6bps\" (UniqueName: \"kubernetes.io/projected/d67c7c4e-faa8-427e-953b-829c4e277994-kube-api-access-j6bps\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-bnlh7\" (UID: \"d67c7c4e-faa8-427e-953b-829c4e277994\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bnlh7" Dec 09 09:19:04 crc kubenswrapper[4786]: I1209 09:19:04.686116 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d67c7c4e-faa8-427e-953b-829c4e277994-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-bnlh7\" (UID: \"d67c7c4e-faa8-427e-953b-829c4e277994\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bnlh7" Dec 09 09:19:04 crc kubenswrapper[4786]: I1209 09:19:04.686156 4786 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d67c7c4e-faa8-427e-953b-829c4e277994-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-bnlh7\" (UID: \"d67c7c4e-faa8-427e-953b-829c4e277994\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bnlh7" Dec 09 09:19:04 crc kubenswrapper[4786]: I1209 09:19:04.788601 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6bps\" (UniqueName: \"kubernetes.io/projected/d67c7c4e-faa8-427e-953b-829c4e277994-kube-api-access-j6bps\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-bnlh7\" (UID: \"d67c7c4e-faa8-427e-953b-829c4e277994\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bnlh7" Dec 09 09:19:04 crc kubenswrapper[4786]: I1209 09:19:04.788710 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d67c7c4e-faa8-427e-953b-829c4e277994-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-bnlh7\" (UID: \"d67c7c4e-faa8-427e-953b-829c4e277994\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bnlh7" Dec 09 09:19:04 crc kubenswrapper[4786]: I1209 09:19:04.788771 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d67c7c4e-faa8-427e-953b-829c4e277994-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-bnlh7\" (UID: \"d67c7c4e-faa8-427e-953b-829c4e277994\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bnlh7" Dec 09 09:19:04 crc kubenswrapper[4786]: I1209 09:19:04.794469 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d67c7c4e-faa8-427e-953b-829c4e277994-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-bnlh7\" (UID: 
\"d67c7c4e-faa8-427e-953b-829c4e277994\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bnlh7" Dec 09 09:19:04 crc kubenswrapper[4786]: I1209 09:19:04.794532 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d67c7c4e-faa8-427e-953b-829c4e277994-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-bnlh7\" (UID: \"d67c7c4e-faa8-427e-953b-829c4e277994\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bnlh7" Dec 09 09:19:04 crc kubenswrapper[4786]: I1209 09:19:04.812948 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6bps\" (UniqueName: \"kubernetes.io/projected/d67c7c4e-faa8-427e-953b-829c4e277994-kube-api-access-j6bps\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-bnlh7\" (UID: \"d67c7c4e-faa8-427e-953b-829c4e277994\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bnlh7" Dec 09 09:19:04 crc kubenswrapper[4786]: I1209 09:19:04.841648 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bnlh7" Dec 09 09:19:05 crc kubenswrapper[4786]: I1209 09:19:05.460402 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-bnlh7"] Dec 09 09:19:06 crc kubenswrapper[4786]: I1209 09:19:06.429693 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bnlh7" event={"ID":"d67c7c4e-faa8-427e-953b-829c4e277994","Type":"ContainerStarted","Data":"383e77d8c9d33f190bbca8ed09bd707c1ac175601101096ad05110ae04bb4460"} Dec 09 09:19:06 crc kubenswrapper[4786]: I1209 09:19:06.430084 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bnlh7" event={"ID":"d67c7c4e-faa8-427e-953b-829c4e277994","Type":"ContainerStarted","Data":"fd5314c193599be91b02d6c5693a6d1d2a9be52fbbfb642dd22b853f492d3674"} Dec 09 09:19:06 crc kubenswrapper[4786]: I1209 09:19:06.451905 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bnlh7" podStartSLOduration=2.03884696 podStartE2EDuration="2.451885057s" podCreationTimestamp="2025-12-09 09:19:04 +0000 UTC" firstStartedPulling="2025-12-09 09:19:05.46134786 +0000 UTC m=+2111.344969086" lastFinishedPulling="2025-12-09 09:19:05.874385947 +0000 UTC m=+2111.758007183" observedRunningTime="2025-12-09 09:19:06.449293353 +0000 UTC m=+2112.332914569" watchObservedRunningTime="2025-12-09 09:19:06.451885057 +0000 UTC m=+2112.335506283" Dec 09 09:19:06 crc kubenswrapper[4786]: I1209 09:19:06.996656 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-626m4"] Dec 09 09:19:06 crc kubenswrapper[4786]: I1209 09:19:06.998866 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-626m4" Dec 09 09:19:07 crc kubenswrapper[4786]: I1209 09:19:07.020096 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-626m4"] Dec 09 09:19:07 crc kubenswrapper[4786]: I1209 09:19:07.150580 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a321328-f189-468a-b704-8bdeaedf1535-utilities\") pod \"redhat-marketplace-626m4\" (UID: \"2a321328-f189-468a-b704-8bdeaedf1535\") " pod="openshift-marketplace/redhat-marketplace-626m4" Dec 09 09:19:07 crc kubenswrapper[4786]: I1209 09:19:07.150652 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtl2v\" (UniqueName: \"kubernetes.io/projected/2a321328-f189-468a-b704-8bdeaedf1535-kube-api-access-vtl2v\") pod \"redhat-marketplace-626m4\" (UID: \"2a321328-f189-468a-b704-8bdeaedf1535\") " pod="openshift-marketplace/redhat-marketplace-626m4" Dec 09 09:19:07 crc kubenswrapper[4786]: I1209 09:19:07.150776 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a321328-f189-468a-b704-8bdeaedf1535-catalog-content\") pod \"redhat-marketplace-626m4\" (UID: \"2a321328-f189-468a-b704-8bdeaedf1535\") " pod="openshift-marketplace/redhat-marketplace-626m4" Dec 09 09:19:07 crc kubenswrapper[4786]: I1209 09:19:07.253937 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a321328-f189-468a-b704-8bdeaedf1535-utilities\") pod \"redhat-marketplace-626m4\" (UID: \"2a321328-f189-468a-b704-8bdeaedf1535\") " pod="openshift-marketplace/redhat-marketplace-626m4" Dec 09 09:19:07 crc kubenswrapper[4786]: I1209 09:19:07.254550 4786 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-vtl2v\" (UniqueName: \"kubernetes.io/projected/2a321328-f189-468a-b704-8bdeaedf1535-kube-api-access-vtl2v\") pod \"redhat-marketplace-626m4\" (UID: \"2a321328-f189-468a-b704-8bdeaedf1535\") " pod="openshift-marketplace/redhat-marketplace-626m4" Dec 09 09:19:07 crc kubenswrapper[4786]: I1209 09:19:07.254558 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a321328-f189-468a-b704-8bdeaedf1535-utilities\") pod \"redhat-marketplace-626m4\" (UID: \"2a321328-f189-468a-b704-8bdeaedf1535\") " pod="openshift-marketplace/redhat-marketplace-626m4" Dec 09 09:19:07 crc kubenswrapper[4786]: I1209 09:19:07.254595 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a321328-f189-468a-b704-8bdeaedf1535-catalog-content\") pod \"redhat-marketplace-626m4\" (UID: \"2a321328-f189-468a-b704-8bdeaedf1535\") " pod="openshift-marketplace/redhat-marketplace-626m4" Dec 09 09:19:07 crc kubenswrapper[4786]: I1209 09:19:07.255144 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a321328-f189-468a-b704-8bdeaedf1535-catalog-content\") pod \"redhat-marketplace-626m4\" (UID: \"2a321328-f189-468a-b704-8bdeaedf1535\") " pod="openshift-marketplace/redhat-marketplace-626m4" Dec 09 09:19:07 crc kubenswrapper[4786]: I1209 09:19:07.310928 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtl2v\" (UniqueName: \"kubernetes.io/projected/2a321328-f189-468a-b704-8bdeaedf1535-kube-api-access-vtl2v\") pod \"redhat-marketplace-626m4\" (UID: \"2a321328-f189-468a-b704-8bdeaedf1535\") " pod="openshift-marketplace/redhat-marketplace-626m4" Dec 09 09:19:07 crc kubenswrapper[4786]: I1209 09:19:07.322057 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-626m4" Dec 09 09:19:07 crc kubenswrapper[4786]: I1209 09:19:07.864982 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-626m4"] Dec 09 09:19:08 crc kubenswrapper[4786]: I1209 09:19:08.472819 4786 generic.go:334] "Generic (PLEG): container finished" podID="2a321328-f189-468a-b704-8bdeaedf1535" containerID="8e4137265a418de7dcc09ebb0e93923d933c421eab87677a71506f63ad3e6609" exitCode=0 Dec 09 09:19:08 crc kubenswrapper[4786]: I1209 09:19:08.472924 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-626m4" event={"ID":"2a321328-f189-468a-b704-8bdeaedf1535","Type":"ContainerDied","Data":"8e4137265a418de7dcc09ebb0e93923d933c421eab87677a71506f63ad3e6609"} Dec 09 09:19:08 crc kubenswrapper[4786]: I1209 09:19:08.473204 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-626m4" event={"ID":"2a321328-f189-468a-b704-8bdeaedf1535","Type":"ContainerStarted","Data":"3e3a47075378f0c6890cf09d47e0dd8bebbf904e1ba88731e4341fcbbe866c08"} Dec 09 09:19:09 crc kubenswrapper[4786]: I1209 09:19:09.484411 4786 generic.go:334] "Generic (PLEG): container finished" podID="2a321328-f189-468a-b704-8bdeaedf1535" containerID="3ceb19712a62faa0cb5962e977a43a9290b4243667c5a1aa8d3d0379ebbc9bc1" exitCode=0 Dec 09 09:19:09 crc kubenswrapper[4786]: I1209 09:19:09.484562 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-626m4" event={"ID":"2a321328-f189-468a-b704-8bdeaedf1535","Type":"ContainerDied","Data":"3ceb19712a62faa0cb5962e977a43a9290b4243667c5a1aa8d3d0379ebbc9bc1"} Dec 09 09:19:10 crc kubenswrapper[4786]: I1209 09:19:10.499009 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-626m4" 
event={"ID":"2a321328-f189-468a-b704-8bdeaedf1535","Type":"ContainerStarted","Data":"3f0f22b2f4b70b70b7e1facaa8ef4b9bcbe775df0f77ff21cd3c387e4873cd4f"} Dec 09 09:19:10 crc kubenswrapper[4786]: I1209 09:19:10.534013 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-626m4" podStartSLOduration=3.15356827 podStartE2EDuration="4.533988868s" podCreationTimestamp="2025-12-09 09:19:06 +0000 UTC" firstStartedPulling="2025-12-09 09:19:08.474990996 +0000 UTC m=+2114.358612212" lastFinishedPulling="2025-12-09 09:19:09.855411584 +0000 UTC m=+2115.739032810" observedRunningTime="2025-12-09 09:19:10.526754338 +0000 UTC m=+2116.410375614" watchObservedRunningTime="2025-12-09 09:19:10.533988868 +0000 UTC m=+2116.417610114" Dec 09 09:19:11 crc kubenswrapper[4786]: I1209 09:19:11.043771 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-jbbsf"] Dec 09 09:19:11 crc kubenswrapper[4786]: I1209 09:19:11.056443 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-jbbsf"] Dec 09 09:19:11 crc kubenswrapper[4786]: I1209 09:19:11.205277 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb69b433-50fe-4d72-8cba-96a73e6cc10d" path="/var/lib/kubelet/pods/eb69b433-50fe-4d72-8cba-96a73e6cc10d/volumes" Dec 09 09:19:17 crc kubenswrapper[4786]: I1209 09:19:17.323499 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-626m4" Dec 09 09:19:17 crc kubenswrapper[4786]: I1209 09:19:17.325599 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-626m4" Dec 09 09:19:17 crc kubenswrapper[4786]: I1209 09:19:17.392347 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-626m4" Dec 09 09:19:17 crc kubenswrapper[4786]: I1209 09:19:17.643508 4786 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-626m4" Dec 09 09:19:17 crc kubenswrapper[4786]: I1209 09:19:17.703326 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-626m4"] Dec 09 09:19:19 crc kubenswrapper[4786]: I1209 09:19:19.618829 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-626m4" podUID="2a321328-f189-468a-b704-8bdeaedf1535" containerName="registry-server" containerID="cri-o://3f0f22b2f4b70b70b7e1facaa8ef4b9bcbe775df0f77ff21cd3c387e4873cd4f" gracePeriod=2 Dec 09 09:19:20 crc kubenswrapper[4786]: I1209 09:19:20.630579 4786 generic.go:334] "Generic (PLEG): container finished" podID="2a321328-f189-468a-b704-8bdeaedf1535" containerID="3f0f22b2f4b70b70b7e1facaa8ef4b9bcbe775df0f77ff21cd3c387e4873cd4f" exitCode=0 Dec 09 09:19:20 crc kubenswrapper[4786]: I1209 09:19:20.631007 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-626m4" event={"ID":"2a321328-f189-468a-b704-8bdeaedf1535","Type":"ContainerDied","Data":"3f0f22b2f4b70b70b7e1facaa8ef4b9bcbe775df0f77ff21cd3c387e4873cd4f"} Dec 09 09:19:20 crc kubenswrapper[4786]: I1209 09:19:20.632526 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-626m4" event={"ID":"2a321328-f189-468a-b704-8bdeaedf1535","Type":"ContainerDied","Data":"3e3a47075378f0c6890cf09d47e0dd8bebbf904e1ba88731e4341fcbbe866c08"} Dec 09 09:19:20 crc kubenswrapper[4786]: I1209 09:19:20.632593 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e3a47075378f0c6890cf09d47e0dd8bebbf904e1ba88731e4341fcbbe866c08" Dec 09 09:19:20 crc kubenswrapper[4786]: I1209 09:19:20.655659 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-626m4" Dec 09 09:19:20 crc kubenswrapper[4786]: I1209 09:19:20.807943 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtl2v\" (UniqueName: \"kubernetes.io/projected/2a321328-f189-468a-b704-8bdeaedf1535-kube-api-access-vtl2v\") pod \"2a321328-f189-468a-b704-8bdeaedf1535\" (UID: \"2a321328-f189-468a-b704-8bdeaedf1535\") " Dec 09 09:19:20 crc kubenswrapper[4786]: I1209 09:19:20.808056 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a321328-f189-468a-b704-8bdeaedf1535-catalog-content\") pod \"2a321328-f189-468a-b704-8bdeaedf1535\" (UID: \"2a321328-f189-468a-b704-8bdeaedf1535\") " Dec 09 09:19:20 crc kubenswrapper[4786]: I1209 09:19:20.808196 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a321328-f189-468a-b704-8bdeaedf1535-utilities\") pod \"2a321328-f189-468a-b704-8bdeaedf1535\" (UID: \"2a321328-f189-468a-b704-8bdeaedf1535\") " Dec 09 09:19:20 crc kubenswrapper[4786]: I1209 09:19:20.808927 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a321328-f189-468a-b704-8bdeaedf1535-utilities" (OuterVolumeSpecName: "utilities") pod "2a321328-f189-468a-b704-8bdeaedf1535" (UID: "2a321328-f189-468a-b704-8bdeaedf1535"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:19:20 crc kubenswrapper[4786]: I1209 09:19:20.813914 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a321328-f189-468a-b704-8bdeaedf1535-kube-api-access-vtl2v" (OuterVolumeSpecName: "kube-api-access-vtl2v") pod "2a321328-f189-468a-b704-8bdeaedf1535" (UID: "2a321328-f189-468a-b704-8bdeaedf1535"). InnerVolumeSpecName "kube-api-access-vtl2v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:19:20 crc kubenswrapper[4786]: I1209 09:19:20.826678 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2a321328-f189-468a-b704-8bdeaedf1535-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2a321328-f189-468a-b704-8bdeaedf1535" (UID: "2a321328-f189-468a-b704-8bdeaedf1535"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:19:20 crc kubenswrapper[4786]: I1209 09:19:20.911099 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtl2v\" (UniqueName: \"kubernetes.io/projected/2a321328-f189-468a-b704-8bdeaedf1535-kube-api-access-vtl2v\") on node \"crc\" DevicePath \"\"" Dec 09 09:19:20 crc kubenswrapper[4786]: I1209 09:19:20.911147 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2a321328-f189-468a-b704-8bdeaedf1535-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 09:19:20 crc kubenswrapper[4786]: I1209 09:19:20.911158 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2a321328-f189-468a-b704-8bdeaedf1535-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 09:19:21 crc kubenswrapper[4786]: I1209 09:19:21.638918 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-626m4" Dec 09 09:19:21 crc kubenswrapper[4786]: I1209 09:19:21.664340 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-626m4"] Dec 09 09:19:21 crc kubenswrapper[4786]: I1209 09:19:21.674872 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-626m4"] Dec 09 09:19:23 crc kubenswrapper[4786]: I1209 09:19:23.206583 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a321328-f189-468a-b704-8bdeaedf1535" path="/var/lib/kubelet/pods/2a321328-f189-468a-b704-8bdeaedf1535/volumes" Dec 09 09:19:24 crc kubenswrapper[4786]: I1209 09:19:24.988639 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 09:19:24 crc kubenswrapper[4786]: I1209 09:19:24.988930 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 09:19:26 crc kubenswrapper[4786]: I1209 09:19:26.033391 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-glsgk"] Dec 09 09:19:26 crc kubenswrapper[4786]: I1209 09:19:26.045266 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-glsgk"] Dec 09 09:19:27 crc kubenswrapper[4786]: I1209 09:19:27.203741 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b2cc8bb-e766-4324-afcb-8ad0655bd96f" 
path="/var/lib/kubelet/pods/8b2cc8bb-e766-4324-afcb-8ad0655bd96f/volumes" Dec 09 09:19:29 crc kubenswrapper[4786]: I1209 09:19:29.092182 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-q5rvn"] Dec 09 09:19:29 crc kubenswrapper[4786]: E1209 09:19:29.093137 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a321328-f189-468a-b704-8bdeaedf1535" containerName="extract-content" Dec 09 09:19:29 crc kubenswrapper[4786]: I1209 09:19:29.093155 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a321328-f189-468a-b704-8bdeaedf1535" containerName="extract-content" Dec 09 09:19:29 crc kubenswrapper[4786]: E1209 09:19:29.093174 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a321328-f189-468a-b704-8bdeaedf1535" containerName="registry-server" Dec 09 09:19:29 crc kubenswrapper[4786]: I1209 09:19:29.093180 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a321328-f189-468a-b704-8bdeaedf1535" containerName="registry-server" Dec 09 09:19:29 crc kubenswrapper[4786]: E1209 09:19:29.093200 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a321328-f189-468a-b704-8bdeaedf1535" containerName="extract-utilities" Dec 09 09:19:29 crc kubenswrapper[4786]: I1209 09:19:29.093207 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a321328-f189-468a-b704-8bdeaedf1535" containerName="extract-utilities" Dec 09 09:19:29 crc kubenswrapper[4786]: I1209 09:19:29.093439 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a321328-f189-468a-b704-8bdeaedf1535" containerName="registry-server" Dec 09 09:19:29 crc kubenswrapper[4786]: I1209 09:19:29.095202 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q5rvn" Dec 09 09:19:29 crc kubenswrapper[4786]: I1209 09:19:29.126459 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q5rvn"] Dec 09 09:19:29 crc kubenswrapper[4786]: I1209 09:19:29.281671 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7355067a-3964-424b-8a32-7c12a747e12b-utilities\") pod \"community-operators-q5rvn\" (UID: \"7355067a-3964-424b-8a32-7c12a747e12b\") " pod="openshift-marketplace/community-operators-q5rvn" Dec 09 09:19:29 crc kubenswrapper[4786]: I1209 09:19:29.282111 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7ljs\" (UniqueName: \"kubernetes.io/projected/7355067a-3964-424b-8a32-7c12a747e12b-kube-api-access-l7ljs\") pod \"community-operators-q5rvn\" (UID: \"7355067a-3964-424b-8a32-7c12a747e12b\") " pod="openshift-marketplace/community-operators-q5rvn" Dec 09 09:19:29 crc kubenswrapper[4786]: I1209 09:19:29.282153 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7355067a-3964-424b-8a32-7c12a747e12b-catalog-content\") pod \"community-operators-q5rvn\" (UID: \"7355067a-3964-424b-8a32-7c12a747e12b\") " pod="openshift-marketplace/community-operators-q5rvn" Dec 09 09:19:29 crc kubenswrapper[4786]: I1209 09:19:29.384083 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7355067a-3964-424b-8a32-7c12a747e12b-catalog-content\") pod \"community-operators-q5rvn\" (UID: \"7355067a-3964-424b-8a32-7c12a747e12b\") " pod="openshift-marketplace/community-operators-q5rvn" Dec 09 09:19:29 crc kubenswrapper[4786]: I1209 09:19:29.384447 4786 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7355067a-3964-424b-8a32-7c12a747e12b-utilities\") pod \"community-operators-q5rvn\" (UID: \"7355067a-3964-424b-8a32-7c12a747e12b\") " pod="openshift-marketplace/community-operators-q5rvn" Dec 09 09:19:29 crc kubenswrapper[4786]: I1209 09:19:29.384560 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7ljs\" (UniqueName: \"kubernetes.io/projected/7355067a-3964-424b-8a32-7c12a747e12b-kube-api-access-l7ljs\") pod \"community-operators-q5rvn\" (UID: \"7355067a-3964-424b-8a32-7c12a747e12b\") " pod="openshift-marketplace/community-operators-q5rvn" Dec 09 09:19:29 crc kubenswrapper[4786]: I1209 09:19:29.385837 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7355067a-3964-424b-8a32-7c12a747e12b-utilities\") pod \"community-operators-q5rvn\" (UID: \"7355067a-3964-424b-8a32-7c12a747e12b\") " pod="openshift-marketplace/community-operators-q5rvn" Dec 09 09:19:29 crc kubenswrapper[4786]: I1209 09:19:29.386660 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7355067a-3964-424b-8a32-7c12a747e12b-catalog-content\") pod \"community-operators-q5rvn\" (UID: \"7355067a-3964-424b-8a32-7c12a747e12b\") " pod="openshift-marketplace/community-operators-q5rvn" Dec 09 09:19:29 crc kubenswrapper[4786]: I1209 09:19:29.415655 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7ljs\" (UniqueName: \"kubernetes.io/projected/7355067a-3964-424b-8a32-7c12a747e12b-kube-api-access-l7ljs\") pod \"community-operators-q5rvn\" (UID: \"7355067a-3964-424b-8a32-7c12a747e12b\") " pod="openshift-marketplace/community-operators-q5rvn" Dec 09 09:19:29 crc kubenswrapper[4786]: I1209 09:19:29.425021 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q5rvn" Dec 09 09:19:30 crc kubenswrapper[4786]: I1209 09:19:30.056670 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-q5rvn"] Dec 09 09:19:30 crc kubenswrapper[4786]: I1209 09:19:30.739852 4786 generic.go:334] "Generic (PLEG): container finished" podID="7355067a-3964-424b-8a32-7c12a747e12b" containerID="645db5797edfc4875d9391897124cd8f123215d75e14b7b042c26771bb9fc113" exitCode=0 Dec 09 09:19:30 crc kubenswrapper[4786]: I1209 09:19:30.739938 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q5rvn" event={"ID":"7355067a-3964-424b-8a32-7c12a747e12b","Type":"ContainerDied","Data":"645db5797edfc4875d9391897124cd8f123215d75e14b7b042c26771bb9fc113"} Dec 09 09:19:30 crc kubenswrapper[4786]: I1209 09:19:30.740212 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q5rvn" event={"ID":"7355067a-3964-424b-8a32-7c12a747e12b","Type":"ContainerStarted","Data":"fef47292cbbb5c4f6f58e19cd09ce0e8a41bb9ed0c4f5bce4e928c7f20fa5f49"} Dec 09 09:19:31 crc kubenswrapper[4786]: I1209 09:19:31.752655 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q5rvn" event={"ID":"7355067a-3964-424b-8a32-7c12a747e12b","Type":"ContainerStarted","Data":"9a1cc0974e24696a95bdf9dbdb6513a5da2b66b00c2dc91383bbce928402fb59"} Dec 09 09:19:32 crc kubenswrapper[4786]: I1209 09:19:32.765795 4786 generic.go:334] "Generic (PLEG): container finished" podID="7355067a-3964-424b-8a32-7c12a747e12b" containerID="9a1cc0974e24696a95bdf9dbdb6513a5da2b66b00c2dc91383bbce928402fb59" exitCode=0 Dec 09 09:19:32 crc kubenswrapper[4786]: I1209 09:19:32.765855 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q5rvn" 
event={"ID":"7355067a-3964-424b-8a32-7c12a747e12b","Type":"ContainerDied","Data":"9a1cc0974e24696a95bdf9dbdb6513a5da2b66b00c2dc91383bbce928402fb59"} Dec 09 09:19:34 crc kubenswrapper[4786]: I1209 09:19:34.806754 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q5rvn" event={"ID":"7355067a-3964-424b-8a32-7c12a747e12b","Type":"ContainerStarted","Data":"8721a7a62116d508242342ee06e01b93e7929d743af094a312dc6f49b0ff4cd4"} Dec 09 09:19:34 crc kubenswrapper[4786]: I1209 09:19:34.830932 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-q5rvn" podStartSLOduration=2.753116593 podStartE2EDuration="5.830909965s" podCreationTimestamp="2025-12-09 09:19:29 +0000 UTC" firstStartedPulling="2025-12-09 09:19:30.743221845 +0000 UTC m=+2136.626843071" lastFinishedPulling="2025-12-09 09:19:33.821015217 +0000 UTC m=+2139.704636443" observedRunningTime="2025-12-09 09:19:34.827179922 +0000 UTC m=+2140.710801158" watchObservedRunningTime="2025-12-09 09:19:34.830909965 +0000 UTC m=+2140.714531191" Dec 09 09:19:39 crc kubenswrapper[4786]: I1209 09:19:39.425653 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-q5rvn" Dec 09 09:19:39 crc kubenswrapper[4786]: I1209 09:19:39.426359 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-q5rvn" Dec 09 09:19:39 crc kubenswrapper[4786]: I1209 09:19:39.509835 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-q5rvn" Dec 09 09:19:39 crc kubenswrapper[4786]: I1209 09:19:39.900458 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-q5rvn" Dec 09 09:19:39 crc kubenswrapper[4786]: I1209 09:19:39.965071 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-q5rvn"] Dec 09 09:19:41 crc kubenswrapper[4786]: I1209 09:19:41.997182 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-q5rvn" podUID="7355067a-3964-424b-8a32-7c12a747e12b" containerName="registry-server" containerID="cri-o://8721a7a62116d508242342ee06e01b93e7929d743af094a312dc6f49b0ff4cd4" gracePeriod=2 Dec 09 09:19:43 crc kubenswrapper[4786]: I1209 09:19:43.011876 4786 generic.go:334] "Generic (PLEG): container finished" podID="7355067a-3964-424b-8a32-7c12a747e12b" containerID="8721a7a62116d508242342ee06e01b93e7929d743af094a312dc6f49b0ff4cd4" exitCode=0 Dec 09 09:19:43 crc kubenswrapper[4786]: I1209 09:19:43.012064 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q5rvn" event={"ID":"7355067a-3964-424b-8a32-7c12a747e12b","Type":"ContainerDied","Data":"8721a7a62116d508242342ee06e01b93e7929d743af094a312dc6f49b0ff4cd4"} Dec 09 09:19:43 crc kubenswrapper[4786]: I1209 09:19:43.012251 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-q5rvn" event={"ID":"7355067a-3964-424b-8a32-7c12a747e12b","Type":"ContainerDied","Data":"fef47292cbbb5c4f6f58e19cd09ce0e8a41bb9ed0c4f5bce4e928c7f20fa5f49"} Dec 09 09:19:43 crc kubenswrapper[4786]: I1209 09:19:43.012268 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fef47292cbbb5c4f6f58e19cd09ce0e8a41bb9ed0c4f5bce4e928c7f20fa5f49" Dec 09 09:19:43 crc kubenswrapper[4786]: I1209 09:19:43.054121 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q5rvn" Dec 09 09:19:43 crc kubenswrapper[4786]: I1209 09:19:43.126201 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7355067a-3964-424b-8a32-7c12a747e12b-catalog-content\") pod \"7355067a-3964-424b-8a32-7c12a747e12b\" (UID: \"7355067a-3964-424b-8a32-7c12a747e12b\") " Dec 09 09:19:43 crc kubenswrapper[4786]: I1209 09:19:43.126312 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7ljs\" (UniqueName: \"kubernetes.io/projected/7355067a-3964-424b-8a32-7c12a747e12b-kube-api-access-l7ljs\") pod \"7355067a-3964-424b-8a32-7c12a747e12b\" (UID: \"7355067a-3964-424b-8a32-7c12a747e12b\") " Dec 09 09:19:43 crc kubenswrapper[4786]: I1209 09:19:43.126463 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7355067a-3964-424b-8a32-7c12a747e12b-utilities\") pod \"7355067a-3964-424b-8a32-7c12a747e12b\" (UID: \"7355067a-3964-424b-8a32-7c12a747e12b\") " Dec 09 09:19:43 crc kubenswrapper[4786]: I1209 09:19:43.127812 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7355067a-3964-424b-8a32-7c12a747e12b-utilities" (OuterVolumeSpecName: "utilities") pod "7355067a-3964-424b-8a32-7c12a747e12b" (UID: "7355067a-3964-424b-8a32-7c12a747e12b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:19:43 crc kubenswrapper[4786]: I1209 09:19:43.135641 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7355067a-3964-424b-8a32-7c12a747e12b-kube-api-access-l7ljs" (OuterVolumeSpecName: "kube-api-access-l7ljs") pod "7355067a-3964-424b-8a32-7c12a747e12b" (UID: "7355067a-3964-424b-8a32-7c12a747e12b"). InnerVolumeSpecName "kube-api-access-l7ljs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:19:43 crc kubenswrapper[4786]: I1209 09:19:43.178438 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7355067a-3964-424b-8a32-7c12a747e12b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7355067a-3964-424b-8a32-7c12a747e12b" (UID: "7355067a-3964-424b-8a32-7c12a747e12b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:19:43 crc kubenswrapper[4786]: I1209 09:19:43.229671 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7355067a-3964-424b-8a32-7c12a747e12b-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 09:19:43 crc kubenswrapper[4786]: I1209 09:19:43.229717 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7355067a-3964-424b-8a32-7c12a747e12b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 09:19:43 crc kubenswrapper[4786]: I1209 09:19:43.229736 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7ljs\" (UniqueName: \"kubernetes.io/projected/7355067a-3964-424b-8a32-7c12a747e12b-kube-api-access-l7ljs\") on node \"crc\" DevicePath \"\"" Dec 09 09:19:44 crc kubenswrapper[4786]: I1209 09:19:44.029030 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-q5rvn" Dec 09 09:19:44 crc kubenswrapper[4786]: I1209 09:19:44.090518 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-q5rvn"] Dec 09 09:19:44 crc kubenswrapper[4786]: I1209 09:19:44.101655 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-q5rvn"] Dec 09 09:19:45 crc kubenswrapper[4786]: I1209 09:19:45.202716 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7355067a-3964-424b-8a32-7c12a747e12b" path="/var/lib/kubelet/pods/7355067a-3964-424b-8a32-7c12a747e12b/volumes" Dec 09 09:19:47 crc kubenswrapper[4786]: I1209 09:19:47.063635 4786 generic.go:334] "Generic (PLEG): container finished" podID="d67c7c4e-faa8-427e-953b-829c4e277994" containerID="383e77d8c9d33f190bbca8ed09bd707c1ac175601101096ad05110ae04bb4460" exitCode=0 Dec 09 09:19:47 crc kubenswrapper[4786]: I1209 09:19:47.063741 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bnlh7" event={"ID":"d67c7c4e-faa8-427e-953b-829c4e277994","Type":"ContainerDied","Data":"383e77d8c9d33f190bbca8ed09bd707c1ac175601101096ad05110ae04bb4460"} Dec 09 09:19:48 crc kubenswrapper[4786]: I1209 09:19:48.562508 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bnlh7" Dec 09 09:19:48 crc kubenswrapper[4786]: I1209 09:19:48.736560 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d67c7c4e-faa8-427e-953b-829c4e277994-inventory\") pod \"d67c7c4e-faa8-427e-953b-829c4e277994\" (UID: \"d67c7c4e-faa8-427e-953b-829c4e277994\") " Dec 09 09:19:48 crc kubenswrapper[4786]: I1209 09:19:48.736624 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d67c7c4e-faa8-427e-953b-829c4e277994-ssh-key\") pod \"d67c7c4e-faa8-427e-953b-829c4e277994\" (UID: \"d67c7c4e-faa8-427e-953b-829c4e277994\") " Dec 09 09:19:48 crc kubenswrapper[4786]: I1209 09:19:48.736653 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6bps\" (UniqueName: \"kubernetes.io/projected/d67c7c4e-faa8-427e-953b-829c4e277994-kube-api-access-j6bps\") pod \"d67c7c4e-faa8-427e-953b-829c4e277994\" (UID: \"d67c7c4e-faa8-427e-953b-829c4e277994\") " Dec 09 09:19:48 crc kubenswrapper[4786]: I1209 09:19:48.745744 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d67c7c4e-faa8-427e-953b-829c4e277994-kube-api-access-j6bps" (OuterVolumeSpecName: "kube-api-access-j6bps") pod "d67c7c4e-faa8-427e-953b-829c4e277994" (UID: "d67c7c4e-faa8-427e-953b-829c4e277994"). InnerVolumeSpecName "kube-api-access-j6bps". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:19:48 crc kubenswrapper[4786]: I1209 09:19:48.769676 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d67c7c4e-faa8-427e-953b-829c4e277994-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d67c7c4e-faa8-427e-953b-829c4e277994" (UID: "d67c7c4e-faa8-427e-953b-829c4e277994"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:19:48 crc kubenswrapper[4786]: I1209 09:19:48.772619 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d67c7c4e-faa8-427e-953b-829c4e277994-inventory" (OuterVolumeSpecName: "inventory") pod "d67c7c4e-faa8-427e-953b-829c4e277994" (UID: "d67c7c4e-faa8-427e-953b-829c4e277994"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:19:48 crc kubenswrapper[4786]: I1209 09:19:48.839286 4786 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d67c7c4e-faa8-427e-953b-829c4e277994-inventory\") on node \"crc\" DevicePath \"\"" Dec 09 09:19:48 crc kubenswrapper[4786]: I1209 09:19:48.839326 4786 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d67c7c4e-faa8-427e-953b-829c4e277994-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 09 09:19:48 crc kubenswrapper[4786]: I1209 09:19:48.839341 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6bps\" (UniqueName: \"kubernetes.io/projected/d67c7c4e-faa8-427e-953b-829c4e277994-kube-api-access-j6bps\") on node \"crc\" DevicePath \"\"" Dec 09 09:19:49 crc kubenswrapper[4786]: I1209 09:19:49.120224 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bnlh7" event={"ID":"d67c7c4e-faa8-427e-953b-829c4e277994","Type":"ContainerDied","Data":"fd5314c193599be91b02d6c5693a6d1d2a9be52fbbfb642dd22b853f492d3674"} Dec 09 09:19:49 crc kubenswrapper[4786]: I1209 09:19:49.120286 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd5314c193599be91b02d6c5693a6d1d2a9be52fbbfb642dd22b853f492d3674" Dec 09 09:19:49 crc kubenswrapper[4786]: I1209 09:19:49.120782 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-bnlh7" Dec 09 09:19:49 crc kubenswrapper[4786]: I1209 09:19:49.214586 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cks8r"] Dec 09 09:19:49 crc kubenswrapper[4786]: E1209 09:19:49.217786 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7355067a-3964-424b-8a32-7c12a747e12b" containerName="extract-content" Dec 09 09:19:49 crc kubenswrapper[4786]: I1209 09:19:49.217915 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="7355067a-3964-424b-8a32-7c12a747e12b" containerName="extract-content" Dec 09 09:19:49 crc kubenswrapper[4786]: E1209 09:19:49.218010 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7355067a-3964-424b-8a32-7c12a747e12b" containerName="registry-server" Dec 09 09:19:49 crc kubenswrapper[4786]: I1209 09:19:49.218091 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="7355067a-3964-424b-8a32-7c12a747e12b" containerName="registry-server" Dec 09 09:19:49 crc kubenswrapper[4786]: E1209 09:19:49.218158 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7355067a-3964-424b-8a32-7c12a747e12b" containerName="extract-utilities" Dec 09 09:19:49 crc kubenswrapper[4786]: I1209 09:19:49.218231 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="7355067a-3964-424b-8a32-7c12a747e12b" containerName="extract-utilities" Dec 09 09:19:49 crc kubenswrapper[4786]: E1209 09:19:49.218327 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d67c7c4e-faa8-427e-953b-829c4e277994" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 09 09:19:49 crc kubenswrapper[4786]: I1209 09:19:49.218406 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="d67c7c4e-faa8-427e-953b-829c4e277994" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 09 09:19:49 crc kubenswrapper[4786]: I1209 09:19:49.218765 
4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="7355067a-3964-424b-8a32-7c12a747e12b" containerName="registry-server" Dec 09 09:19:49 crc kubenswrapper[4786]: I1209 09:19:49.218842 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="d67c7c4e-faa8-427e-953b-829c4e277994" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 09 09:19:49 crc kubenswrapper[4786]: I1209 09:19:49.219835 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cks8r" Dec 09 09:19:49 crc kubenswrapper[4786]: I1209 09:19:49.223407 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-247ss" Dec 09 09:19:49 crc kubenswrapper[4786]: I1209 09:19:49.223440 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 09 09:19:49 crc kubenswrapper[4786]: I1209 09:19:49.224706 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 09 09:19:49 crc kubenswrapper[4786]: I1209 09:19:49.225033 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 09 09:19:49 crc kubenswrapper[4786]: I1209 09:19:49.225222 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cks8r"] Dec 09 09:19:49 crc kubenswrapper[4786]: I1209 09:19:49.350910 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e084b124-3f74-48a9-a0e4-6c9bea0d7875-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-cks8r\" (UID: \"e084b124-3f74-48a9-a0e4-6c9bea0d7875\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cks8r" Dec 09 09:19:49 crc kubenswrapper[4786]: I1209 
09:19:49.350961 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbtwd\" (UniqueName: \"kubernetes.io/projected/e084b124-3f74-48a9-a0e4-6c9bea0d7875-kube-api-access-mbtwd\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-cks8r\" (UID: \"e084b124-3f74-48a9-a0e4-6c9bea0d7875\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cks8r" Dec 09 09:19:49 crc kubenswrapper[4786]: I1209 09:19:49.351005 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e084b124-3f74-48a9-a0e4-6c9bea0d7875-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-cks8r\" (UID: \"e084b124-3f74-48a9-a0e4-6c9bea0d7875\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cks8r" Dec 09 09:19:49 crc kubenswrapper[4786]: I1209 09:19:49.453363 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e084b124-3f74-48a9-a0e4-6c9bea0d7875-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-cks8r\" (UID: \"e084b124-3f74-48a9-a0e4-6c9bea0d7875\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cks8r" Dec 09 09:19:49 crc kubenswrapper[4786]: I1209 09:19:49.453459 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbtwd\" (UniqueName: \"kubernetes.io/projected/e084b124-3f74-48a9-a0e4-6c9bea0d7875-kube-api-access-mbtwd\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-cks8r\" (UID: \"e084b124-3f74-48a9-a0e4-6c9bea0d7875\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cks8r" Dec 09 09:19:49 crc kubenswrapper[4786]: I1209 09:19:49.453539 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e084b124-3f74-48a9-a0e4-6c9bea0d7875-ssh-key\") pod 
\"configure-os-edpm-deployment-openstack-edpm-ipam-cks8r\" (UID: \"e084b124-3f74-48a9-a0e4-6c9bea0d7875\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cks8r" Dec 09 09:19:49 crc kubenswrapper[4786]: I1209 09:19:49.458596 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e084b124-3f74-48a9-a0e4-6c9bea0d7875-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-cks8r\" (UID: \"e084b124-3f74-48a9-a0e4-6c9bea0d7875\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cks8r" Dec 09 09:19:49 crc kubenswrapper[4786]: I1209 09:19:49.460167 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e084b124-3f74-48a9-a0e4-6c9bea0d7875-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-cks8r\" (UID: \"e084b124-3f74-48a9-a0e4-6c9bea0d7875\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cks8r" Dec 09 09:19:49 crc kubenswrapper[4786]: I1209 09:19:49.486220 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbtwd\" (UniqueName: \"kubernetes.io/projected/e084b124-3f74-48a9-a0e4-6c9bea0d7875-kube-api-access-mbtwd\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-cks8r\" (UID: \"e084b124-3f74-48a9-a0e4-6c9bea0d7875\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cks8r" Dec 09 09:19:49 crc kubenswrapper[4786]: I1209 09:19:49.549320 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cks8r" Dec 09 09:19:50 crc kubenswrapper[4786]: I1209 09:19:50.136765 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cks8r"] Dec 09 09:19:51 crc kubenswrapper[4786]: I1209 09:19:51.148707 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cks8r" event={"ID":"e084b124-3f74-48a9-a0e4-6c9bea0d7875","Type":"ContainerStarted","Data":"e02eabf146605608b1801b6c15820bebc856148c30149c6d12fe18d401e46f3d"} Dec 09 09:19:51 crc kubenswrapper[4786]: I1209 09:19:51.149267 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cks8r" event={"ID":"e084b124-3f74-48a9-a0e4-6c9bea0d7875","Type":"ContainerStarted","Data":"61466b68bd0b4a61ebd02a24fa3477989b978c6989974cace48dcceb629f57da"} Dec 09 09:19:51 crc kubenswrapper[4786]: I1209 09:19:51.169413 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cks8r" podStartSLOduration=1.748505277 podStartE2EDuration="2.169386739s" podCreationTimestamp="2025-12-09 09:19:49 +0000 UTC" firstStartedPulling="2025-12-09 09:19:50.153380389 +0000 UTC m=+2156.037001615" lastFinishedPulling="2025-12-09 09:19:50.574261851 +0000 UTC m=+2156.457883077" observedRunningTime="2025-12-09 09:19:51.161314498 +0000 UTC m=+2157.044935724" watchObservedRunningTime="2025-12-09 09:19:51.169386739 +0000 UTC m=+2157.053007965" Dec 09 09:19:55 crc kubenswrapper[4786]: I1209 09:19:55.099630 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 09:19:55 crc 
kubenswrapper[4786]: I1209 09:19:55.100192 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 09:19:57 crc kubenswrapper[4786]: I1209 09:19:57.065225 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-4r7mx"] Dec 09 09:19:57 crc kubenswrapper[4786]: I1209 09:19:57.075703 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-4r7mx"] Dec 09 09:19:57 crc kubenswrapper[4786]: I1209 09:19:57.217265 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a600e791-e33a-41b1-b5d0-2cc262bac81d" path="/var/lib/kubelet/pods/a600e791-e33a-41b1-b5d0-2cc262bac81d/volumes" Dec 09 09:20:03 crc kubenswrapper[4786]: I1209 09:20:03.911695 4786 scope.go:117] "RemoveContainer" containerID="253a504921a036b1977aafd6d2220bfa1d435624b1eb322b488499922e89a3d3" Dec 09 09:20:03 crc kubenswrapper[4786]: I1209 09:20:03.969371 4786 scope.go:117] "RemoveContainer" containerID="e249edd8100860c607c11c42a039efb72d042441be7f394caea03c6f2bf32cf3" Dec 09 09:20:04 crc kubenswrapper[4786]: I1209 09:20:04.041618 4786 scope.go:117] "RemoveContainer" containerID="2f48a1c3f00dd62cee43968e44376a46b5b8dd07d15d425d49393b298cff42ab" Dec 09 09:20:17 crc kubenswrapper[4786]: I1209 09:20:17.792587 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jgrbb"] Dec 09 09:20:17 crc kubenswrapper[4786]: I1209 09:20:17.798290 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jgrbb" Dec 09 09:20:17 crc kubenswrapper[4786]: I1209 09:20:17.804286 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jgrbb"] Dec 09 09:20:17 crc kubenswrapper[4786]: I1209 09:20:17.912586 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f112848-8620-4097-8185-f13a53d8bc4f-utilities\") pod \"certified-operators-jgrbb\" (UID: \"7f112848-8620-4097-8185-f13a53d8bc4f\") " pod="openshift-marketplace/certified-operators-jgrbb" Dec 09 09:20:17 crc kubenswrapper[4786]: I1209 09:20:17.912654 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzmcv\" (UniqueName: \"kubernetes.io/projected/7f112848-8620-4097-8185-f13a53d8bc4f-kube-api-access-tzmcv\") pod \"certified-operators-jgrbb\" (UID: \"7f112848-8620-4097-8185-f13a53d8bc4f\") " pod="openshift-marketplace/certified-operators-jgrbb" Dec 09 09:20:17 crc kubenswrapper[4786]: I1209 09:20:17.912810 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f112848-8620-4097-8185-f13a53d8bc4f-catalog-content\") pod \"certified-operators-jgrbb\" (UID: \"7f112848-8620-4097-8185-f13a53d8bc4f\") " pod="openshift-marketplace/certified-operators-jgrbb" Dec 09 09:20:18 crc kubenswrapper[4786]: I1209 09:20:18.015466 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f112848-8620-4097-8185-f13a53d8bc4f-catalog-content\") pod \"certified-operators-jgrbb\" (UID: \"7f112848-8620-4097-8185-f13a53d8bc4f\") " pod="openshift-marketplace/certified-operators-jgrbb" Dec 09 09:20:18 crc kubenswrapper[4786]: I1209 09:20:18.015980 4786 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f112848-8620-4097-8185-f13a53d8bc4f-utilities\") pod \"certified-operators-jgrbb\" (UID: \"7f112848-8620-4097-8185-f13a53d8bc4f\") " pod="openshift-marketplace/certified-operators-jgrbb" Dec 09 09:20:18 crc kubenswrapper[4786]: I1209 09:20:18.016039 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzmcv\" (UniqueName: \"kubernetes.io/projected/7f112848-8620-4097-8185-f13a53d8bc4f-kube-api-access-tzmcv\") pod \"certified-operators-jgrbb\" (UID: \"7f112848-8620-4097-8185-f13a53d8bc4f\") " pod="openshift-marketplace/certified-operators-jgrbb" Dec 09 09:20:18 crc kubenswrapper[4786]: I1209 09:20:18.016192 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f112848-8620-4097-8185-f13a53d8bc4f-catalog-content\") pod \"certified-operators-jgrbb\" (UID: \"7f112848-8620-4097-8185-f13a53d8bc4f\") " pod="openshift-marketplace/certified-operators-jgrbb" Dec 09 09:20:18 crc kubenswrapper[4786]: I1209 09:20:18.016543 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f112848-8620-4097-8185-f13a53d8bc4f-utilities\") pod \"certified-operators-jgrbb\" (UID: \"7f112848-8620-4097-8185-f13a53d8bc4f\") " pod="openshift-marketplace/certified-operators-jgrbb" Dec 09 09:20:18 crc kubenswrapper[4786]: I1209 09:20:18.066518 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzmcv\" (UniqueName: \"kubernetes.io/projected/7f112848-8620-4097-8185-f13a53d8bc4f-kube-api-access-tzmcv\") pod \"certified-operators-jgrbb\" (UID: \"7f112848-8620-4097-8185-f13a53d8bc4f\") " pod="openshift-marketplace/certified-operators-jgrbb" Dec 09 09:20:18 crc kubenswrapper[4786]: I1209 09:20:18.129046 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jgrbb" Dec 09 09:20:18 crc kubenswrapper[4786]: I1209 09:20:18.631296 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jgrbb"] Dec 09 09:20:19 crc kubenswrapper[4786]: I1209 09:20:19.608713 4786 generic.go:334] "Generic (PLEG): container finished" podID="7f112848-8620-4097-8185-f13a53d8bc4f" containerID="7f230f18e5f3c0d16c7f873d671075a6197d75236811450d813e4797433a7ad3" exitCode=0 Dec 09 09:20:19 crc kubenswrapper[4786]: I1209 09:20:19.608824 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jgrbb" event={"ID":"7f112848-8620-4097-8185-f13a53d8bc4f","Type":"ContainerDied","Data":"7f230f18e5f3c0d16c7f873d671075a6197d75236811450d813e4797433a7ad3"} Dec 09 09:20:19 crc kubenswrapper[4786]: I1209 09:20:19.609070 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jgrbb" event={"ID":"7f112848-8620-4097-8185-f13a53d8bc4f","Type":"ContainerStarted","Data":"2e23cad5c7c9d88251c761db45bf2e84ae68602531600714c20741a9c2a350ca"} Dec 09 09:20:21 crc kubenswrapper[4786]: I1209 09:20:21.783852 4786 generic.go:334] "Generic (PLEG): container finished" podID="7f112848-8620-4097-8185-f13a53d8bc4f" containerID="24f9fb53be47b1e0664087b1f5f1db3b023652ef0f602d3c4babb3805765e7a7" exitCode=0 Dec 09 09:20:21 crc kubenswrapper[4786]: I1209 09:20:21.784741 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jgrbb" event={"ID":"7f112848-8620-4097-8185-f13a53d8bc4f","Type":"ContainerDied","Data":"24f9fb53be47b1e0664087b1f5f1db3b023652ef0f602d3c4babb3805765e7a7"} Dec 09 09:20:22 crc kubenswrapper[4786]: I1209 09:20:22.799197 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jgrbb" 
event={"ID":"7f112848-8620-4097-8185-f13a53d8bc4f","Type":"ContainerStarted","Data":"6c55787150c8c15d80e4bd7ce17a83b2c46d7b1d414b52f209458719b4335530"} Dec 09 09:20:22 crc kubenswrapper[4786]: I1209 09:20:22.825379 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jgrbb" podStartSLOduration=3.218437136 podStartE2EDuration="5.825341971s" podCreationTimestamp="2025-12-09 09:20:17 +0000 UTC" firstStartedPulling="2025-12-09 09:20:19.612638872 +0000 UTC m=+2185.496260098" lastFinishedPulling="2025-12-09 09:20:22.219543707 +0000 UTC m=+2188.103164933" observedRunningTime="2025-12-09 09:20:22.815885945 +0000 UTC m=+2188.699507171" watchObservedRunningTime="2025-12-09 09:20:22.825341971 +0000 UTC m=+2188.708963197" Dec 09 09:20:24 crc kubenswrapper[4786]: I1209 09:20:24.989067 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 09:20:24 crc kubenswrapper[4786]: I1209 09:20:24.989933 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 09:20:24 crc kubenswrapper[4786]: I1209 09:20:24.990003 4786 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" Dec 09 09:20:24 crc kubenswrapper[4786]: I1209 09:20:24.991111 4786 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"fc569af6322bbbb8d4af41759fe3e9f8f89e3ce11ef3ade02346217ef0910593"} pod="openshift-machine-config-operator/machine-config-daemon-86k5n" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 09:20:24 crc kubenswrapper[4786]: I1209 09:20:24.991191 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" containerID="cri-o://fc569af6322bbbb8d4af41759fe3e9f8f89e3ce11ef3ade02346217ef0910593" gracePeriod=600 Dec 09 09:20:25 crc kubenswrapper[4786]: I1209 09:20:25.836015 4786 generic.go:334] "Generic (PLEG): container finished" podID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerID="fc569af6322bbbb8d4af41759fe3e9f8f89e3ce11ef3ade02346217ef0910593" exitCode=0 Dec 09 09:20:25 crc kubenswrapper[4786]: I1209 09:20:25.836107 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" event={"ID":"60c502d4-7f9e-4d39-a197-fa70dc4a56d1","Type":"ContainerDied","Data":"fc569af6322bbbb8d4af41759fe3e9f8f89e3ce11ef3ade02346217ef0910593"} Dec 09 09:20:25 crc kubenswrapper[4786]: I1209 09:20:25.836453 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" event={"ID":"60c502d4-7f9e-4d39-a197-fa70dc4a56d1","Type":"ContainerStarted","Data":"10574053a93a4c7cf48dacebbc753cce1f320aa5c82046a5334cb5c07c4a4872"} Dec 09 09:20:25 crc kubenswrapper[4786]: I1209 09:20:25.836486 4786 scope.go:117] "RemoveContainer" containerID="b616eba668b7cb1fd5bcff5f3a2dbd4ee63df5f21169255fbb63b6a628b5d197" Dec 09 09:20:28 crc kubenswrapper[4786]: I1209 09:20:28.129988 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jgrbb" Dec 09 09:20:28 crc 
kubenswrapper[4786]: I1209 09:20:28.131128 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jgrbb" Dec 09 09:20:28 crc kubenswrapper[4786]: I1209 09:20:28.185315 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jgrbb" Dec 09 09:20:28 crc kubenswrapper[4786]: I1209 09:20:28.944453 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jgrbb" Dec 09 09:20:28 crc kubenswrapper[4786]: I1209 09:20:28.998876 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jgrbb"] Dec 09 09:20:30 crc kubenswrapper[4786]: I1209 09:20:30.913228 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jgrbb" podUID="7f112848-8620-4097-8185-f13a53d8bc4f" containerName="registry-server" containerID="cri-o://6c55787150c8c15d80e4bd7ce17a83b2c46d7b1d414b52f209458719b4335530" gracePeriod=2 Dec 09 09:20:32 crc kubenswrapper[4786]: I1209 09:20:32.287742 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jgrbb" Dec 09 09:20:32 crc kubenswrapper[4786]: I1209 09:20:32.290104 4786 generic.go:334] "Generic (PLEG): container finished" podID="7f112848-8620-4097-8185-f13a53d8bc4f" containerID="6c55787150c8c15d80e4bd7ce17a83b2c46d7b1d414b52f209458719b4335530" exitCode=0 Dec 09 09:20:32 crc kubenswrapper[4786]: I1209 09:20:32.290159 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jgrbb" event={"ID":"7f112848-8620-4097-8185-f13a53d8bc4f","Type":"ContainerDied","Data":"6c55787150c8c15d80e4bd7ce17a83b2c46d7b1d414b52f209458719b4335530"} Dec 09 09:20:32 crc kubenswrapper[4786]: I1209 09:20:32.290199 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jgrbb" event={"ID":"7f112848-8620-4097-8185-f13a53d8bc4f","Type":"ContainerDied","Data":"2e23cad5c7c9d88251c761db45bf2e84ae68602531600714c20741a9c2a350ca"} Dec 09 09:20:32 crc kubenswrapper[4786]: I1209 09:20:32.290217 4786 scope.go:117] "RemoveContainer" containerID="6c55787150c8c15d80e4bd7ce17a83b2c46d7b1d414b52f209458719b4335530" Dec 09 09:20:32 crc kubenswrapper[4786]: I1209 09:20:32.360582 4786 scope.go:117] "RemoveContainer" containerID="24f9fb53be47b1e0664087b1f5f1db3b023652ef0f602d3c4babb3805765e7a7" Dec 09 09:20:32 crc kubenswrapper[4786]: I1209 09:20:32.371768 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f112848-8620-4097-8185-f13a53d8bc4f-utilities\") pod \"7f112848-8620-4097-8185-f13a53d8bc4f\" (UID: \"7f112848-8620-4097-8185-f13a53d8bc4f\") " Dec 09 09:20:32 crc kubenswrapper[4786]: I1209 09:20:32.371947 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzmcv\" (UniqueName: \"kubernetes.io/projected/7f112848-8620-4097-8185-f13a53d8bc4f-kube-api-access-tzmcv\") pod \"7f112848-8620-4097-8185-f13a53d8bc4f\" 
(UID: \"7f112848-8620-4097-8185-f13a53d8bc4f\") " Dec 09 09:20:32 crc kubenswrapper[4786]: I1209 09:20:32.372001 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f112848-8620-4097-8185-f13a53d8bc4f-catalog-content\") pod \"7f112848-8620-4097-8185-f13a53d8bc4f\" (UID: \"7f112848-8620-4097-8185-f13a53d8bc4f\") " Dec 09 09:20:32 crc kubenswrapper[4786]: I1209 09:20:32.378062 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f112848-8620-4097-8185-f13a53d8bc4f-utilities" (OuterVolumeSpecName: "utilities") pod "7f112848-8620-4097-8185-f13a53d8bc4f" (UID: "7f112848-8620-4097-8185-f13a53d8bc4f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:20:32 crc kubenswrapper[4786]: I1209 09:20:32.379028 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f112848-8620-4097-8185-f13a53d8bc4f-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 09:20:32 crc kubenswrapper[4786]: I1209 09:20:32.408148 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f112848-8620-4097-8185-f13a53d8bc4f-kube-api-access-tzmcv" (OuterVolumeSpecName: "kube-api-access-tzmcv") pod "7f112848-8620-4097-8185-f13a53d8bc4f" (UID: "7f112848-8620-4097-8185-f13a53d8bc4f"). InnerVolumeSpecName "kube-api-access-tzmcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:20:32 crc kubenswrapper[4786]: I1209 09:20:32.482730 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f112848-8620-4097-8185-f13a53d8bc4f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7f112848-8620-4097-8185-f13a53d8bc4f" (UID: "7f112848-8620-4097-8185-f13a53d8bc4f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:20:32 crc kubenswrapper[4786]: I1209 09:20:32.485010 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzmcv\" (UniqueName: \"kubernetes.io/projected/7f112848-8620-4097-8185-f13a53d8bc4f-kube-api-access-tzmcv\") on node \"crc\" DevicePath \"\"" Dec 09 09:20:32 crc kubenswrapper[4786]: I1209 09:20:32.485047 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f112848-8620-4097-8185-f13a53d8bc4f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 09:20:32 crc kubenswrapper[4786]: I1209 09:20:32.501584 4786 scope.go:117] "RemoveContainer" containerID="7f230f18e5f3c0d16c7f873d671075a6197d75236811450d813e4797433a7ad3" Dec 09 09:20:32 crc kubenswrapper[4786]: I1209 09:20:32.603641 4786 scope.go:117] "RemoveContainer" containerID="6c55787150c8c15d80e4bd7ce17a83b2c46d7b1d414b52f209458719b4335530" Dec 09 09:20:32 crc kubenswrapper[4786]: E1209 09:20:32.605103 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c55787150c8c15d80e4bd7ce17a83b2c46d7b1d414b52f209458719b4335530\": container with ID starting with 6c55787150c8c15d80e4bd7ce17a83b2c46d7b1d414b52f209458719b4335530 not found: ID does not exist" containerID="6c55787150c8c15d80e4bd7ce17a83b2c46d7b1d414b52f209458719b4335530" Dec 09 09:20:32 crc kubenswrapper[4786]: I1209 09:20:32.605135 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c55787150c8c15d80e4bd7ce17a83b2c46d7b1d414b52f209458719b4335530"} err="failed to get container status \"6c55787150c8c15d80e4bd7ce17a83b2c46d7b1d414b52f209458719b4335530\": rpc error: code = NotFound desc = could not find container \"6c55787150c8c15d80e4bd7ce17a83b2c46d7b1d414b52f209458719b4335530\": container with ID starting with 6c55787150c8c15d80e4bd7ce17a83b2c46d7b1d414b52f209458719b4335530 not 
found: ID does not exist" Dec 09 09:20:32 crc kubenswrapper[4786]: I1209 09:20:32.605161 4786 scope.go:117] "RemoveContainer" containerID="24f9fb53be47b1e0664087b1f5f1db3b023652ef0f602d3c4babb3805765e7a7" Dec 09 09:20:32 crc kubenswrapper[4786]: E1209 09:20:32.608878 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24f9fb53be47b1e0664087b1f5f1db3b023652ef0f602d3c4babb3805765e7a7\": container with ID starting with 24f9fb53be47b1e0664087b1f5f1db3b023652ef0f602d3c4babb3805765e7a7 not found: ID does not exist" containerID="24f9fb53be47b1e0664087b1f5f1db3b023652ef0f602d3c4babb3805765e7a7" Dec 09 09:20:32 crc kubenswrapper[4786]: I1209 09:20:32.608956 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24f9fb53be47b1e0664087b1f5f1db3b023652ef0f602d3c4babb3805765e7a7"} err="failed to get container status \"24f9fb53be47b1e0664087b1f5f1db3b023652ef0f602d3c4babb3805765e7a7\": rpc error: code = NotFound desc = could not find container \"24f9fb53be47b1e0664087b1f5f1db3b023652ef0f602d3c4babb3805765e7a7\": container with ID starting with 24f9fb53be47b1e0664087b1f5f1db3b023652ef0f602d3c4babb3805765e7a7 not found: ID does not exist" Dec 09 09:20:32 crc kubenswrapper[4786]: I1209 09:20:32.609008 4786 scope.go:117] "RemoveContainer" containerID="7f230f18e5f3c0d16c7f873d671075a6197d75236811450d813e4797433a7ad3" Dec 09 09:20:32 crc kubenswrapper[4786]: E1209 09:20:32.611910 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f230f18e5f3c0d16c7f873d671075a6197d75236811450d813e4797433a7ad3\": container with ID starting with 7f230f18e5f3c0d16c7f873d671075a6197d75236811450d813e4797433a7ad3 not found: ID does not exist" containerID="7f230f18e5f3c0d16c7f873d671075a6197d75236811450d813e4797433a7ad3" Dec 09 09:20:32 crc kubenswrapper[4786]: I1209 09:20:32.611965 4786 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f230f18e5f3c0d16c7f873d671075a6197d75236811450d813e4797433a7ad3"} err="failed to get container status \"7f230f18e5f3c0d16c7f873d671075a6197d75236811450d813e4797433a7ad3\": rpc error: code = NotFound desc = could not find container \"7f230f18e5f3c0d16c7f873d671075a6197d75236811450d813e4797433a7ad3\": container with ID starting with 7f230f18e5f3c0d16c7f873d671075a6197d75236811450d813e4797433a7ad3 not found: ID does not exist" Dec 09 09:20:33 crc kubenswrapper[4786]: I1209 09:20:33.302002 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jgrbb" Dec 09 09:20:33 crc kubenswrapper[4786]: I1209 09:20:33.325897 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jgrbb"] Dec 09 09:20:33 crc kubenswrapper[4786]: I1209 09:20:33.334747 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jgrbb"] Dec 09 09:20:35 crc kubenswrapper[4786]: I1209 09:20:35.204131 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f112848-8620-4097-8185-f13a53d8bc4f" path="/var/lib/kubelet/pods/7f112848-8620-4097-8185-f13a53d8bc4f/volumes" Dec 09 09:20:49 crc kubenswrapper[4786]: I1209 09:20:49.511322 4786 generic.go:334] "Generic (PLEG): container finished" podID="e084b124-3f74-48a9-a0e4-6c9bea0d7875" containerID="e02eabf146605608b1801b6c15820bebc856148c30149c6d12fe18d401e46f3d" exitCode=0 Dec 09 09:20:49 crc kubenswrapper[4786]: I1209 09:20:49.511444 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cks8r" event={"ID":"e084b124-3f74-48a9-a0e4-6c9bea0d7875","Type":"ContainerDied","Data":"e02eabf146605608b1801b6c15820bebc856148c30149c6d12fe18d401e46f3d"} Dec 09 09:20:50 crc kubenswrapper[4786]: I1209 09:20:50.984479 4786 util.go:48] "No 
ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cks8r" Dec 09 09:20:51 crc kubenswrapper[4786]: I1209 09:20:51.174346 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbtwd\" (UniqueName: \"kubernetes.io/projected/e084b124-3f74-48a9-a0e4-6c9bea0d7875-kube-api-access-mbtwd\") pod \"e084b124-3f74-48a9-a0e4-6c9bea0d7875\" (UID: \"e084b124-3f74-48a9-a0e4-6c9bea0d7875\") " Dec 09 09:20:51 crc kubenswrapper[4786]: I1209 09:20:51.174848 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e084b124-3f74-48a9-a0e4-6c9bea0d7875-ssh-key\") pod \"e084b124-3f74-48a9-a0e4-6c9bea0d7875\" (UID: \"e084b124-3f74-48a9-a0e4-6c9bea0d7875\") " Dec 09 09:20:51 crc kubenswrapper[4786]: I1209 09:20:51.175025 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e084b124-3f74-48a9-a0e4-6c9bea0d7875-inventory\") pod \"e084b124-3f74-48a9-a0e4-6c9bea0d7875\" (UID: \"e084b124-3f74-48a9-a0e4-6c9bea0d7875\") " Dec 09 09:20:51 crc kubenswrapper[4786]: I1209 09:20:51.181662 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e084b124-3f74-48a9-a0e4-6c9bea0d7875-kube-api-access-mbtwd" (OuterVolumeSpecName: "kube-api-access-mbtwd") pod "e084b124-3f74-48a9-a0e4-6c9bea0d7875" (UID: "e084b124-3f74-48a9-a0e4-6c9bea0d7875"). InnerVolumeSpecName "kube-api-access-mbtwd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:20:51 crc kubenswrapper[4786]: I1209 09:20:51.215198 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e084b124-3f74-48a9-a0e4-6c9bea0d7875-inventory" (OuterVolumeSpecName: "inventory") pod "e084b124-3f74-48a9-a0e4-6c9bea0d7875" (UID: "e084b124-3f74-48a9-a0e4-6c9bea0d7875"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:20:51 crc kubenswrapper[4786]: I1209 09:20:51.223480 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e084b124-3f74-48a9-a0e4-6c9bea0d7875-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e084b124-3f74-48a9-a0e4-6c9bea0d7875" (UID: "e084b124-3f74-48a9-a0e4-6c9bea0d7875"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:20:51 crc kubenswrapper[4786]: I1209 09:20:51.285197 4786 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e084b124-3f74-48a9-a0e4-6c9bea0d7875-inventory\") on node \"crc\" DevicePath \"\"" Dec 09 09:20:51 crc kubenswrapper[4786]: I1209 09:20:51.285292 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbtwd\" (UniqueName: \"kubernetes.io/projected/e084b124-3f74-48a9-a0e4-6c9bea0d7875-kube-api-access-mbtwd\") on node \"crc\" DevicePath \"\"" Dec 09 09:20:51 crc kubenswrapper[4786]: I1209 09:20:51.285334 4786 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e084b124-3f74-48a9-a0e4-6c9bea0d7875-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 09 09:20:51 crc kubenswrapper[4786]: I1209 09:20:51.534494 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cks8r" event={"ID":"e084b124-3f74-48a9-a0e4-6c9bea0d7875","Type":"ContainerDied","Data":"61466b68bd0b4a61ebd02a24fa3477989b978c6989974cace48dcceb629f57da"} Dec 09 09:20:51 crc kubenswrapper[4786]: I1209 09:20:51.534540 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61466b68bd0b4a61ebd02a24fa3477989b978c6989974cace48dcceb629f57da" Dec 09 09:20:51 crc kubenswrapper[4786]: I1209 09:20:51.534633 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-cks8r" Dec 09 09:20:51 crc kubenswrapper[4786]: I1209 09:20:51.643646 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-swwrk"] Dec 09 09:20:51 crc kubenswrapper[4786]: E1209 09:20:51.644215 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f112848-8620-4097-8185-f13a53d8bc4f" containerName="extract-utilities" Dec 09 09:20:51 crc kubenswrapper[4786]: I1209 09:20:51.644243 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f112848-8620-4097-8185-f13a53d8bc4f" containerName="extract-utilities" Dec 09 09:20:51 crc kubenswrapper[4786]: E1209 09:20:51.644262 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e084b124-3f74-48a9-a0e4-6c9bea0d7875" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 09 09:20:51 crc kubenswrapper[4786]: I1209 09:20:51.644274 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="e084b124-3f74-48a9-a0e4-6c9bea0d7875" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 09 09:20:51 crc kubenswrapper[4786]: E1209 09:20:51.644299 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f112848-8620-4097-8185-f13a53d8bc4f" containerName="registry-server" Dec 09 09:20:51 crc kubenswrapper[4786]: I1209 09:20:51.644307 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f112848-8620-4097-8185-f13a53d8bc4f" containerName="registry-server" Dec 09 09:20:51 crc kubenswrapper[4786]: E1209 09:20:51.644340 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f112848-8620-4097-8185-f13a53d8bc4f" containerName="extract-content" Dec 09 09:20:51 crc kubenswrapper[4786]: I1209 09:20:51.644348 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f112848-8620-4097-8185-f13a53d8bc4f" containerName="extract-content" Dec 09 09:20:51 crc kubenswrapper[4786]: I1209 09:20:51.644632 4786 
memory_manager.go:354] "RemoveStaleState removing state" podUID="e084b124-3f74-48a9-a0e4-6c9bea0d7875" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 09 09:20:51 crc kubenswrapper[4786]: I1209 09:20:51.644664 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f112848-8620-4097-8185-f13a53d8bc4f" containerName="registry-server" Dec 09 09:20:51 crc kubenswrapper[4786]: I1209 09:20:51.645661 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-swwrk" Dec 09 09:20:51 crc kubenswrapper[4786]: I1209 09:20:51.648587 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-247ss" Dec 09 09:20:51 crc kubenswrapper[4786]: I1209 09:20:51.649174 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 09 09:20:51 crc kubenswrapper[4786]: I1209 09:20:51.649439 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 09 09:20:51 crc kubenswrapper[4786]: I1209 09:20:51.649462 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 09 09:20:51 crc kubenswrapper[4786]: I1209 09:20:51.661724 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-swwrk"] Dec 09 09:20:51 crc kubenswrapper[4786]: I1209 09:20:51.794861 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a1e77ab0-8d5d-421a-97df-bde0fa1abdfe-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-swwrk\" (UID: \"a1e77ab0-8d5d-421a-97df-bde0fa1abdfe\") " pod="openstack/ssh-known-hosts-edpm-deployment-swwrk" Dec 09 09:20:51 crc kubenswrapper[4786]: I1209 09:20:51.795004 4786 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nprxf\" (UniqueName: \"kubernetes.io/projected/a1e77ab0-8d5d-421a-97df-bde0fa1abdfe-kube-api-access-nprxf\") pod \"ssh-known-hosts-edpm-deployment-swwrk\" (UID: \"a1e77ab0-8d5d-421a-97df-bde0fa1abdfe\") " pod="openstack/ssh-known-hosts-edpm-deployment-swwrk" Dec 09 09:20:51 crc kubenswrapper[4786]: I1209 09:20:51.795043 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/a1e77ab0-8d5d-421a-97df-bde0fa1abdfe-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-swwrk\" (UID: \"a1e77ab0-8d5d-421a-97df-bde0fa1abdfe\") " pod="openstack/ssh-known-hosts-edpm-deployment-swwrk" Dec 09 09:20:51 crc kubenswrapper[4786]: I1209 09:20:51.897469 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a1e77ab0-8d5d-421a-97df-bde0fa1abdfe-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-swwrk\" (UID: \"a1e77ab0-8d5d-421a-97df-bde0fa1abdfe\") " pod="openstack/ssh-known-hosts-edpm-deployment-swwrk" Dec 09 09:20:51 crc kubenswrapper[4786]: I1209 09:20:51.897660 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nprxf\" (UniqueName: \"kubernetes.io/projected/a1e77ab0-8d5d-421a-97df-bde0fa1abdfe-kube-api-access-nprxf\") pod \"ssh-known-hosts-edpm-deployment-swwrk\" (UID: \"a1e77ab0-8d5d-421a-97df-bde0fa1abdfe\") " pod="openstack/ssh-known-hosts-edpm-deployment-swwrk" Dec 09 09:20:51 crc kubenswrapper[4786]: I1209 09:20:51.897767 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/a1e77ab0-8d5d-421a-97df-bde0fa1abdfe-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-swwrk\" (UID: \"a1e77ab0-8d5d-421a-97df-bde0fa1abdfe\") " 
pod="openstack/ssh-known-hosts-edpm-deployment-swwrk" Dec 09 09:20:51 crc kubenswrapper[4786]: I1209 09:20:51.903044 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/a1e77ab0-8d5d-421a-97df-bde0fa1abdfe-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-swwrk\" (UID: \"a1e77ab0-8d5d-421a-97df-bde0fa1abdfe\") " pod="openstack/ssh-known-hosts-edpm-deployment-swwrk" Dec 09 09:20:51 crc kubenswrapper[4786]: I1209 09:20:51.903608 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a1e77ab0-8d5d-421a-97df-bde0fa1abdfe-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-swwrk\" (UID: \"a1e77ab0-8d5d-421a-97df-bde0fa1abdfe\") " pod="openstack/ssh-known-hosts-edpm-deployment-swwrk" Dec 09 09:20:51 crc kubenswrapper[4786]: I1209 09:20:51.919634 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nprxf\" (UniqueName: \"kubernetes.io/projected/a1e77ab0-8d5d-421a-97df-bde0fa1abdfe-kube-api-access-nprxf\") pod \"ssh-known-hosts-edpm-deployment-swwrk\" (UID: \"a1e77ab0-8d5d-421a-97df-bde0fa1abdfe\") " pod="openstack/ssh-known-hosts-edpm-deployment-swwrk" Dec 09 09:20:51 crc kubenswrapper[4786]: I1209 09:20:51.975717 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-swwrk" Dec 09 09:20:52 crc kubenswrapper[4786]: I1209 09:20:52.540184 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-swwrk"] Dec 09 09:20:53 crc kubenswrapper[4786]: I1209 09:20:53.554581 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-swwrk" event={"ID":"a1e77ab0-8d5d-421a-97df-bde0fa1abdfe","Type":"ContainerStarted","Data":"0477ed2126858cff33b52250c6ea0ce31bb6d3985f3c9a7b77699b769d46c364"} Dec 09 09:20:53 crc kubenswrapper[4786]: I1209 09:20:53.555198 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-swwrk" event={"ID":"a1e77ab0-8d5d-421a-97df-bde0fa1abdfe","Type":"ContainerStarted","Data":"9d53f84fbca888f901bdc6a5a6627763bf15690491d764bd9b1dc198bec0205f"} Dec 09 09:20:53 crc kubenswrapper[4786]: I1209 09:20:53.569641 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-swwrk" podStartSLOduration=2.174102822 podStartE2EDuration="2.569618796s" podCreationTimestamp="2025-12-09 09:20:51 +0000 UTC" firstStartedPulling="2025-12-09 09:20:52.550464064 +0000 UTC m=+2218.434085280" lastFinishedPulling="2025-12-09 09:20:52.945980018 +0000 UTC m=+2218.829601254" observedRunningTime="2025-12-09 09:20:53.569291179 +0000 UTC m=+2219.452912395" watchObservedRunningTime="2025-12-09 09:20:53.569618796 +0000 UTC m=+2219.453240022" Dec 09 09:21:00 crc kubenswrapper[4786]: I1209 09:21:00.629834 4786 generic.go:334] "Generic (PLEG): container finished" podID="a1e77ab0-8d5d-421a-97df-bde0fa1abdfe" containerID="0477ed2126858cff33b52250c6ea0ce31bb6d3985f3c9a7b77699b769d46c364" exitCode=0 Dec 09 09:21:00 crc kubenswrapper[4786]: I1209 09:21:00.629925 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-swwrk" 
event={"ID":"a1e77ab0-8d5d-421a-97df-bde0fa1abdfe","Type":"ContainerDied","Data":"0477ed2126858cff33b52250c6ea0ce31bb6d3985f3c9a7b77699b769d46c364"} Dec 09 09:21:02 crc kubenswrapper[4786]: I1209 09:21:02.082667 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-swwrk" Dec 09 09:21:02 crc kubenswrapper[4786]: I1209 09:21:02.221593 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a1e77ab0-8d5d-421a-97df-bde0fa1abdfe-ssh-key-openstack-edpm-ipam\") pod \"a1e77ab0-8d5d-421a-97df-bde0fa1abdfe\" (UID: \"a1e77ab0-8d5d-421a-97df-bde0fa1abdfe\") " Dec 09 09:21:02 crc kubenswrapper[4786]: I1209 09:21:02.221693 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/a1e77ab0-8d5d-421a-97df-bde0fa1abdfe-inventory-0\") pod \"a1e77ab0-8d5d-421a-97df-bde0fa1abdfe\" (UID: \"a1e77ab0-8d5d-421a-97df-bde0fa1abdfe\") " Dec 09 09:21:02 crc kubenswrapper[4786]: I1209 09:21:02.221739 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nprxf\" (UniqueName: \"kubernetes.io/projected/a1e77ab0-8d5d-421a-97df-bde0fa1abdfe-kube-api-access-nprxf\") pod \"a1e77ab0-8d5d-421a-97df-bde0fa1abdfe\" (UID: \"a1e77ab0-8d5d-421a-97df-bde0fa1abdfe\") " Dec 09 09:21:02 crc kubenswrapper[4786]: I1209 09:21:02.227513 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1e77ab0-8d5d-421a-97df-bde0fa1abdfe-kube-api-access-nprxf" (OuterVolumeSpecName: "kube-api-access-nprxf") pod "a1e77ab0-8d5d-421a-97df-bde0fa1abdfe" (UID: "a1e77ab0-8d5d-421a-97df-bde0fa1abdfe"). InnerVolumeSpecName "kube-api-access-nprxf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:21:02 crc kubenswrapper[4786]: I1209 09:21:02.252192 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1e77ab0-8d5d-421a-97df-bde0fa1abdfe-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "a1e77ab0-8d5d-421a-97df-bde0fa1abdfe" (UID: "a1e77ab0-8d5d-421a-97df-bde0fa1abdfe"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:21:02 crc kubenswrapper[4786]: I1209 09:21:02.256185 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1e77ab0-8d5d-421a-97df-bde0fa1abdfe-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "a1e77ab0-8d5d-421a-97df-bde0fa1abdfe" (UID: "a1e77ab0-8d5d-421a-97df-bde0fa1abdfe"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:21:02 crc kubenswrapper[4786]: I1209 09:21:02.325132 4786 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/a1e77ab0-8d5d-421a-97df-bde0fa1abdfe-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 09 09:21:02 crc kubenswrapper[4786]: I1209 09:21:02.325188 4786 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/a1e77ab0-8d5d-421a-97df-bde0fa1abdfe-inventory-0\") on node \"crc\" DevicePath \"\"" Dec 09 09:21:02 crc kubenswrapper[4786]: I1209 09:21:02.325207 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nprxf\" (UniqueName: \"kubernetes.io/projected/a1e77ab0-8d5d-421a-97df-bde0fa1abdfe-kube-api-access-nprxf\") on node \"crc\" DevicePath \"\"" Dec 09 09:21:02 crc kubenswrapper[4786]: I1209 09:21:02.652301 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-swwrk" 
event={"ID":"a1e77ab0-8d5d-421a-97df-bde0fa1abdfe","Type":"ContainerDied","Data":"9d53f84fbca888f901bdc6a5a6627763bf15690491d764bd9b1dc198bec0205f"} Dec 09 09:21:02 crc kubenswrapper[4786]: I1209 09:21:02.652353 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d53f84fbca888f901bdc6a5a6627763bf15690491d764bd9b1dc198bec0205f" Dec 09 09:21:02 crc kubenswrapper[4786]: I1209 09:21:02.652416 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-swwrk" Dec 09 09:21:02 crc kubenswrapper[4786]: I1209 09:21:02.738105 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-pjlq8"] Dec 09 09:21:02 crc kubenswrapper[4786]: E1209 09:21:02.738581 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1e77ab0-8d5d-421a-97df-bde0fa1abdfe" containerName="ssh-known-hosts-edpm-deployment" Dec 09 09:21:02 crc kubenswrapper[4786]: I1209 09:21:02.738598 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1e77ab0-8d5d-421a-97df-bde0fa1abdfe" containerName="ssh-known-hosts-edpm-deployment" Dec 09 09:21:02 crc kubenswrapper[4786]: I1209 09:21:02.738851 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1e77ab0-8d5d-421a-97df-bde0fa1abdfe" containerName="ssh-known-hosts-edpm-deployment" Dec 09 09:21:02 crc kubenswrapper[4786]: I1209 09:21:02.739586 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pjlq8"
Dec 09 09:21:02 crc kubenswrapper[4786]: I1209 09:21:02.742056 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-247ss"
Dec 09 09:21:02 crc kubenswrapper[4786]: I1209 09:21:02.742686 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 09 09:21:02 crc kubenswrapper[4786]: I1209 09:21:02.748768 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 09 09:21:02 crc kubenswrapper[4786]: I1209 09:21:02.749256 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 09 09:21:02 crc kubenswrapper[4786]: I1209 09:21:02.759093 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-pjlq8"]
Dec 09 09:21:02 crc kubenswrapper[4786]: I1209 09:21:02.836589 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a4285d16-6fd5-45d2-b13c-1ed58d8ca8f1-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pjlq8\" (UID: \"a4285d16-6fd5-45d2-b13c-1ed58d8ca8f1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pjlq8"
Dec 09 09:21:02 crc kubenswrapper[4786]: I1209 09:21:02.836661 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whb96\" (UniqueName: \"kubernetes.io/projected/a4285d16-6fd5-45d2-b13c-1ed58d8ca8f1-kube-api-access-whb96\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pjlq8\" (UID: \"a4285d16-6fd5-45d2-b13c-1ed58d8ca8f1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pjlq8"
Dec 09 09:21:02 crc kubenswrapper[4786]: I1209 09:21:02.836759 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4285d16-6fd5-45d2-b13c-1ed58d8ca8f1-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pjlq8\" (UID: \"a4285d16-6fd5-45d2-b13c-1ed58d8ca8f1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pjlq8"
Dec 09 09:21:02 crc kubenswrapper[4786]: I1209 09:21:02.939841 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a4285d16-6fd5-45d2-b13c-1ed58d8ca8f1-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pjlq8\" (UID: \"a4285d16-6fd5-45d2-b13c-1ed58d8ca8f1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pjlq8"
Dec 09 09:21:02 crc kubenswrapper[4786]: I1209 09:21:02.939890 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whb96\" (UniqueName: \"kubernetes.io/projected/a4285d16-6fd5-45d2-b13c-1ed58d8ca8f1-kube-api-access-whb96\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pjlq8\" (UID: \"a4285d16-6fd5-45d2-b13c-1ed58d8ca8f1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pjlq8"
Dec 09 09:21:02 crc kubenswrapper[4786]: I1209 09:21:02.939975 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4285d16-6fd5-45d2-b13c-1ed58d8ca8f1-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pjlq8\" (UID: \"a4285d16-6fd5-45d2-b13c-1ed58d8ca8f1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pjlq8"
Dec 09 09:21:02 crc kubenswrapper[4786]: I1209 09:21:02.944183 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a4285d16-6fd5-45d2-b13c-1ed58d8ca8f1-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pjlq8\" (UID: \"a4285d16-6fd5-45d2-b13c-1ed58d8ca8f1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pjlq8"
Dec 09 09:21:02 crc kubenswrapper[4786]: I1209 09:21:02.944193 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4285d16-6fd5-45d2-b13c-1ed58d8ca8f1-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pjlq8\" (UID: \"a4285d16-6fd5-45d2-b13c-1ed58d8ca8f1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pjlq8"
Dec 09 09:21:02 crc kubenswrapper[4786]: I1209 09:21:02.962244 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whb96\" (UniqueName: \"kubernetes.io/projected/a4285d16-6fd5-45d2-b13c-1ed58d8ca8f1-kube-api-access-whb96\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pjlq8\" (UID: \"a4285d16-6fd5-45d2-b13c-1ed58d8ca8f1\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pjlq8"
Dec 09 09:21:03 crc kubenswrapper[4786]: I1209 09:21:03.062800 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pjlq8"
Dec 09 09:21:03 crc kubenswrapper[4786]: I1209 09:21:03.654329 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-pjlq8"]
Dec 09 09:21:04 crc kubenswrapper[4786]: I1209 09:21:04.782318 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pjlq8" event={"ID":"a4285d16-6fd5-45d2-b13c-1ed58d8ca8f1","Type":"ContainerStarted","Data":"340cfb66abcd69e7beab6707857d310960e4a351a8d8a025c8f267bd48e8552b"}
Dec 09 09:21:04 crc kubenswrapper[4786]: I1209 09:21:04.782784 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pjlq8" event={"ID":"a4285d16-6fd5-45d2-b13c-1ed58d8ca8f1","Type":"ContainerStarted","Data":"c0f294a163be87aa3ec5b6b57fed3ea3fe006234a64dd679b81f2be784896b69"}
Dec 09 09:21:13 crc kubenswrapper[4786]: I1209 09:21:13.868348 4786 generic.go:334] "Generic (PLEG): container finished" podID="a4285d16-6fd5-45d2-b13c-1ed58d8ca8f1" containerID="340cfb66abcd69e7beab6707857d310960e4a351a8d8a025c8f267bd48e8552b" exitCode=0
Dec 09 09:21:13 crc kubenswrapper[4786]: I1209 09:21:13.868534 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pjlq8" event={"ID":"a4285d16-6fd5-45d2-b13c-1ed58d8ca8f1","Type":"ContainerDied","Data":"340cfb66abcd69e7beab6707857d310960e4a351a8d8a025c8f267bd48e8552b"}
Dec 09 09:21:15 crc kubenswrapper[4786]: I1209 09:21:15.382027 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pjlq8"
Dec 09 09:21:15 crc kubenswrapper[4786]: I1209 09:21:15.448293 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whb96\" (UniqueName: \"kubernetes.io/projected/a4285d16-6fd5-45d2-b13c-1ed58d8ca8f1-kube-api-access-whb96\") pod \"a4285d16-6fd5-45d2-b13c-1ed58d8ca8f1\" (UID: \"a4285d16-6fd5-45d2-b13c-1ed58d8ca8f1\") "
Dec 09 09:21:15 crc kubenswrapper[4786]: I1209 09:21:15.449189 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4285d16-6fd5-45d2-b13c-1ed58d8ca8f1-inventory\") pod \"a4285d16-6fd5-45d2-b13c-1ed58d8ca8f1\" (UID: \"a4285d16-6fd5-45d2-b13c-1ed58d8ca8f1\") "
Dec 09 09:21:15 crc kubenswrapper[4786]: I1209 09:21:15.449299 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a4285d16-6fd5-45d2-b13c-1ed58d8ca8f1-ssh-key\") pod \"a4285d16-6fd5-45d2-b13c-1ed58d8ca8f1\" (UID: \"a4285d16-6fd5-45d2-b13c-1ed58d8ca8f1\") "
Dec 09 09:21:15 crc kubenswrapper[4786]: I1209 09:21:15.456700 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4285d16-6fd5-45d2-b13c-1ed58d8ca8f1-kube-api-access-whb96" (OuterVolumeSpecName: "kube-api-access-whb96") pod "a4285d16-6fd5-45d2-b13c-1ed58d8ca8f1" (UID: "a4285d16-6fd5-45d2-b13c-1ed58d8ca8f1"). InnerVolumeSpecName "kube-api-access-whb96". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 09:21:15 crc kubenswrapper[4786]: I1209 09:21:15.485279 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4285d16-6fd5-45d2-b13c-1ed58d8ca8f1-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a4285d16-6fd5-45d2-b13c-1ed58d8ca8f1" (UID: "a4285d16-6fd5-45d2-b13c-1ed58d8ca8f1"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 09:21:15 crc kubenswrapper[4786]: I1209 09:21:15.491788 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4285d16-6fd5-45d2-b13c-1ed58d8ca8f1-inventory" (OuterVolumeSpecName: "inventory") pod "a4285d16-6fd5-45d2-b13c-1ed58d8ca8f1" (UID: "a4285d16-6fd5-45d2-b13c-1ed58d8ca8f1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 09:21:15 crc kubenswrapper[4786]: I1209 09:21:15.551901 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whb96\" (UniqueName: \"kubernetes.io/projected/a4285d16-6fd5-45d2-b13c-1ed58d8ca8f1-kube-api-access-whb96\") on node \"crc\" DevicePath \"\""
Dec 09 09:21:15 crc kubenswrapper[4786]: I1209 09:21:15.551969 4786 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a4285d16-6fd5-45d2-b13c-1ed58d8ca8f1-inventory\") on node \"crc\" DevicePath \"\""
Dec 09 09:21:15 crc kubenswrapper[4786]: I1209 09:21:15.551982 4786 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a4285d16-6fd5-45d2-b13c-1ed58d8ca8f1-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 09 09:21:15 crc kubenswrapper[4786]: I1209 09:21:15.893197 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pjlq8" event={"ID":"a4285d16-6fd5-45d2-b13c-1ed58d8ca8f1","Type":"ContainerDied","Data":"c0f294a163be87aa3ec5b6b57fed3ea3fe006234a64dd679b81f2be784896b69"}
Dec 09 09:21:15 crc kubenswrapper[4786]: I1209 09:21:15.893271 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0f294a163be87aa3ec5b6b57fed3ea3fe006234a64dd679b81f2be784896b69"
Dec 09 09:21:15 crc kubenswrapper[4786]: I1209 09:21:15.893302 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pjlq8"
Dec 09 09:21:16 crc kubenswrapper[4786]: I1209 09:21:16.008605 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-th8j9"]
Dec 09 09:21:16 crc kubenswrapper[4786]: E1209 09:21:16.012300 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4285d16-6fd5-45d2-b13c-1ed58d8ca8f1" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Dec 09 09:21:16 crc kubenswrapper[4786]: I1209 09:21:16.012328 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4285d16-6fd5-45d2-b13c-1ed58d8ca8f1" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Dec 09 09:21:16 crc kubenswrapper[4786]: I1209 09:21:16.015198 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4285d16-6fd5-45d2-b13c-1ed58d8ca8f1" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Dec 09 09:21:16 crc kubenswrapper[4786]: I1209 09:21:16.019031 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-th8j9"
Dec 09 09:21:16 crc kubenswrapper[4786]: I1209 09:21:16.024079 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 09 09:21:16 crc kubenswrapper[4786]: I1209 09:21:16.024261 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-247ss"
Dec 09 09:21:16 crc kubenswrapper[4786]: I1209 09:21:16.026855 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 09 09:21:16 crc kubenswrapper[4786]: I1209 09:21:16.027081 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 09 09:21:16 crc kubenswrapper[4786]: I1209 09:21:16.060014 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-th8j9"]
Dec 09 09:21:16 crc kubenswrapper[4786]: I1209 09:21:16.068536 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e5211467-fc31-4051-8e46-6b59e77d217b-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-th8j9\" (UID: \"e5211467-fc31-4051-8e46-6b59e77d217b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-th8j9"
Dec 09 09:21:16 crc kubenswrapper[4786]: I1209 09:21:16.068703 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z796m\" (UniqueName: \"kubernetes.io/projected/e5211467-fc31-4051-8e46-6b59e77d217b-kube-api-access-z796m\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-th8j9\" (UID: \"e5211467-fc31-4051-8e46-6b59e77d217b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-th8j9"
Dec 09 09:21:16 crc kubenswrapper[4786]: I1209 09:21:16.068842 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5211467-fc31-4051-8e46-6b59e77d217b-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-th8j9\" (UID: \"e5211467-fc31-4051-8e46-6b59e77d217b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-th8j9"
Dec 09 09:21:16 crc kubenswrapper[4786]: I1209 09:21:16.171810 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z796m\" (UniqueName: \"kubernetes.io/projected/e5211467-fc31-4051-8e46-6b59e77d217b-kube-api-access-z796m\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-th8j9\" (UID: \"e5211467-fc31-4051-8e46-6b59e77d217b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-th8j9"
Dec 09 09:21:16 crc kubenswrapper[4786]: I1209 09:21:16.172025 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5211467-fc31-4051-8e46-6b59e77d217b-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-th8j9\" (UID: \"e5211467-fc31-4051-8e46-6b59e77d217b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-th8j9"
Dec 09 09:21:16 crc kubenswrapper[4786]: I1209 09:21:16.172085 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e5211467-fc31-4051-8e46-6b59e77d217b-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-th8j9\" (UID: \"e5211467-fc31-4051-8e46-6b59e77d217b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-th8j9"
Dec 09 09:21:16 crc kubenswrapper[4786]: I1209 09:21:16.179011 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5211467-fc31-4051-8e46-6b59e77d217b-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-th8j9\" (UID: \"e5211467-fc31-4051-8e46-6b59e77d217b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-th8j9"
Dec 09 09:21:16 crc kubenswrapper[4786]: I1209 09:21:16.179105 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e5211467-fc31-4051-8e46-6b59e77d217b-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-th8j9\" (UID: \"e5211467-fc31-4051-8e46-6b59e77d217b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-th8j9"
Dec 09 09:21:16 crc kubenswrapper[4786]: I1209 09:21:16.219124 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z796m\" (UniqueName: \"kubernetes.io/projected/e5211467-fc31-4051-8e46-6b59e77d217b-kube-api-access-z796m\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-th8j9\" (UID: \"e5211467-fc31-4051-8e46-6b59e77d217b\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-th8j9"
Dec 09 09:21:16 crc kubenswrapper[4786]: I1209 09:21:16.380918 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-th8j9"
Dec 09 09:21:16 crc kubenswrapper[4786]: I1209 09:21:16.839062 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-th8j9"]
Dec 09 09:21:16 crc kubenswrapper[4786]: I1209 09:21:16.908842 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-th8j9" event={"ID":"e5211467-fc31-4051-8e46-6b59e77d217b","Type":"ContainerStarted","Data":"fab692a45f1a013cedc403f463075d2c38c669884a8c8177931d8c727f0ded07"}
Dec 09 09:21:17 crc kubenswrapper[4786]: I1209 09:21:17.920830 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-th8j9" event={"ID":"e5211467-fc31-4051-8e46-6b59e77d217b","Type":"ContainerStarted","Data":"03533e2cac154ef0bafb6758f0955cc28254803dd43f21ed851a22c26652aadd"}
Dec 09 09:21:17 crc kubenswrapper[4786]: I1209 09:21:17.946606 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-th8j9" podStartSLOduration=2.244146433 podStartE2EDuration="2.945409989s" podCreationTimestamp="2025-12-09 09:21:15 +0000 UTC" firstStartedPulling="2025-12-09 09:21:16.84555709 +0000 UTC m=+2242.729178326" lastFinishedPulling="2025-12-09 09:21:17.546820656 +0000 UTC m=+2243.430441882" observedRunningTime="2025-12-09 09:21:17.940876436 +0000 UTC m=+2243.824497662" watchObservedRunningTime="2025-12-09 09:21:17.945409989 +0000 UTC m=+2243.829031215"
Dec 09 09:21:28 crc kubenswrapper[4786]: I1209 09:21:28.034108 4786 generic.go:334] "Generic (PLEG): container finished" podID="e5211467-fc31-4051-8e46-6b59e77d217b" containerID="03533e2cac154ef0bafb6758f0955cc28254803dd43f21ed851a22c26652aadd" exitCode=0
Dec 09 09:21:28 crc kubenswrapper[4786]: I1209 09:21:28.034205 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-th8j9" event={"ID":"e5211467-fc31-4051-8e46-6b59e77d217b","Type":"ContainerDied","Data":"03533e2cac154ef0bafb6758f0955cc28254803dd43f21ed851a22c26652aadd"}
Dec 09 09:21:29 crc kubenswrapper[4786]: I1209 09:21:29.504177 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-th8j9"
Dec 09 09:21:29 crc kubenswrapper[4786]: I1209 09:21:29.634403 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5211467-fc31-4051-8e46-6b59e77d217b-inventory\") pod \"e5211467-fc31-4051-8e46-6b59e77d217b\" (UID: \"e5211467-fc31-4051-8e46-6b59e77d217b\") "
Dec 09 09:21:29 crc kubenswrapper[4786]: I1209 09:21:29.634666 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z796m\" (UniqueName: \"kubernetes.io/projected/e5211467-fc31-4051-8e46-6b59e77d217b-kube-api-access-z796m\") pod \"e5211467-fc31-4051-8e46-6b59e77d217b\" (UID: \"e5211467-fc31-4051-8e46-6b59e77d217b\") "
Dec 09 09:21:29 crc kubenswrapper[4786]: I1209 09:21:29.634694 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e5211467-fc31-4051-8e46-6b59e77d217b-ssh-key\") pod \"e5211467-fc31-4051-8e46-6b59e77d217b\" (UID: \"e5211467-fc31-4051-8e46-6b59e77d217b\") "
Dec 09 09:21:29 crc kubenswrapper[4786]: I1209 09:21:29.641710 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5211467-fc31-4051-8e46-6b59e77d217b-kube-api-access-z796m" (OuterVolumeSpecName: "kube-api-access-z796m") pod "e5211467-fc31-4051-8e46-6b59e77d217b" (UID: "e5211467-fc31-4051-8e46-6b59e77d217b"). InnerVolumeSpecName "kube-api-access-z796m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 09:21:29 crc kubenswrapper[4786]: I1209 09:21:29.666286 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5211467-fc31-4051-8e46-6b59e77d217b-inventory" (OuterVolumeSpecName: "inventory") pod "e5211467-fc31-4051-8e46-6b59e77d217b" (UID: "e5211467-fc31-4051-8e46-6b59e77d217b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 09:21:29 crc kubenswrapper[4786]: I1209 09:21:29.671671 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5211467-fc31-4051-8e46-6b59e77d217b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e5211467-fc31-4051-8e46-6b59e77d217b" (UID: "e5211467-fc31-4051-8e46-6b59e77d217b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 09:21:29 crc kubenswrapper[4786]: I1209 09:21:29.738074 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z796m\" (UniqueName: \"kubernetes.io/projected/e5211467-fc31-4051-8e46-6b59e77d217b-kube-api-access-z796m\") on node \"crc\" DevicePath \"\""
Dec 09 09:21:29 crc kubenswrapper[4786]: I1209 09:21:29.738485 4786 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e5211467-fc31-4051-8e46-6b59e77d217b-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 09 09:21:29 crc kubenswrapper[4786]: I1209 09:21:29.738501 4786 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e5211467-fc31-4051-8e46-6b59e77d217b-inventory\") on node \"crc\" DevicePath \"\""
Dec 09 09:21:30 crc kubenswrapper[4786]: I1209 09:21:30.058322 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-th8j9" event={"ID":"e5211467-fc31-4051-8e46-6b59e77d217b","Type":"ContainerDied","Data":"fab692a45f1a013cedc403f463075d2c38c669884a8c8177931d8c727f0ded07"}
Dec 09 09:21:30 crc kubenswrapper[4786]: I1209 09:21:30.058648 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fab692a45f1a013cedc403f463075d2c38c669884a8c8177931d8c727f0ded07"
Dec 09 09:21:30 crc kubenswrapper[4786]: I1209 09:21:30.058397 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-th8j9"
Dec 09 09:21:30 crc kubenswrapper[4786]: I1209 09:21:30.143863 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2"]
Dec 09 09:21:30 crc kubenswrapper[4786]: E1209 09:21:30.144355 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5211467-fc31-4051-8e46-6b59e77d217b" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Dec 09 09:21:30 crc kubenswrapper[4786]: I1209 09:21:30.144374 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5211467-fc31-4051-8e46-6b59e77d217b" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Dec 09 09:21:30 crc kubenswrapper[4786]: I1209 09:21:30.144667 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5211467-fc31-4051-8e46-6b59e77d217b" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam"
Dec 09 09:21:30 crc kubenswrapper[4786]: I1209 09:21:30.145408 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2"
Dec 09 09:21:30 crc kubenswrapper[4786]: I1209 09:21:30.147915 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-247ss"
Dec 09 09:21:30 crc kubenswrapper[4786]: I1209 09:21:30.147935 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 09 09:21:30 crc kubenswrapper[4786]: I1209 09:21:30.148124 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0"
Dec 09 09:21:30 crc kubenswrapper[4786]: I1209 09:21:30.148215 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 09 09:21:30 crc kubenswrapper[4786]: I1209 09:21:30.148406 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0"
Dec 09 09:21:30 crc kubenswrapper[4786]: I1209 09:21:30.148517 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0"
Dec 09 09:21:30 crc kubenswrapper[4786]: I1209 09:21:30.151148 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 09 09:21:30 crc kubenswrapper[4786]: I1209 09:21:30.159098 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0"
Dec 09 09:21:30 crc kubenswrapper[4786]: I1209 09:21:30.161732 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2"]
Dec 09 09:21:30 crc kubenswrapper[4786]: I1209 09:21:30.249219 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2\" (UID: \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2"
Dec 09 09:21:30 crc kubenswrapper[4786]: I1209 09:21:30.249327 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2\" (UID: \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2"
Dec 09 09:21:30 crc kubenswrapper[4786]: I1209 09:21:30.249408 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txfhg\" (UniqueName: \"kubernetes.io/projected/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-kube-api-access-txfhg\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2\" (UID: \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2"
Dec 09 09:21:30 crc kubenswrapper[4786]: I1209 09:21:30.249503 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2\" (UID: \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2"
Dec 09 09:21:30 crc kubenswrapper[4786]: I1209 09:21:30.249597 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2\" (UID: \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2"
Dec 09 09:21:30 crc kubenswrapper[4786]: I1209 09:21:30.249659 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2\" (UID: \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2"
Dec 09 09:21:30 crc kubenswrapper[4786]: I1209 09:21:30.249764 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2\" (UID: \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2"
Dec 09 09:21:30 crc kubenswrapper[4786]: I1209 09:21:30.249799 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2\" (UID: \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2"
Dec 09 09:21:30 crc kubenswrapper[4786]: I1209 09:21:30.249910 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2\" (UID: \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2"
Dec 09 09:21:30 crc kubenswrapper[4786]: I1209 09:21:30.250003 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2\" (UID: \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2"
Dec 09 09:21:30 crc kubenswrapper[4786]: I1209 09:21:30.250102 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2\" (UID: \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2"
Dec 09 09:21:30 crc kubenswrapper[4786]: I1209 09:21:30.250144 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2\" (UID: \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2"
Dec 09 09:21:30 crc kubenswrapper[4786]: I1209 09:21:30.250175 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2\" (UID: \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2"
Dec 09 09:21:30 crc kubenswrapper[4786]: I1209 09:21:30.250266 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2\" (UID: \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2"
Dec 09 09:21:30 crc kubenswrapper[4786]: I1209 09:21:30.352533 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2\" (UID: \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2"
Dec 09 09:21:30 crc kubenswrapper[4786]: I1209 09:21:30.352631 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2\" (UID: \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2"
Dec 09 09:21:30 crc kubenswrapper[4786]: I1209 09:21:30.352676 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2\" (UID: \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2"
Dec 09 09:21:30 crc kubenswrapper[4786]: I1209 09:21:30.352743 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2\" (UID: \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2"
Dec 09 09:21:30 crc kubenswrapper[4786]: I1209 09:21:30.352779 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2\" (UID: \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2"
Dec 09 09:21:30 crc kubenswrapper[4786]: I1209 09:21:30.352813 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2\" (UID: \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2"
Dec 09 09:21:30 crc kubenswrapper[4786]: I1209 09:21:30.352868 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2\" (UID: \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2"
Dec 09 09:21:30 crc kubenswrapper[4786]: I1209 09:21:30.352969 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2\" (UID: \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2"
Dec 09 09:21:30 crc kubenswrapper[4786]: I1209 09:21:30.353017 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2\" (UID: \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2"
Dec 09 09:21:30 crc kubenswrapper[4786]: I1209 09:21:30.353060 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2\" (UID: \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2"
Dec 09 09:21:30 crc kubenswrapper[4786]: I1209 09:21:30.353119 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2\" (UID: \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2"
Dec 09 09:21:30 crc kubenswrapper[4786]: I1209 09:21:30.353198 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2\" (UID: \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2"
Dec 09 09:21:30 crc kubenswrapper[4786]: I1209 09:21:30.353252 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2\" (UID: \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2"
Dec 09 09:21:30 crc kubenswrapper[4786]: I1209 09:21:30.353277 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txfhg\" (UniqueName: \"kubernetes.io/projected/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-kube-api-access-txfhg\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2\" (UID: \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2"
Dec 09 09:21:30 crc kubenswrapper[4786]: I1209 09:21:30.358235 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2\" (UID: \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2"
Dec 09 09:21:30 crc kubenswrapper[4786]: I1209 09:21:30.359525 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2\" (UID: \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2"
Dec 09 09:21:30 crc kubenswrapper[4786]: I1209 09:21:30.359819 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2\" (UID: \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2"
Dec 09 09:21:30 crc kubenswrapper[4786]: I1209 09:21:30.359956 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2\" (UID: \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2"
Dec 09 09:21:30 crc kubenswrapper[4786]: I1209 09:21:30.360636 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2\" (UID: \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2"
Dec 09 09:21:30 crc kubenswrapper[4786]: I1209 09:21:30.361158 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName:
\"kubernetes.io/projected/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2\" (UID: \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2" Dec 09 09:21:30 crc kubenswrapper[4786]: I1209 09:21:30.362595 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2\" (UID: \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2" Dec 09 09:21:30 crc kubenswrapper[4786]: I1209 09:21:30.364768 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2\" (UID: \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2" Dec 09 09:21:30 crc kubenswrapper[4786]: I1209 09:21:30.364995 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2\" (UID: \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2" Dec 09 09:21:30 crc kubenswrapper[4786]: I1209 09:21:30.365152 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2\" (UID: 
\"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2" Dec 09 09:21:30 crc kubenswrapper[4786]: I1209 09:21:30.365629 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2\" (UID: \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2" Dec 09 09:21:30 crc kubenswrapper[4786]: I1209 09:21:30.373320 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2\" (UID: \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2" Dec 09 09:21:30 crc kubenswrapper[4786]: I1209 09:21:30.374630 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2\" (UID: \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2" Dec 09 09:21:30 crc kubenswrapper[4786]: I1209 09:21:30.379599 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txfhg\" (UniqueName: \"kubernetes.io/projected/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-kube-api-access-txfhg\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2\" (UID: \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2" Dec 09 09:21:30 crc kubenswrapper[4786]: I1209 09:21:30.461083 
4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2" Dec 09 09:21:31 crc kubenswrapper[4786]: I1209 09:21:31.037279 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2"] Dec 09 09:21:31 crc kubenswrapper[4786]: I1209 09:21:31.069585 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2" event={"ID":"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e","Type":"ContainerStarted","Data":"1933eb06eec938585ec960ca0094f9b82b28fa60091db2b811b2c01919f127e4"} Dec 09 09:21:32 crc kubenswrapper[4786]: I1209 09:21:32.080105 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2" event={"ID":"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e","Type":"ContainerStarted","Data":"48c0fbeeb43ea4dee5fdec52151b1f487083a1fbc7b6a9e0f2b8402ef99ac6fa"} Dec 09 09:21:32 crc kubenswrapper[4786]: I1209 09:21:32.105384 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2" podStartSLOduration=1.704097153 podStartE2EDuration="2.105363218s" podCreationTimestamp="2025-12-09 09:21:30 +0000 UTC" firstStartedPulling="2025-12-09 09:21:31.041042928 +0000 UTC m=+2256.924664154" lastFinishedPulling="2025-12-09 09:21:31.442308993 +0000 UTC m=+2257.325930219" observedRunningTime="2025-12-09 09:21:32.095603146 +0000 UTC m=+2257.979224372" watchObservedRunningTime="2025-12-09 09:21:32.105363218 +0000 UTC m=+2257.988984444" Dec 09 09:22:13 crc kubenswrapper[4786]: I1209 09:22:13.563804 4786 generic.go:334] "Generic (PLEG): container finished" podID="a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e" containerID="48c0fbeeb43ea4dee5fdec52151b1f487083a1fbc7b6a9e0f2b8402ef99ac6fa" exitCode=0 Dec 09 09:22:13 crc kubenswrapper[4786]: I1209 09:22:13.563879 4786 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2" event={"ID":"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e","Type":"ContainerDied","Data":"48c0fbeeb43ea4dee5fdec52151b1f487083a1fbc7b6a9e0f2b8402ef99ac6fa"} Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.028710 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2" Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.105274 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\" (UID: \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\") " Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.105950 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-ssh-key\") pod \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\" (UID: \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\") " Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.106018 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-repo-setup-combined-ca-bundle\") pod \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\" (UID: \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\") " Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.106053 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-ovn-combined-ca-bundle\") pod \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\" (UID: \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\") " Dec 09 
09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.106096 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-libvirt-combined-ca-bundle\") pod \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\" (UID: \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\") " Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.106122 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-bootstrap-combined-ca-bundle\") pod \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\" (UID: \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\") " Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.106263 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-telemetry-combined-ca-bundle\") pod \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\" (UID: \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\") " Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.106296 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-nova-combined-ca-bundle\") pod \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\" (UID: \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\") " Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.106343 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-neutron-metadata-combined-ca-bundle\") pod \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\" (UID: \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\") " Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.106391 4786 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-txfhg\" (UniqueName: \"kubernetes.io/projected/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-kube-api-access-txfhg\") pod \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\" (UID: \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\") " Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.106412 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-inventory\") pod \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\" (UID: \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\") " Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.106559 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\" (UID: \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\") " Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.106616 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-openstack-edpm-ipam-ovn-default-certs-0\") pod \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\" (UID: \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\") " Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.106708 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\" (UID: \"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e\") " Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.116537 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e" (UID: "a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.116688 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e" (UID: "a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.116765 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e" (UID: "a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.117049 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-kube-api-access-txfhg" (OuterVolumeSpecName: "kube-api-access-txfhg") pod "a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e" (UID: "a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e"). InnerVolumeSpecName "kube-api-access-txfhg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.117375 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e" (UID: "a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.117498 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e" (UID: "a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.119339 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e" (UID: "a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.119370 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e" (UID: "a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.119714 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e" (UID: "a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.121151 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e" (UID: "a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.134377 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e" (UID: "a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.134543 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e" (UID: "a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.152818 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-inventory" (OuterVolumeSpecName: "inventory") pod "a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e" (UID: "a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.164642 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e" (UID: "a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.210319 4786 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.210360 4786 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.210378 4786 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.210393 4786 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.210413 4786 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.210441 4786 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.210459 4786 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.210471 4786 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.210483 4786 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.210495 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txfhg\" (UniqueName: \"kubernetes.io/projected/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-kube-api-access-txfhg\") on node \"crc\" DevicePath \"\"" Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.210507 4786 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-inventory\") on node \"crc\" DevicePath \"\"" Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.210524 4786 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.210537 4786 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.210554 4786 
reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.585355 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2" event={"ID":"a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e","Type":"ContainerDied","Data":"1933eb06eec938585ec960ca0094f9b82b28fa60091db2b811b2c01919f127e4"} Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.585410 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1933eb06eec938585ec960ca0094f9b82b28fa60091db2b811b2c01919f127e4" Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.585608 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2" Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.761313 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-jhml4"] Dec 09 09:22:15 crc kubenswrapper[4786]: E1209 09:22:15.762092 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.762142 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.762405 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.763552 4786 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jhml4" Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.768023 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.768397 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-247ss" Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.768408 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.769978 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.770047 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.781305 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-jhml4"] Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.826774 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/38e0ab2d-0650-44b5-bc00-adeb40608783-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jhml4\" (UID: \"38e0ab2d-0650-44b5-bc00-adeb40608783\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jhml4" Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.826835 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38e0ab2d-0650-44b5-bc00-adeb40608783-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jhml4\" (UID: 
\"38e0ab2d-0650-44b5-bc00-adeb40608783\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jhml4" Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.827061 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/38e0ab2d-0650-44b5-bc00-adeb40608783-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jhml4\" (UID: \"38e0ab2d-0650-44b5-bc00-adeb40608783\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jhml4" Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.827391 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38e0ab2d-0650-44b5-bc00-adeb40608783-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jhml4\" (UID: \"38e0ab2d-0650-44b5-bc00-adeb40608783\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jhml4" Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.827762 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6kj9\" (UniqueName: \"kubernetes.io/projected/38e0ab2d-0650-44b5-bc00-adeb40608783-kube-api-access-h6kj9\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jhml4\" (UID: \"38e0ab2d-0650-44b5-bc00-adeb40608783\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jhml4" Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.930450 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/38e0ab2d-0650-44b5-bc00-adeb40608783-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jhml4\" (UID: \"38e0ab2d-0650-44b5-bc00-adeb40608783\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jhml4" Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.930556 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38e0ab2d-0650-44b5-bc00-adeb40608783-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jhml4\" (UID: \"38e0ab2d-0650-44b5-bc00-adeb40608783\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jhml4" Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.930643 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/38e0ab2d-0650-44b5-bc00-adeb40608783-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jhml4\" (UID: \"38e0ab2d-0650-44b5-bc00-adeb40608783\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jhml4" Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.930690 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38e0ab2d-0650-44b5-bc00-adeb40608783-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jhml4\" (UID: \"38e0ab2d-0650-44b5-bc00-adeb40608783\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jhml4" Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.930734 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6kj9\" (UniqueName: \"kubernetes.io/projected/38e0ab2d-0650-44b5-bc00-adeb40608783-kube-api-access-h6kj9\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jhml4\" (UID: \"38e0ab2d-0650-44b5-bc00-adeb40608783\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jhml4" Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.932161 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/38e0ab2d-0650-44b5-bc00-adeb40608783-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jhml4\" (UID: \"38e0ab2d-0650-44b5-bc00-adeb40608783\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jhml4" Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.936350 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38e0ab2d-0650-44b5-bc00-adeb40608783-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jhml4\" (UID: \"38e0ab2d-0650-44b5-bc00-adeb40608783\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jhml4" Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.938252 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/38e0ab2d-0650-44b5-bc00-adeb40608783-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jhml4\" (UID: \"38e0ab2d-0650-44b5-bc00-adeb40608783\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jhml4" Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.941944 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38e0ab2d-0650-44b5-bc00-adeb40608783-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jhml4\" (UID: \"38e0ab2d-0650-44b5-bc00-adeb40608783\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jhml4" Dec 09 09:22:15 crc kubenswrapper[4786]: I1209 09:22:15.956525 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6kj9\" (UniqueName: \"kubernetes.io/projected/38e0ab2d-0650-44b5-bc00-adeb40608783-kube-api-access-h6kj9\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jhml4\" (UID: \"38e0ab2d-0650-44b5-bc00-adeb40608783\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jhml4" Dec 09 09:22:16 crc kubenswrapper[4786]: I1209 09:22:16.099416 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jhml4" Dec 09 09:22:16 crc kubenswrapper[4786]: I1209 09:22:16.706259 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-jhml4"] Dec 09 09:22:17 crc kubenswrapper[4786]: I1209 09:22:17.609483 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jhml4" event={"ID":"38e0ab2d-0650-44b5-bc00-adeb40608783","Type":"ContainerStarted","Data":"b316d0ef133cf3611fe7c5def7cafc311fdebc2575afb07f724f208fa26147e6"} Dec 09 09:22:17 crc kubenswrapper[4786]: I1209 09:22:17.610031 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jhml4" event={"ID":"38e0ab2d-0650-44b5-bc00-adeb40608783","Type":"ContainerStarted","Data":"4b9ea2a6f89159673c56eb8cd5b790e2163ff72b5f3892ccc94b073787b9978b"} Dec 09 09:22:17 crc kubenswrapper[4786]: I1209 09:22:17.663450 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jhml4" podStartSLOduration=2.059865031 podStartE2EDuration="2.663402151s" podCreationTimestamp="2025-12-09 09:22:15 +0000 UTC" firstStartedPulling="2025-12-09 09:22:16.718271384 +0000 UTC m=+2302.601892610" lastFinishedPulling="2025-12-09 09:22:17.321808514 +0000 UTC m=+2303.205429730" observedRunningTime="2025-12-09 09:22:17.654936438 +0000 UTC m=+2303.538557664" watchObservedRunningTime="2025-12-09 09:22:17.663402151 +0000 UTC m=+2303.547023377" Dec 09 09:22:54 crc kubenswrapper[4786]: I1209 09:22:54.988950 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 09:22:54 crc kubenswrapper[4786]: I1209 09:22:54.989524 4786 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 09:23:24 crc kubenswrapper[4786]: I1209 09:23:24.990500 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 09:23:24 crc kubenswrapper[4786]: I1209 09:23:24.991018 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 09:23:29 crc kubenswrapper[4786]: I1209 09:23:29.415321 4786 generic.go:334] "Generic (PLEG): container finished" podID="38e0ab2d-0650-44b5-bc00-adeb40608783" containerID="b316d0ef133cf3611fe7c5def7cafc311fdebc2575afb07f724f208fa26147e6" exitCode=0 Dec 09 09:23:29 crc kubenswrapper[4786]: I1209 09:23:29.415389 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jhml4" event={"ID":"38e0ab2d-0650-44b5-bc00-adeb40608783","Type":"ContainerDied","Data":"b316d0ef133cf3611fe7c5def7cafc311fdebc2575afb07f724f208fa26147e6"} Dec 09 09:23:30 crc kubenswrapper[4786]: I1209 09:23:30.858275 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jhml4" Dec 09 09:23:30 crc kubenswrapper[4786]: I1209 09:23:30.963023 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38e0ab2d-0650-44b5-bc00-adeb40608783-inventory\") pod \"38e0ab2d-0650-44b5-bc00-adeb40608783\" (UID: \"38e0ab2d-0650-44b5-bc00-adeb40608783\") " Dec 09 09:23:30 crc kubenswrapper[4786]: I1209 09:23:30.963231 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/38e0ab2d-0650-44b5-bc00-adeb40608783-ovncontroller-config-0\") pod \"38e0ab2d-0650-44b5-bc00-adeb40608783\" (UID: \"38e0ab2d-0650-44b5-bc00-adeb40608783\") " Dec 09 09:23:30 crc kubenswrapper[4786]: I1209 09:23:30.963495 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38e0ab2d-0650-44b5-bc00-adeb40608783-ovn-combined-ca-bundle\") pod \"38e0ab2d-0650-44b5-bc00-adeb40608783\" (UID: \"38e0ab2d-0650-44b5-bc00-adeb40608783\") " Dec 09 09:23:30 crc kubenswrapper[4786]: I1209 09:23:30.963559 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/38e0ab2d-0650-44b5-bc00-adeb40608783-ssh-key\") pod \"38e0ab2d-0650-44b5-bc00-adeb40608783\" (UID: \"38e0ab2d-0650-44b5-bc00-adeb40608783\") " Dec 09 09:23:30 crc kubenswrapper[4786]: I1209 09:23:30.963633 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6kj9\" (UniqueName: \"kubernetes.io/projected/38e0ab2d-0650-44b5-bc00-adeb40608783-kube-api-access-h6kj9\") pod \"38e0ab2d-0650-44b5-bc00-adeb40608783\" (UID: \"38e0ab2d-0650-44b5-bc00-adeb40608783\") " Dec 09 09:23:30 crc kubenswrapper[4786]: I1209 09:23:30.970611 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/secret/38e0ab2d-0650-44b5-bc00-adeb40608783-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "38e0ab2d-0650-44b5-bc00-adeb40608783" (UID: "38e0ab2d-0650-44b5-bc00-adeb40608783"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:23:30 crc kubenswrapper[4786]: I1209 09:23:30.972664 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38e0ab2d-0650-44b5-bc00-adeb40608783-kube-api-access-h6kj9" (OuterVolumeSpecName: "kube-api-access-h6kj9") pod "38e0ab2d-0650-44b5-bc00-adeb40608783" (UID: "38e0ab2d-0650-44b5-bc00-adeb40608783"). InnerVolumeSpecName "kube-api-access-h6kj9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:23:30 crc kubenswrapper[4786]: I1209 09:23:30.997685 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38e0ab2d-0650-44b5-bc00-adeb40608783-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "38e0ab2d-0650-44b5-bc00-adeb40608783" (UID: "38e0ab2d-0650-44b5-bc00-adeb40608783"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:23:30 crc kubenswrapper[4786]: I1209 09:23:30.997806 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38e0ab2d-0650-44b5-bc00-adeb40608783-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "38e0ab2d-0650-44b5-bc00-adeb40608783" (UID: "38e0ab2d-0650-44b5-bc00-adeb40608783"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:23:30 crc kubenswrapper[4786]: I1209 09:23:30.998453 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38e0ab2d-0650-44b5-bc00-adeb40608783-inventory" (OuterVolumeSpecName: "inventory") pod "38e0ab2d-0650-44b5-bc00-adeb40608783" (UID: "38e0ab2d-0650-44b5-bc00-adeb40608783"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:23:31 crc kubenswrapper[4786]: I1209 09:23:31.068775 4786 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38e0ab2d-0650-44b5-bc00-adeb40608783-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 09:23:31 crc kubenswrapper[4786]: I1209 09:23:31.068849 4786 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/38e0ab2d-0650-44b5-bc00-adeb40608783-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 09 09:23:31 crc kubenswrapper[4786]: I1209 09:23:31.068867 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6kj9\" (UniqueName: \"kubernetes.io/projected/38e0ab2d-0650-44b5-bc00-adeb40608783-kube-api-access-h6kj9\") on node \"crc\" DevicePath \"\"" Dec 09 09:23:31 crc kubenswrapper[4786]: I1209 09:23:31.068881 4786 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/38e0ab2d-0650-44b5-bc00-adeb40608783-inventory\") on node \"crc\" DevicePath \"\"" Dec 09 09:23:31 crc kubenswrapper[4786]: I1209 09:23:31.068911 4786 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/38e0ab2d-0650-44b5-bc00-adeb40608783-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Dec 09 09:23:31 crc kubenswrapper[4786]: I1209 09:23:31.446004 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jhml4" event={"ID":"38e0ab2d-0650-44b5-bc00-adeb40608783","Type":"ContainerDied","Data":"4b9ea2a6f89159673c56eb8cd5b790e2163ff72b5f3892ccc94b073787b9978b"} Dec 09 09:23:31 crc kubenswrapper[4786]: I1209 09:23:31.446588 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b9ea2a6f89159673c56eb8cd5b790e2163ff72b5f3892ccc94b073787b9978b" Dec 09 09:23:31 crc kubenswrapper[4786]: I1209 09:23:31.446106 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jhml4" Dec 09 09:23:31 crc kubenswrapper[4786]: I1209 09:23:31.576583 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cb6wn"] Dec 09 09:23:31 crc kubenswrapper[4786]: E1209 09:23:31.577152 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38e0ab2d-0650-44b5-bc00-adeb40608783" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 09 09:23:31 crc kubenswrapper[4786]: I1209 09:23:31.577182 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="38e0ab2d-0650-44b5-bc00-adeb40608783" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 09 09:23:31 crc kubenswrapper[4786]: I1209 09:23:31.577519 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="38e0ab2d-0650-44b5-bc00-adeb40608783" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 09 09:23:31 crc kubenswrapper[4786]: I1209 09:23:31.578489 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cb6wn" Dec 09 09:23:31 crc kubenswrapper[4786]: I1209 09:23:31.582366 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-247ss" Dec 09 09:23:31 crc kubenswrapper[4786]: I1209 09:23:31.582569 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 09 09:23:31 crc kubenswrapper[4786]: I1209 09:23:31.582982 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Dec 09 09:23:31 crc kubenswrapper[4786]: I1209 09:23:31.583139 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 09 09:23:31 crc kubenswrapper[4786]: I1209 09:23:31.583319 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 09 09:23:31 crc kubenswrapper[4786]: I1209 09:23:31.589169 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cb6wn"] Dec 09 09:23:31 crc kubenswrapper[4786]: I1209 09:23:31.589843 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Dec 09 09:23:31 crc kubenswrapper[4786]: I1209 09:23:31.681872 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/17753443-b80a-43e1-9256-b7c0f392dad5-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cb6wn\" (UID: \"17753443-b80a-43e1-9256-b7c0f392dad5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cb6wn" Dec 09 09:23:31 crc kubenswrapper[4786]: I1209 09:23:31.681971 4786 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/17753443-b80a-43e1-9256-b7c0f392dad5-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cb6wn\" (UID: \"17753443-b80a-43e1-9256-b7c0f392dad5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cb6wn" Dec 09 09:23:31 crc kubenswrapper[4786]: I1209 09:23:31.682060 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17753443-b80a-43e1-9256-b7c0f392dad5-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cb6wn\" (UID: \"17753443-b80a-43e1-9256-b7c0f392dad5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cb6wn" Dec 09 09:23:31 crc kubenswrapper[4786]: I1209 09:23:31.682101 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17753443-b80a-43e1-9256-b7c0f392dad5-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cb6wn\" (UID: \"17753443-b80a-43e1-9256-b7c0f392dad5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cb6wn" Dec 09 09:23:31 crc kubenswrapper[4786]: I1209 09:23:31.682476 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/17753443-b80a-43e1-9256-b7c0f392dad5-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cb6wn\" (UID: \"17753443-b80a-43e1-9256-b7c0f392dad5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cb6wn" Dec 09 09:23:31 crc kubenswrapper[4786]: I1209 09:23:31.682555 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-jsxh6\" (UniqueName: \"kubernetes.io/projected/17753443-b80a-43e1-9256-b7c0f392dad5-kube-api-access-jsxh6\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cb6wn\" (UID: \"17753443-b80a-43e1-9256-b7c0f392dad5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cb6wn" Dec 09 09:23:31 crc kubenswrapper[4786]: I1209 09:23:31.784540 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/17753443-b80a-43e1-9256-b7c0f392dad5-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cb6wn\" (UID: \"17753443-b80a-43e1-9256-b7c0f392dad5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cb6wn" Dec 09 09:23:31 crc kubenswrapper[4786]: I1209 09:23:31.784597 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jsxh6\" (UniqueName: \"kubernetes.io/projected/17753443-b80a-43e1-9256-b7c0f392dad5-kube-api-access-jsxh6\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cb6wn\" (UID: \"17753443-b80a-43e1-9256-b7c0f392dad5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cb6wn" Dec 09 09:23:31 crc kubenswrapper[4786]: I1209 09:23:31.784681 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/17753443-b80a-43e1-9256-b7c0f392dad5-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cb6wn\" (UID: \"17753443-b80a-43e1-9256-b7c0f392dad5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cb6wn" Dec 09 09:23:31 crc kubenswrapper[4786]: I1209 09:23:31.784742 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: 
\"kubernetes.io/secret/17753443-b80a-43e1-9256-b7c0f392dad5-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cb6wn\" (UID: \"17753443-b80a-43e1-9256-b7c0f392dad5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cb6wn" Dec 09 09:23:31 crc kubenswrapper[4786]: I1209 09:23:31.784820 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17753443-b80a-43e1-9256-b7c0f392dad5-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cb6wn\" (UID: \"17753443-b80a-43e1-9256-b7c0f392dad5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cb6wn" Dec 09 09:23:31 crc kubenswrapper[4786]: I1209 09:23:31.784858 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17753443-b80a-43e1-9256-b7c0f392dad5-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cb6wn\" (UID: \"17753443-b80a-43e1-9256-b7c0f392dad5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cb6wn" Dec 09 09:23:31 crc kubenswrapper[4786]: I1209 09:23:31.790541 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/17753443-b80a-43e1-9256-b7c0f392dad5-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cb6wn\" (UID: \"17753443-b80a-43e1-9256-b7c0f392dad5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cb6wn" Dec 09 09:23:31 crc kubenswrapper[4786]: I1209 09:23:31.791656 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17753443-b80a-43e1-9256-b7c0f392dad5-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cb6wn\" (UID: 
\"17753443-b80a-43e1-9256-b7c0f392dad5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cb6wn" Dec 09 09:23:31 crc kubenswrapper[4786]: I1209 09:23:31.792246 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17753443-b80a-43e1-9256-b7c0f392dad5-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cb6wn\" (UID: \"17753443-b80a-43e1-9256-b7c0f392dad5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cb6wn" Dec 09 09:23:31 crc kubenswrapper[4786]: I1209 09:23:31.792871 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/17753443-b80a-43e1-9256-b7c0f392dad5-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cb6wn\" (UID: \"17753443-b80a-43e1-9256-b7c0f392dad5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cb6wn" Dec 09 09:23:31 crc kubenswrapper[4786]: I1209 09:23:31.793184 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/17753443-b80a-43e1-9256-b7c0f392dad5-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cb6wn\" (UID: \"17753443-b80a-43e1-9256-b7c0f392dad5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cb6wn" Dec 09 09:23:31 crc kubenswrapper[4786]: I1209 09:23:31.805916 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jsxh6\" (UniqueName: \"kubernetes.io/projected/17753443-b80a-43e1-9256-b7c0f392dad5-kube-api-access-jsxh6\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-cb6wn\" (UID: \"17753443-b80a-43e1-9256-b7c0f392dad5\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cb6wn" Dec 09 09:23:31 crc 
kubenswrapper[4786]: I1209 09:23:31.900418 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cb6wn" Dec 09 09:23:32 crc kubenswrapper[4786]: I1209 09:23:32.473108 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cb6wn"] Dec 09 09:23:32 crc kubenswrapper[4786]: W1209 09:23:32.479227 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17753443_b80a_43e1_9256_b7c0f392dad5.slice/crio-c94b461a8a9c81dc542e146acb17ed658fcec76aa621bfc6c12463ecec5f2b5e WatchSource:0}: Error finding container c94b461a8a9c81dc542e146acb17ed658fcec76aa621bfc6c12463ecec5f2b5e: Status 404 returned error can't find the container with id c94b461a8a9c81dc542e146acb17ed658fcec76aa621bfc6c12463ecec5f2b5e Dec 09 09:23:32 crc kubenswrapper[4786]: I1209 09:23:32.483747 4786 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 09:23:33 crc kubenswrapper[4786]: I1209 09:23:33.469283 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cb6wn" event={"ID":"17753443-b80a-43e1-9256-b7c0f392dad5","Type":"ContainerStarted","Data":"dff808f6da1b7cf1bc12481906c08befe00f73aa476770805002b1b73b117096"} Dec 09 09:23:33 crc kubenswrapper[4786]: I1209 09:23:33.469807 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cb6wn" event={"ID":"17753443-b80a-43e1-9256-b7c0f392dad5","Type":"ContainerStarted","Data":"c94b461a8a9c81dc542e146acb17ed658fcec76aa621bfc6c12463ecec5f2b5e"} Dec 09 09:23:54 crc kubenswrapper[4786]: I1209 09:23:54.989370 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: 
Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 09:23:54 crc kubenswrapper[4786]: I1209 09:23:54.990167 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 09:23:54 crc kubenswrapper[4786]: I1209 09:23:54.990238 4786 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" Dec 09 09:23:54 crc kubenswrapper[4786]: I1209 09:23:54.991203 4786 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"10574053a93a4c7cf48dacebbc753cce1f320aa5c82046a5334cb5c07c4a4872"} pod="openshift-machine-config-operator/machine-config-daemon-86k5n" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 09:23:54 crc kubenswrapper[4786]: I1209 09:23:54.991276 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" containerID="cri-o://10574053a93a4c7cf48dacebbc753cce1f320aa5c82046a5334cb5c07c4a4872" gracePeriod=600 Dec 09 09:23:55 crc kubenswrapper[4786]: E1209 09:23:55.127687 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:23:55 crc kubenswrapper[4786]: I1209 09:23:55.696834 4786 generic.go:334] "Generic (PLEG): container finished" podID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerID="10574053a93a4c7cf48dacebbc753cce1f320aa5c82046a5334cb5c07c4a4872" exitCode=0 Dec 09 09:23:55 crc kubenswrapper[4786]: I1209 09:23:55.696901 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" event={"ID":"60c502d4-7f9e-4d39-a197-fa70dc4a56d1","Type":"ContainerDied","Data":"10574053a93a4c7cf48dacebbc753cce1f320aa5c82046a5334cb5c07c4a4872"} Dec 09 09:23:55 crc kubenswrapper[4786]: I1209 09:23:55.696977 4786 scope.go:117] "RemoveContainer" containerID="fc569af6322bbbb8d4af41759fe3e9f8f89e3ce11ef3ade02346217ef0910593" Dec 09 09:23:55 crc kubenswrapper[4786]: I1209 09:23:55.697971 4786 scope.go:117] "RemoveContainer" containerID="10574053a93a4c7cf48dacebbc753cce1f320aa5c82046a5334cb5c07c4a4872" Dec 09 09:23:55 crc kubenswrapper[4786]: E1209 09:23:55.698395 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:23:55 crc kubenswrapper[4786]: I1209 09:23:55.737947 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cb6wn" podStartSLOduration=24.349811956 podStartE2EDuration="24.737923721s" podCreationTimestamp="2025-12-09 09:23:31 +0000 UTC" firstStartedPulling="2025-12-09 09:23:32.483451449 +0000 UTC m=+2378.367072685" 
lastFinishedPulling="2025-12-09 09:23:32.871563224 +0000 UTC m=+2378.755184450" observedRunningTime="2025-12-09 09:23:33.492452869 +0000 UTC m=+2379.376074095" watchObservedRunningTime="2025-12-09 09:23:55.737923721 +0000 UTC m=+2401.621544957" Dec 09 09:24:08 crc kubenswrapper[4786]: I1209 09:24:08.188525 4786 scope.go:117] "RemoveContainer" containerID="10574053a93a4c7cf48dacebbc753cce1f320aa5c82046a5334cb5c07c4a4872" Dec 09 09:24:08 crc kubenswrapper[4786]: E1209 09:24:08.189714 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:24:23 crc kubenswrapper[4786]: I1209 09:24:23.188503 4786 scope.go:117] "RemoveContainer" containerID="10574053a93a4c7cf48dacebbc753cce1f320aa5c82046a5334cb5c07c4a4872" Dec 09 09:24:23 crc kubenswrapper[4786]: E1209 09:24:23.189297 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:24:25 crc kubenswrapper[4786]: I1209 09:24:25.010197 4786 generic.go:334] "Generic (PLEG): container finished" podID="17753443-b80a-43e1-9256-b7c0f392dad5" containerID="dff808f6da1b7cf1bc12481906c08befe00f73aa476770805002b1b73b117096" exitCode=0 Dec 09 09:24:25 crc kubenswrapper[4786]: I1209 09:24:25.010290 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cb6wn" event={"ID":"17753443-b80a-43e1-9256-b7c0f392dad5","Type":"ContainerDied","Data":"dff808f6da1b7cf1bc12481906c08befe00f73aa476770805002b1b73b117096"} Dec 09 09:24:26 crc kubenswrapper[4786]: I1209 09:24:26.426537 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cb6wn" Dec 09 09:24:26 crc kubenswrapper[4786]: I1209 09:24:26.600056 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jsxh6\" (UniqueName: \"kubernetes.io/projected/17753443-b80a-43e1-9256-b7c0f392dad5-kube-api-access-jsxh6\") pod \"17753443-b80a-43e1-9256-b7c0f392dad5\" (UID: \"17753443-b80a-43e1-9256-b7c0f392dad5\") " Dec 09 09:24:26 crc kubenswrapper[4786]: I1209 09:24:26.600232 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/17753443-b80a-43e1-9256-b7c0f392dad5-neutron-ovn-metadata-agent-neutron-config-0\") pod \"17753443-b80a-43e1-9256-b7c0f392dad5\" (UID: \"17753443-b80a-43e1-9256-b7c0f392dad5\") " Dec 09 09:24:26 crc kubenswrapper[4786]: I1209 09:24:26.600284 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17753443-b80a-43e1-9256-b7c0f392dad5-inventory\") pod \"17753443-b80a-43e1-9256-b7c0f392dad5\" (UID: \"17753443-b80a-43e1-9256-b7c0f392dad5\") " Dec 09 09:24:26 crc kubenswrapper[4786]: I1209 09:24:26.600329 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/17753443-b80a-43e1-9256-b7c0f392dad5-ssh-key\") pod \"17753443-b80a-43e1-9256-b7c0f392dad5\" (UID: \"17753443-b80a-43e1-9256-b7c0f392dad5\") " Dec 09 09:24:26 crc kubenswrapper[4786]: I1209 09:24:26.600387 4786 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17753443-b80a-43e1-9256-b7c0f392dad5-neutron-metadata-combined-ca-bundle\") pod \"17753443-b80a-43e1-9256-b7c0f392dad5\" (UID: \"17753443-b80a-43e1-9256-b7c0f392dad5\") " Dec 09 09:24:26 crc kubenswrapper[4786]: I1209 09:24:26.600462 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/17753443-b80a-43e1-9256-b7c0f392dad5-nova-metadata-neutron-config-0\") pod \"17753443-b80a-43e1-9256-b7c0f392dad5\" (UID: \"17753443-b80a-43e1-9256-b7c0f392dad5\") " Dec 09 09:24:26 crc kubenswrapper[4786]: I1209 09:24:26.607116 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17753443-b80a-43e1-9256-b7c0f392dad5-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "17753443-b80a-43e1-9256-b7c0f392dad5" (UID: "17753443-b80a-43e1-9256-b7c0f392dad5"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:24:26 crc kubenswrapper[4786]: I1209 09:24:26.607266 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17753443-b80a-43e1-9256-b7c0f392dad5-kube-api-access-jsxh6" (OuterVolumeSpecName: "kube-api-access-jsxh6") pod "17753443-b80a-43e1-9256-b7c0f392dad5" (UID: "17753443-b80a-43e1-9256-b7c0f392dad5"). InnerVolumeSpecName "kube-api-access-jsxh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:24:26 crc kubenswrapper[4786]: I1209 09:24:26.629485 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17753443-b80a-43e1-9256-b7c0f392dad5-inventory" (OuterVolumeSpecName: "inventory") pod "17753443-b80a-43e1-9256-b7c0f392dad5" (UID: "17753443-b80a-43e1-9256-b7c0f392dad5"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:24:26 crc kubenswrapper[4786]: I1209 09:24:26.639077 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17753443-b80a-43e1-9256-b7c0f392dad5-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "17753443-b80a-43e1-9256-b7c0f392dad5" (UID: "17753443-b80a-43e1-9256-b7c0f392dad5"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:24:26 crc kubenswrapper[4786]: I1209 09:24:26.639436 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17753443-b80a-43e1-9256-b7c0f392dad5-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "17753443-b80a-43e1-9256-b7c0f392dad5" (UID: "17753443-b80a-43e1-9256-b7c0f392dad5"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:24:26 crc kubenswrapper[4786]: I1209 09:24:26.640967 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17753443-b80a-43e1-9256-b7c0f392dad5-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "17753443-b80a-43e1-9256-b7c0f392dad5" (UID: "17753443-b80a-43e1-9256-b7c0f392dad5"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:24:26 crc kubenswrapper[4786]: I1209 09:24:26.703911 4786 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/17753443-b80a-43e1-9256-b7c0f392dad5-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 09 09:24:26 crc kubenswrapper[4786]: I1209 09:24:26.703948 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jsxh6\" (UniqueName: \"kubernetes.io/projected/17753443-b80a-43e1-9256-b7c0f392dad5-kube-api-access-jsxh6\") on node \"crc\" DevicePath \"\"" Dec 09 09:24:26 crc kubenswrapper[4786]: I1209 09:24:26.703959 4786 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/17753443-b80a-43e1-9256-b7c0f392dad5-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 09 09:24:26 crc kubenswrapper[4786]: I1209 09:24:26.703970 4786 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/17753443-b80a-43e1-9256-b7c0f392dad5-inventory\") on node \"crc\" DevicePath \"\"" Dec 09 09:24:26 crc kubenswrapper[4786]: I1209 09:24:26.703981 4786 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/17753443-b80a-43e1-9256-b7c0f392dad5-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 09 09:24:26 crc kubenswrapper[4786]: I1209 09:24:26.703989 4786 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17753443-b80a-43e1-9256-b7c0f392dad5-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 09:24:27 crc kubenswrapper[4786]: I1209 09:24:27.033492 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cb6wn" 
event={"ID":"17753443-b80a-43e1-9256-b7c0f392dad5","Type":"ContainerDied","Data":"c94b461a8a9c81dc542e146acb17ed658fcec76aa621bfc6c12463ecec5f2b5e"} Dec 09 09:24:27 crc kubenswrapper[4786]: I1209 09:24:27.033549 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-cb6wn" Dec 09 09:24:27 crc kubenswrapper[4786]: I1209 09:24:27.033565 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c94b461a8a9c81dc542e146acb17ed658fcec76aa621bfc6c12463ecec5f2b5e" Dec 09 09:24:27 crc kubenswrapper[4786]: I1209 09:24:27.143134 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hslht"] Dec 09 09:24:27 crc kubenswrapper[4786]: E1209 09:24:27.143711 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17753443-b80a-43e1-9256-b7c0f392dad5" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 09 09:24:27 crc kubenswrapper[4786]: I1209 09:24:27.143737 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="17753443-b80a-43e1-9256-b7c0f392dad5" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 09 09:24:27 crc kubenswrapper[4786]: I1209 09:24:27.144024 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="17753443-b80a-43e1-9256-b7c0f392dad5" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 09 09:24:27 crc kubenswrapper[4786]: I1209 09:24:27.144911 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hslht" Dec 09 09:24:27 crc kubenswrapper[4786]: I1209 09:24:27.149929 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Dec 09 09:24:27 crc kubenswrapper[4786]: I1209 09:24:27.149997 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 09 09:24:27 crc kubenswrapper[4786]: I1209 09:24:27.150313 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-247ss" Dec 09 09:24:27 crc kubenswrapper[4786]: I1209 09:24:27.150526 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 09 09:24:27 crc kubenswrapper[4786]: I1209 09:24:27.150734 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 09 09:24:27 crc kubenswrapper[4786]: I1209 09:24:27.153230 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hslht"] Dec 09 09:24:27 crc kubenswrapper[4786]: I1209 09:24:27.316834 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/e0e78e74-699b-442e-a5bd-6c598b2e0fb4-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hslht\" (UID: \"e0e78e74-699b-442e-a5bd-6c598b2e0fb4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hslht" Dec 09 09:24:27 crc kubenswrapper[4786]: I1209 09:24:27.316933 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0e78e74-699b-442e-a5bd-6c598b2e0fb4-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hslht\" (UID: \"e0e78e74-699b-442e-a5bd-6c598b2e0fb4\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hslht" Dec 09 09:24:27 crc kubenswrapper[4786]: I1209 09:24:27.317039 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh4dw\" (UniqueName: \"kubernetes.io/projected/e0e78e74-699b-442e-a5bd-6c598b2e0fb4-kube-api-access-bh4dw\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hslht\" (UID: \"e0e78e74-699b-442e-a5bd-6c598b2e0fb4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hslht" Dec 09 09:24:27 crc kubenswrapper[4786]: I1209 09:24:27.317123 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0e78e74-699b-442e-a5bd-6c598b2e0fb4-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hslht\" (UID: \"e0e78e74-699b-442e-a5bd-6c598b2e0fb4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hslht" Dec 09 09:24:27 crc kubenswrapper[4786]: I1209 09:24:27.317228 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e0e78e74-699b-442e-a5bd-6c598b2e0fb4-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hslht\" (UID: \"e0e78e74-699b-442e-a5bd-6c598b2e0fb4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hslht" Dec 09 09:24:27 crc kubenswrapper[4786]: I1209 09:24:27.419791 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bh4dw\" (UniqueName: \"kubernetes.io/projected/e0e78e74-699b-442e-a5bd-6c598b2e0fb4-kube-api-access-bh4dw\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hslht\" (UID: \"e0e78e74-699b-442e-a5bd-6c598b2e0fb4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hslht" Dec 09 09:24:27 crc kubenswrapper[4786]: I1209 09:24:27.419911 4786 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0e78e74-699b-442e-a5bd-6c598b2e0fb4-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hslht\" (UID: \"e0e78e74-699b-442e-a5bd-6c598b2e0fb4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hslht" Dec 09 09:24:27 crc kubenswrapper[4786]: I1209 09:24:27.420006 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e0e78e74-699b-442e-a5bd-6c598b2e0fb4-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hslht\" (UID: \"e0e78e74-699b-442e-a5bd-6c598b2e0fb4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hslht" Dec 09 09:24:27 crc kubenswrapper[4786]: I1209 09:24:27.420176 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/e0e78e74-699b-442e-a5bd-6c598b2e0fb4-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hslht\" (UID: \"e0e78e74-699b-442e-a5bd-6c598b2e0fb4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hslht" Dec 09 09:24:27 crc kubenswrapper[4786]: I1209 09:24:27.420395 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0e78e74-699b-442e-a5bd-6c598b2e0fb4-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hslht\" (UID: \"e0e78e74-699b-442e-a5bd-6c598b2e0fb4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hslht" Dec 09 09:24:27 crc kubenswrapper[4786]: I1209 09:24:27.424296 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e0e78e74-699b-442e-a5bd-6c598b2e0fb4-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hslht\" (UID: \"e0e78e74-699b-442e-a5bd-6c598b2e0fb4\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hslht" Dec 09 09:24:27 crc kubenswrapper[4786]: I1209 09:24:27.424320 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0e78e74-699b-442e-a5bd-6c598b2e0fb4-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hslht\" (UID: \"e0e78e74-699b-442e-a5bd-6c598b2e0fb4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hslht" Dec 09 09:24:27 crc kubenswrapper[4786]: I1209 09:24:27.425036 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/e0e78e74-699b-442e-a5bd-6c598b2e0fb4-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hslht\" (UID: \"e0e78e74-699b-442e-a5bd-6c598b2e0fb4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hslht" Dec 09 09:24:27 crc kubenswrapper[4786]: I1209 09:24:27.436464 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0e78e74-699b-442e-a5bd-6c598b2e0fb4-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hslht\" (UID: \"e0e78e74-699b-442e-a5bd-6c598b2e0fb4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hslht" Dec 09 09:24:27 crc kubenswrapper[4786]: I1209 09:24:27.444226 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bh4dw\" (UniqueName: \"kubernetes.io/projected/e0e78e74-699b-442e-a5bd-6c598b2e0fb4-kube-api-access-bh4dw\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-hslht\" (UID: \"e0e78e74-699b-442e-a5bd-6c598b2e0fb4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hslht" Dec 09 09:24:27 crc kubenswrapper[4786]: I1209 09:24:27.472013 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hslht" Dec 09 09:24:28 crc kubenswrapper[4786]: I1209 09:24:28.089687 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hslht"] Dec 09 09:24:29 crc kubenswrapper[4786]: I1209 09:24:29.055115 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hslht" event={"ID":"e0e78e74-699b-442e-a5bd-6c598b2e0fb4","Type":"ContainerStarted","Data":"ddf1a0ca5c4bf5b4550de0b6205915660fcd57c20b300fe8426b323ca71555d6"} Dec 09 09:24:29 crc kubenswrapper[4786]: I1209 09:24:29.055728 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hslht" event={"ID":"e0e78e74-699b-442e-a5bd-6c598b2e0fb4","Type":"ContainerStarted","Data":"f508abf6f4ac556347cff75abe07e4394bf1861f0d8148a84f9a9e2b2884726c"} Dec 09 09:24:34 crc kubenswrapper[4786]: I1209 09:24:34.189063 4786 scope.go:117] "RemoveContainer" containerID="10574053a93a4c7cf48dacebbc753cce1f320aa5c82046a5334cb5c07c4a4872" Dec 09 09:24:34 crc kubenswrapper[4786]: E1209 09:24:34.190351 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:24:45 crc kubenswrapper[4786]: I1209 09:24:45.198452 4786 scope.go:117] "RemoveContainer" containerID="10574053a93a4c7cf48dacebbc753cce1f320aa5c82046a5334cb5c07c4a4872" Dec 09 09:24:45 crc kubenswrapper[4786]: E1209 09:24:45.199378 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:24:59 crc kubenswrapper[4786]: I1209 09:24:59.188861 4786 scope.go:117] "RemoveContainer" containerID="10574053a93a4c7cf48dacebbc753cce1f320aa5c82046a5334cb5c07c4a4872" Dec 09 09:24:59 crc kubenswrapper[4786]: E1209 09:24:59.189612 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:25:12 crc kubenswrapper[4786]: I1209 09:25:12.190643 4786 scope.go:117] "RemoveContainer" containerID="10574053a93a4c7cf48dacebbc753cce1f320aa5c82046a5334cb5c07c4a4872" Dec 09 09:25:12 crc kubenswrapper[4786]: E1209 09:25:12.191337 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:25:26 crc kubenswrapper[4786]: I1209 09:25:26.188166 4786 scope.go:117] "RemoveContainer" containerID="10574053a93a4c7cf48dacebbc753cce1f320aa5c82046a5334cb5c07c4a4872" Dec 09 09:25:26 crc kubenswrapper[4786]: E1209 09:25:26.188932 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:25:40 crc kubenswrapper[4786]: I1209 09:25:40.189101 4786 scope.go:117] "RemoveContainer" containerID="10574053a93a4c7cf48dacebbc753cce1f320aa5c82046a5334cb5c07c4a4872" Dec 09 09:25:40 crc kubenswrapper[4786]: E1209 09:25:40.189945 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:25:53 crc kubenswrapper[4786]: I1209 09:25:53.188816 4786 scope.go:117] "RemoveContainer" containerID="10574053a93a4c7cf48dacebbc753cce1f320aa5c82046a5334cb5c07c4a4872" Dec 09 09:25:53 crc kubenswrapper[4786]: E1209 09:25:53.189795 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:26:04 crc kubenswrapper[4786]: I1209 09:26:04.188362 4786 scope.go:117] "RemoveContainer" containerID="10574053a93a4c7cf48dacebbc753cce1f320aa5c82046a5334cb5c07c4a4872" Dec 09 09:26:04 crc kubenswrapper[4786]: E1209 09:26:04.189164 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:26:04 crc kubenswrapper[4786]: I1209 09:26:04.378599 4786 scope.go:117] "RemoveContainer" containerID="8721a7a62116d508242342ee06e01b93e7929d743af094a312dc6f49b0ff4cd4" Dec 09 09:26:04 crc kubenswrapper[4786]: I1209 09:26:04.410288 4786 scope.go:117] "RemoveContainer" containerID="9a1cc0974e24696a95bdf9dbdb6513a5da2b66b00c2dc91383bbce928402fb59" Dec 09 09:26:04 crc kubenswrapper[4786]: I1209 09:26:04.434785 4786 scope.go:117] "RemoveContainer" containerID="8e4137265a418de7dcc09ebb0e93923d933c421eab87677a71506f63ad3e6609" Dec 09 09:26:04 crc kubenswrapper[4786]: I1209 09:26:04.495931 4786 scope.go:117] "RemoveContainer" containerID="645db5797edfc4875d9391897124cd8f123215d75e14b7b042c26771bb9fc113" Dec 09 09:26:04 crc kubenswrapper[4786]: I1209 09:26:04.553779 4786 scope.go:117] "RemoveContainer" containerID="3f0f22b2f4b70b70b7e1facaa8ef4b9bcbe775df0f77ff21cd3c387e4873cd4f" Dec 09 09:26:04 crc kubenswrapper[4786]: I1209 09:26:04.595399 4786 scope.go:117] "RemoveContainer" containerID="3ceb19712a62faa0cb5962e977a43a9290b4243667c5a1aa8d3d0379ebbc9bc1" Dec 09 09:26:19 crc kubenswrapper[4786]: I1209 09:26:19.187962 4786 scope.go:117] "RemoveContainer" containerID="10574053a93a4c7cf48dacebbc753cce1f320aa5c82046a5334cb5c07c4a4872" Dec 09 09:26:19 crc kubenswrapper[4786]: E1209 09:26:19.188970 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:26:30 crc kubenswrapper[4786]: I1209 09:26:30.188803 4786 scope.go:117] "RemoveContainer" containerID="10574053a93a4c7cf48dacebbc753cce1f320aa5c82046a5334cb5c07c4a4872" Dec 09 09:26:30 crc kubenswrapper[4786]: E1209 09:26:30.189729 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:26:44 crc kubenswrapper[4786]: I1209 09:26:44.189642 4786 scope.go:117] "RemoveContainer" containerID="10574053a93a4c7cf48dacebbc753cce1f320aa5c82046a5334cb5c07c4a4872" Dec 09 09:26:44 crc kubenswrapper[4786]: E1209 09:26:44.191158 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:26:59 crc kubenswrapper[4786]: I1209 09:26:59.189890 4786 scope.go:117] "RemoveContainer" containerID="10574053a93a4c7cf48dacebbc753cce1f320aa5c82046a5334cb5c07c4a4872" Dec 09 09:26:59 crc kubenswrapper[4786]: E1209 09:26:59.190783 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:27:14 crc kubenswrapper[4786]: I1209 09:27:14.188263 4786 scope.go:117] "RemoveContainer" containerID="10574053a93a4c7cf48dacebbc753cce1f320aa5c82046a5334cb5c07c4a4872" Dec 09 09:27:14 crc kubenswrapper[4786]: E1209 09:27:14.189096 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:27:29 crc kubenswrapper[4786]: I1209 09:27:29.188481 4786 scope.go:117] "RemoveContainer" containerID="10574053a93a4c7cf48dacebbc753cce1f320aa5c82046a5334cb5c07c4a4872" Dec 09 09:27:29 crc kubenswrapper[4786]: E1209 09:27:29.189179 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:27:42 crc kubenswrapper[4786]: I1209 09:27:42.923864 4786 scope.go:117] "RemoveContainer" containerID="10574053a93a4c7cf48dacebbc753cce1f320aa5c82046a5334cb5c07c4a4872" Dec 09 09:27:42 crc kubenswrapper[4786]: E1209 09:27:42.925802 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:27:54 crc kubenswrapper[4786]: I1209 09:27:54.187789 4786 scope.go:117] "RemoveContainer" containerID="10574053a93a4c7cf48dacebbc753cce1f320aa5c82046a5334cb5c07c4a4872" Dec 09 09:27:54 crc kubenswrapper[4786]: E1209 09:27:54.188885 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:28:08 crc kubenswrapper[4786]: I1209 09:28:08.188830 4786 scope.go:117] "RemoveContainer" containerID="10574053a93a4c7cf48dacebbc753cce1f320aa5c82046a5334cb5c07c4a4872" Dec 09 09:28:08 crc kubenswrapper[4786]: E1209 09:28:08.190013 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:28:20 crc kubenswrapper[4786]: I1209 09:28:20.628868 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hslht" podStartSLOduration=233.238430413 podStartE2EDuration="3m53.628848849s" podCreationTimestamp="2025-12-09 09:24:27 +0000 UTC" firstStartedPulling="2025-12-09 09:24:28.094800266 +0000 UTC 
m=+2433.978421492" lastFinishedPulling="2025-12-09 09:24:28.485218662 +0000 UTC m=+2434.368839928" observedRunningTime="2025-12-09 09:24:29.077262404 +0000 UTC m=+2434.960883650" watchObservedRunningTime="2025-12-09 09:28:20.628848849 +0000 UTC m=+2666.512470085" Dec 09 09:28:20 crc kubenswrapper[4786]: I1209 09:28:20.637070 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rlcbh"] Dec 09 09:28:20 crc kubenswrapper[4786]: I1209 09:28:20.639618 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rlcbh" Dec 09 09:28:20 crc kubenswrapper[4786]: I1209 09:28:20.652629 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rlcbh"] Dec 09 09:28:20 crc kubenswrapper[4786]: I1209 09:28:20.755854 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j58vr\" (UniqueName: \"kubernetes.io/projected/1d8cfeae-80df-4fa9-84f3-c597511944a7-kube-api-access-j58vr\") pod \"redhat-operators-rlcbh\" (UID: \"1d8cfeae-80df-4fa9-84f3-c597511944a7\") " pod="openshift-marketplace/redhat-operators-rlcbh" Dec 09 09:28:20 crc kubenswrapper[4786]: I1209 09:28:20.755961 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d8cfeae-80df-4fa9-84f3-c597511944a7-utilities\") pod \"redhat-operators-rlcbh\" (UID: \"1d8cfeae-80df-4fa9-84f3-c597511944a7\") " pod="openshift-marketplace/redhat-operators-rlcbh" Dec 09 09:28:20 crc kubenswrapper[4786]: I1209 09:28:20.756100 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d8cfeae-80df-4fa9-84f3-c597511944a7-catalog-content\") pod \"redhat-operators-rlcbh\" (UID: \"1d8cfeae-80df-4fa9-84f3-c597511944a7\") " 
pod="openshift-marketplace/redhat-operators-rlcbh" Dec 09 09:28:20 crc kubenswrapper[4786]: I1209 09:28:20.858880 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d8cfeae-80df-4fa9-84f3-c597511944a7-utilities\") pod \"redhat-operators-rlcbh\" (UID: \"1d8cfeae-80df-4fa9-84f3-c597511944a7\") " pod="openshift-marketplace/redhat-operators-rlcbh" Dec 09 09:28:20 crc kubenswrapper[4786]: I1209 09:28:20.858943 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d8cfeae-80df-4fa9-84f3-c597511944a7-catalog-content\") pod \"redhat-operators-rlcbh\" (UID: \"1d8cfeae-80df-4fa9-84f3-c597511944a7\") " pod="openshift-marketplace/redhat-operators-rlcbh" Dec 09 09:28:20 crc kubenswrapper[4786]: I1209 09:28:20.859136 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j58vr\" (UniqueName: \"kubernetes.io/projected/1d8cfeae-80df-4fa9-84f3-c597511944a7-kube-api-access-j58vr\") pod \"redhat-operators-rlcbh\" (UID: \"1d8cfeae-80df-4fa9-84f3-c597511944a7\") " pod="openshift-marketplace/redhat-operators-rlcbh" Dec 09 09:28:20 crc kubenswrapper[4786]: I1209 09:28:20.859588 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d8cfeae-80df-4fa9-84f3-c597511944a7-utilities\") pod \"redhat-operators-rlcbh\" (UID: \"1d8cfeae-80df-4fa9-84f3-c597511944a7\") " pod="openshift-marketplace/redhat-operators-rlcbh" Dec 09 09:28:20 crc kubenswrapper[4786]: I1209 09:28:20.859788 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d8cfeae-80df-4fa9-84f3-c597511944a7-catalog-content\") pod \"redhat-operators-rlcbh\" (UID: \"1d8cfeae-80df-4fa9-84f3-c597511944a7\") " pod="openshift-marketplace/redhat-operators-rlcbh" Dec 09 09:28:20 crc 
kubenswrapper[4786]: I1209 09:28:20.881711 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j58vr\" (UniqueName: \"kubernetes.io/projected/1d8cfeae-80df-4fa9-84f3-c597511944a7-kube-api-access-j58vr\") pod \"redhat-operators-rlcbh\" (UID: \"1d8cfeae-80df-4fa9-84f3-c597511944a7\") " pod="openshift-marketplace/redhat-operators-rlcbh" Dec 09 09:28:20 crc kubenswrapper[4786]: I1209 09:28:20.971159 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rlcbh" Dec 09 09:28:21 crc kubenswrapper[4786]: I1209 09:28:21.474560 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rlcbh"] Dec 09 09:28:22 crc kubenswrapper[4786]: I1209 09:28:22.326502 4786 generic.go:334] "Generic (PLEG): container finished" podID="1d8cfeae-80df-4fa9-84f3-c597511944a7" containerID="92a7be2a4ede96ff00e45300864405f66b4f89abfa9f85660a732c1333bff5a1" exitCode=0 Dec 09 09:28:22 crc kubenswrapper[4786]: I1209 09:28:22.326574 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rlcbh" event={"ID":"1d8cfeae-80df-4fa9-84f3-c597511944a7","Type":"ContainerDied","Data":"92a7be2a4ede96ff00e45300864405f66b4f89abfa9f85660a732c1333bff5a1"} Dec 09 09:28:22 crc kubenswrapper[4786]: I1209 09:28:22.326864 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rlcbh" event={"ID":"1d8cfeae-80df-4fa9-84f3-c597511944a7","Type":"ContainerStarted","Data":"f2efd39db1e101e715d7909c0ec8798c5a2ed4593bae84bc85f69f2f6a4c76d5"} Dec 09 09:28:23 crc kubenswrapper[4786]: I1209 09:28:23.189896 4786 scope.go:117] "RemoveContainer" containerID="10574053a93a4c7cf48dacebbc753cce1f320aa5c82046a5334cb5c07c4a4872" Dec 09 09:28:23 crc kubenswrapper[4786]: E1209 09:28:23.190468 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:28:23 crc kubenswrapper[4786]: I1209 09:28:23.337257 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rlcbh" event={"ID":"1d8cfeae-80df-4fa9-84f3-c597511944a7","Type":"ContainerStarted","Data":"3309f50f7f70a5610641f39931118023a9f629aff29642ed0b7eef4ade7753d3"} Dec 09 09:28:24 crc kubenswrapper[4786]: I1209 09:28:24.351355 4786 generic.go:334] "Generic (PLEG): container finished" podID="1d8cfeae-80df-4fa9-84f3-c597511944a7" containerID="3309f50f7f70a5610641f39931118023a9f629aff29642ed0b7eef4ade7753d3" exitCode=0 Dec 09 09:28:24 crc kubenswrapper[4786]: I1209 09:28:24.351475 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rlcbh" event={"ID":"1d8cfeae-80df-4fa9-84f3-c597511944a7","Type":"ContainerDied","Data":"3309f50f7f70a5610641f39931118023a9f629aff29642ed0b7eef4ade7753d3"} Dec 09 09:28:26 crc kubenswrapper[4786]: I1209 09:28:26.394128 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rlcbh" event={"ID":"1d8cfeae-80df-4fa9-84f3-c597511944a7","Type":"ContainerStarted","Data":"a427b5db3f66c505a28baa037418f7620dfc343e68b206c3730e8920f8fd044d"} Dec 09 09:28:26 crc kubenswrapper[4786]: I1209 09:28:26.416221 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rlcbh" podStartSLOduration=3.669645277 podStartE2EDuration="6.416199952s" podCreationTimestamp="2025-12-09 09:28:20 +0000 UTC" firstStartedPulling="2025-12-09 09:28:22.329611358 +0000 UTC m=+2668.213232594" lastFinishedPulling="2025-12-09 09:28:25.076166043 +0000 UTC m=+2670.959787269" 
observedRunningTime="2025-12-09 09:28:26.41164665 +0000 UTC m=+2672.295267876" watchObservedRunningTime="2025-12-09 09:28:26.416199952 +0000 UTC m=+2672.299821178" Dec 09 09:28:30 crc kubenswrapper[4786]: I1209 09:28:30.971716 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rlcbh" Dec 09 09:28:30 crc kubenswrapper[4786]: I1209 09:28:30.971974 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rlcbh" Dec 09 09:28:31 crc kubenswrapper[4786]: I1209 09:28:31.025886 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rlcbh" Dec 09 09:28:31 crc kubenswrapper[4786]: I1209 09:28:31.498016 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rlcbh" Dec 09 09:28:31 crc kubenswrapper[4786]: I1209 09:28:31.561338 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rlcbh"] Dec 09 09:28:33 crc kubenswrapper[4786]: I1209 09:28:33.463712 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rlcbh" podUID="1d8cfeae-80df-4fa9-84f3-c597511944a7" containerName="registry-server" containerID="cri-o://a427b5db3f66c505a28baa037418f7620dfc343e68b206c3730e8920f8fd044d" gracePeriod=2 Dec 09 09:28:33 crc kubenswrapper[4786]: I1209 09:28:33.929517 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rlcbh" Dec 09 09:28:34 crc kubenswrapper[4786]: I1209 09:28:34.085099 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j58vr\" (UniqueName: \"kubernetes.io/projected/1d8cfeae-80df-4fa9-84f3-c597511944a7-kube-api-access-j58vr\") pod \"1d8cfeae-80df-4fa9-84f3-c597511944a7\" (UID: \"1d8cfeae-80df-4fa9-84f3-c597511944a7\") " Dec 09 09:28:34 crc kubenswrapper[4786]: I1209 09:28:34.085211 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d8cfeae-80df-4fa9-84f3-c597511944a7-utilities\") pod \"1d8cfeae-80df-4fa9-84f3-c597511944a7\" (UID: \"1d8cfeae-80df-4fa9-84f3-c597511944a7\") " Dec 09 09:28:34 crc kubenswrapper[4786]: I1209 09:28:34.085335 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d8cfeae-80df-4fa9-84f3-c597511944a7-catalog-content\") pod \"1d8cfeae-80df-4fa9-84f3-c597511944a7\" (UID: \"1d8cfeae-80df-4fa9-84f3-c597511944a7\") " Dec 09 09:28:34 crc kubenswrapper[4786]: I1209 09:28:34.086282 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d8cfeae-80df-4fa9-84f3-c597511944a7-utilities" (OuterVolumeSpecName: "utilities") pod "1d8cfeae-80df-4fa9-84f3-c597511944a7" (UID: "1d8cfeae-80df-4fa9-84f3-c597511944a7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:28:34 crc kubenswrapper[4786]: I1209 09:28:34.106679 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d8cfeae-80df-4fa9-84f3-c597511944a7-kube-api-access-j58vr" (OuterVolumeSpecName: "kube-api-access-j58vr") pod "1d8cfeae-80df-4fa9-84f3-c597511944a7" (UID: "1d8cfeae-80df-4fa9-84f3-c597511944a7"). InnerVolumeSpecName "kube-api-access-j58vr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:28:34 crc kubenswrapper[4786]: I1209 09:28:34.187215 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d8cfeae-80df-4fa9-84f3-c597511944a7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d8cfeae-80df-4fa9-84f3-c597511944a7" (UID: "1d8cfeae-80df-4fa9-84f3-c597511944a7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:28:34 crc kubenswrapper[4786]: I1209 09:28:34.187796 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d8cfeae-80df-4fa9-84f3-c597511944a7-catalog-content\") pod \"1d8cfeae-80df-4fa9-84f3-c597511944a7\" (UID: \"1d8cfeae-80df-4fa9-84f3-c597511944a7\") " Dec 09 09:28:34 crc kubenswrapper[4786]: W1209 09:28:34.187891 4786 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/1d8cfeae-80df-4fa9-84f3-c597511944a7/volumes/kubernetes.io~empty-dir/catalog-content Dec 09 09:28:34 crc kubenswrapper[4786]: I1209 09:28:34.187905 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d8cfeae-80df-4fa9-84f3-c597511944a7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d8cfeae-80df-4fa9-84f3-c597511944a7" (UID: "1d8cfeae-80df-4fa9-84f3-c597511944a7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:28:34 crc kubenswrapper[4786]: I1209 09:28:34.188773 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j58vr\" (UniqueName: \"kubernetes.io/projected/1d8cfeae-80df-4fa9-84f3-c597511944a7-kube-api-access-j58vr\") on node \"crc\" DevicePath \"\"" Dec 09 09:28:34 crc kubenswrapper[4786]: I1209 09:28:34.189026 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d8cfeae-80df-4fa9-84f3-c597511944a7-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 09:28:34 crc kubenswrapper[4786]: I1209 09:28:34.189087 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d8cfeae-80df-4fa9-84f3-c597511944a7-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 09:28:34 crc kubenswrapper[4786]: I1209 09:28:34.477495 4786 generic.go:334] "Generic (PLEG): container finished" podID="1d8cfeae-80df-4fa9-84f3-c597511944a7" containerID="a427b5db3f66c505a28baa037418f7620dfc343e68b206c3730e8920f8fd044d" exitCode=0 Dec 09 09:28:34 crc kubenswrapper[4786]: I1209 09:28:34.477563 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rlcbh" event={"ID":"1d8cfeae-80df-4fa9-84f3-c597511944a7","Type":"ContainerDied","Data":"a427b5db3f66c505a28baa037418f7620dfc343e68b206c3730e8920f8fd044d"} Dec 09 09:28:34 crc kubenswrapper[4786]: I1209 09:28:34.477678 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rlcbh" Dec 09 09:28:34 crc kubenswrapper[4786]: I1209 09:28:34.477893 4786 scope.go:117] "RemoveContainer" containerID="a427b5db3f66c505a28baa037418f7620dfc343e68b206c3730e8920f8fd044d" Dec 09 09:28:34 crc kubenswrapper[4786]: I1209 09:28:34.477877 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rlcbh" event={"ID":"1d8cfeae-80df-4fa9-84f3-c597511944a7","Type":"ContainerDied","Data":"f2efd39db1e101e715d7909c0ec8798c5a2ed4593bae84bc85f69f2f6a4c76d5"} Dec 09 09:28:34 crc kubenswrapper[4786]: I1209 09:28:34.507313 4786 scope.go:117] "RemoveContainer" containerID="3309f50f7f70a5610641f39931118023a9f629aff29642ed0b7eef4ade7753d3" Dec 09 09:28:34 crc kubenswrapper[4786]: I1209 09:28:34.540039 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rlcbh"] Dec 09 09:28:34 crc kubenswrapper[4786]: I1209 09:28:34.553572 4786 scope.go:117] "RemoveContainer" containerID="92a7be2a4ede96ff00e45300864405f66b4f89abfa9f85660a732c1333bff5a1" Dec 09 09:28:34 crc kubenswrapper[4786]: I1209 09:28:34.554781 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rlcbh"] Dec 09 09:28:34 crc kubenswrapper[4786]: I1209 09:28:34.604602 4786 scope.go:117] "RemoveContainer" containerID="a427b5db3f66c505a28baa037418f7620dfc343e68b206c3730e8920f8fd044d" Dec 09 09:28:34 crc kubenswrapper[4786]: E1209 09:28:34.605086 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a427b5db3f66c505a28baa037418f7620dfc343e68b206c3730e8920f8fd044d\": container with ID starting with a427b5db3f66c505a28baa037418f7620dfc343e68b206c3730e8920f8fd044d not found: ID does not exist" containerID="a427b5db3f66c505a28baa037418f7620dfc343e68b206c3730e8920f8fd044d" Dec 09 09:28:34 crc kubenswrapper[4786]: I1209 09:28:34.605120 4786 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a427b5db3f66c505a28baa037418f7620dfc343e68b206c3730e8920f8fd044d"} err="failed to get container status \"a427b5db3f66c505a28baa037418f7620dfc343e68b206c3730e8920f8fd044d\": rpc error: code = NotFound desc = could not find container \"a427b5db3f66c505a28baa037418f7620dfc343e68b206c3730e8920f8fd044d\": container with ID starting with a427b5db3f66c505a28baa037418f7620dfc343e68b206c3730e8920f8fd044d not found: ID does not exist" Dec 09 09:28:34 crc kubenswrapper[4786]: I1209 09:28:34.605146 4786 scope.go:117] "RemoveContainer" containerID="3309f50f7f70a5610641f39931118023a9f629aff29642ed0b7eef4ade7753d3" Dec 09 09:28:34 crc kubenswrapper[4786]: E1209 09:28:34.605533 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3309f50f7f70a5610641f39931118023a9f629aff29642ed0b7eef4ade7753d3\": container with ID starting with 3309f50f7f70a5610641f39931118023a9f629aff29642ed0b7eef4ade7753d3 not found: ID does not exist" containerID="3309f50f7f70a5610641f39931118023a9f629aff29642ed0b7eef4ade7753d3" Dec 09 09:28:34 crc kubenswrapper[4786]: I1209 09:28:34.605561 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3309f50f7f70a5610641f39931118023a9f629aff29642ed0b7eef4ade7753d3"} err="failed to get container status \"3309f50f7f70a5610641f39931118023a9f629aff29642ed0b7eef4ade7753d3\": rpc error: code = NotFound desc = could not find container \"3309f50f7f70a5610641f39931118023a9f629aff29642ed0b7eef4ade7753d3\": container with ID starting with 3309f50f7f70a5610641f39931118023a9f629aff29642ed0b7eef4ade7753d3 not found: ID does not exist" Dec 09 09:28:34 crc kubenswrapper[4786]: I1209 09:28:34.605580 4786 scope.go:117] "RemoveContainer" containerID="92a7be2a4ede96ff00e45300864405f66b4f89abfa9f85660a732c1333bff5a1" Dec 09 09:28:34 crc kubenswrapper[4786]: E1209 
09:28:34.605943 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92a7be2a4ede96ff00e45300864405f66b4f89abfa9f85660a732c1333bff5a1\": container with ID starting with 92a7be2a4ede96ff00e45300864405f66b4f89abfa9f85660a732c1333bff5a1 not found: ID does not exist" containerID="92a7be2a4ede96ff00e45300864405f66b4f89abfa9f85660a732c1333bff5a1" Dec 09 09:28:34 crc kubenswrapper[4786]: I1209 09:28:34.605972 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92a7be2a4ede96ff00e45300864405f66b4f89abfa9f85660a732c1333bff5a1"} err="failed to get container status \"92a7be2a4ede96ff00e45300864405f66b4f89abfa9f85660a732c1333bff5a1\": rpc error: code = NotFound desc = could not find container \"92a7be2a4ede96ff00e45300864405f66b4f89abfa9f85660a732c1333bff5a1\": container with ID starting with 92a7be2a4ede96ff00e45300864405f66b4f89abfa9f85660a732c1333bff5a1 not found: ID does not exist" Dec 09 09:28:35 crc kubenswrapper[4786]: I1209 09:28:35.194318 4786 scope.go:117] "RemoveContainer" containerID="10574053a93a4c7cf48dacebbc753cce1f320aa5c82046a5334cb5c07c4a4872" Dec 09 09:28:35 crc kubenswrapper[4786]: E1209 09:28:35.194689 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:28:35 crc kubenswrapper[4786]: I1209 09:28:35.199002 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d8cfeae-80df-4fa9-84f3-c597511944a7" path="/var/lib/kubelet/pods/1d8cfeae-80df-4fa9-84f3-c597511944a7/volumes" Dec 09 09:28:49 crc kubenswrapper[4786]: I1209 09:28:49.190228 
4786 scope.go:117] "RemoveContainer" containerID="10574053a93a4c7cf48dacebbc753cce1f320aa5c82046a5334cb5c07c4a4872" Dec 09 09:28:49 crc kubenswrapper[4786]: E1209 09:28:49.191244 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:28:57 crc kubenswrapper[4786]: I1209 09:28:57.697510 4786 generic.go:334] "Generic (PLEG): container finished" podID="e0e78e74-699b-442e-a5bd-6c598b2e0fb4" containerID="ddf1a0ca5c4bf5b4550de0b6205915660fcd57c20b300fe8426b323ca71555d6" exitCode=0 Dec 09 09:28:57 crc kubenswrapper[4786]: I1209 09:28:57.697628 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hslht" event={"ID":"e0e78e74-699b-442e-a5bd-6c598b2e0fb4","Type":"ContainerDied","Data":"ddf1a0ca5c4bf5b4550de0b6205915660fcd57c20b300fe8426b323ca71555d6"} Dec 09 09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.170284 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hslht" Dec 09 09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.259415 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bh4dw\" (UniqueName: \"kubernetes.io/projected/e0e78e74-699b-442e-a5bd-6c598b2e0fb4-kube-api-access-bh4dw\") pod \"e0e78e74-699b-442e-a5bd-6c598b2e0fb4\" (UID: \"e0e78e74-699b-442e-a5bd-6c598b2e0fb4\") " Dec 09 09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.259493 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e0e78e74-699b-442e-a5bd-6c598b2e0fb4-ssh-key\") pod \"e0e78e74-699b-442e-a5bd-6c598b2e0fb4\" (UID: \"e0e78e74-699b-442e-a5bd-6c598b2e0fb4\") " Dec 09 09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.259556 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/e0e78e74-699b-442e-a5bd-6c598b2e0fb4-libvirt-secret-0\") pod \"e0e78e74-699b-442e-a5bd-6c598b2e0fb4\" (UID: \"e0e78e74-699b-442e-a5bd-6c598b2e0fb4\") " Dec 09 09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.259619 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0e78e74-699b-442e-a5bd-6c598b2e0fb4-inventory\") pod \"e0e78e74-699b-442e-a5bd-6c598b2e0fb4\" (UID: \"e0e78e74-699b-442e-a5bd-6c598b2e0fb4\") " Dec 09 09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.259737 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0e78e74-699b-442e-a5bd-6c598b2e0fb4-libvirt-combined-ca-bundle\") pod \"e0e78e74-699b-442e-a5bd-6c598b2e0fb4\" (UID: \"e0e78e74-699b-442e-a5bd-6c598b2e0fb4\") " Dec 09 09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.265330 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/e0e78e74-699b-442e-a5bd-6c598b2e0fb4-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "e0e78e74-699b-442e-a5bd-6c598b2e0fb4" (UID: "e0e78e74-699b-442e-a5bd-6c598b2e0fb4"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.268767 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0e78e74-699b-442e-a5bd-6c598b2e0fb4-kube-api-access-bh4dw" (OuterVolumeSpecName: "kube-api-access-bh4dw") pod "e0e78e74-699b-442e-a5bd-6c598b2e0fb4" (UID: "e0e78e74-699b-442e-a5bd-6c598b2e0fb4"). InnerVolumeSpecName "kube-api-access-bh4dw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.290442 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0e78e74-699b-442e-a5bd-6c598b2e0fb4-inventory" (OuterVolumeSpecName: "inventory") pod "e0e78e74-699b-442e-a5bd-6c598b2e0fb4" (UID: "e0e78e74-699b-442e-a5bd-6c598b2e0fb4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.290528 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0e78e74-699b-442e-a5bd-6c598b2e0fb4-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "e0e78e74-699b-442e-a5bd-6c598b2e0fb4" (UID: "e0e78e74-699b-442e-a5bd-6c598b2e0fb4"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.290861 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0e78e74-699b-442e-a5bd-6c598b2e0fb4-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "e0e78e74-699b-442e-a5bd-6c598b2e0fb4" (UID: "e0e78e74-699b-442e-a5bd-6c598b2e0fb4"). 
InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.361824 4786 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0e78e74-699b-442e-a5bd-6c598b2e0fb4-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.361871 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bh4dw\" (UniqueName: \"kubernetes.io/projected/e0e78e74-699b-442e-a5bd-6c598b2e0fb4-kube-api-access-bh4dw\") on node \"crc\" DevicePath \"\"" Dec 09 09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.361884 4786 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e0e78e74-699b-442e-a5bd-6c598b2e0fb4-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 09 09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.361895 4786 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/e0e78e74-699b-442e-a5bd-6c598b2e0fb4-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Dec 09 09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.361907 4786 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e0e78e74-699b-442e-a5bd-6c598b2e0fb4-inventory\") on node \"crc\" DevicePath \"\"" Dec 09 09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.720256 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hslht" event={"ID":"e0e78e74-699b-442e-a5bd-6c598b2e0fb4","Type":"ContainerDied","Data":"f508abf6f4ac556347cff75abe07e4394bf1861f0d8148a84f9a9e2b2884726c"} Dec 09 09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.720342 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f508abf6f4ac556347cff75abe07e4394bf1861f0d8148a84f9a9e2b2884726c" Dec 09 
09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.720393 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-hslht" Dec 09 09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.830277 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-lnqtd"] Dec 09 09:28:59 crc kubenswrapper[4786]: E1209 09:28:59.830732 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d8cfeae-80df-4fa9-84f3-c597511944a7" containerName="extract-utilities" Dec 09 09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.830759 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d8cfeae-80df-4fa9-84f3-c597511944a7" containerName="extract-utilities" Dec 09 09:28:59 crc kubenswrapper[4786]: E1209 09:28:59.830777 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0e78e74-699b-442e-a5bd-6c598b2e0fb4" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 09 09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.830784 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0e78e74-699b-442e-a5bd-6c598b2e0fb4" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 09 09:28:59 crc kubenswrapper[4786]: E1209 09:28:59.830808 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d8cfeae-80df-4fa9-84f3-c597511944a7" containerName="extract-content" Dec 09 09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.830814 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d8cfeae-80df-4fa9-84f3-c597511944a7" containerName="extract-content" Dec 09 09:28:59 crc kubenswrapper[4786]: E1209 09:28:59.830839 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d8cfeae-80df-4fa9-84f3-c597511944a7" containerName="registry-server" Dec 09 09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.830844 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d8cfeae-80df-4fa9-84f3-c597511944a7" 
containerName="registry-server" Dec 09 09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.831033 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0e78e74-699b-442e-a5bd-6c598b2e0fb4" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 09 09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.831046 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d8cfeae-80df-4fa9-84f3-c597511944a7" containerName="registry-server" Dec 09 09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.831732 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lnqtd" Dec 09 09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.837114 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 09 09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.837689 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Dec 09 09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.837815 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Dec 09 09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.837850 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-247ss" Dec 09 09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.837906 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 09 09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.837953 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Dec 09 09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.837967 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 09 09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.856051 4786 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-lnqtd"] Dec 09 09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.871085 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f1d1af6a-883d-4d29-8d4d-b477e99c2df5-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lnqtd\" (UID: \"f1d1af6a-883d-4d29-8d4d-b477e99c2df5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lnqtd" Dec 09 09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.871173 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f1d1af6a-883d-4d29-8d4d-b477e99c2df5-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lnqtd\" (UID: \"f1d1af6a-883d-4d29-8d4d-b477e99c2df5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lnqtd" Dec 09 09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.871260 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f1d1af6a-883d-4d29-8d4d-b477e99c2df5-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lnqtd\" (UID: \"f1d1af6a-883d-4d29-8d4d-b477e99c2df5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lnqtd" Dec 09 09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.871319 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j8gm\" (UniqueName: \"kubernetes.io/projected/f1d1af6a-883d-4d29-8d4d-b477e99c2df5-kube-api-access-9j8gm\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lnqtd\" (UID: \"f1d1af6a-883d-4d29-8d4d-b477e99c2df5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lnqtd" Dec 09 09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.871353 4786 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f1d1af6a-883d-4d29-8d4d-b477e99c2df5-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lnqtd\" (UID: \"f1d1af6a-883d-4d29-8d4d-b477e99c2df5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lnqtd" Dec 09 09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.871383 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f1d1af6a-883d-4d29-8d4d-b477e99c2df5-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lnqtd\" (UID: \"f1d1af6a-883d-4d29-8d4d-b477e99c2df5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lnqtd" Dec 09 09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.871500 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f1d1af6a-883d-4d29-8d4d-b477e99c2df5-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lnqtd\" (UID: \"f1d1af6a-883d-4d29-8d4d-b477e99c2df5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lnqtd" Dec 09 09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.871536 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1d1af6a-883d-4d29-8d4d-b477e99c2df5-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lnqtd\" (UID: \"f1d1af6a-883d-4d29-8d4d-b477e99c2df5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lnqtd" Dec 09 09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.871557 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/f1d1af6a-883d-4d29-8d4d-b477e99c2df5-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lnqtd\" (UID: \"f1d1af6a-883d-4d29-8d4d-b477e99c2df5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lnqtd" Dec 09 09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.973289 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1d1af6a-883d-4d29-8d4d-b477e99c2df5-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lnqtd\" (UID: \"f1d1af6a-883d-4d29-8d4d-b477e99c2df5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lnqtd" Dec 09 09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.973333 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1d1af6a-883d-4d29-8d4d-b477e99c2df5-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lnqtd\" (UID: \"f1d1af6a-883d-4d29-8d4d-b477e99c2df5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lnqtd" Dec 09 09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.973376 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f1d1af6a-883d-4d29-8d4d-b477e99c2df5-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lnqtd\" (UID: \"f1d1af6a-883d-4d29-8d4d-b477e99c2df5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lnqtd" Dec 09 09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.973912 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f1d1af6a-883d-4d29-8d4d-b477e99c2df5-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lnqtd\" (UID: \"f1d1af6a-883d-4d29-8d4d-b477e99c2df5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lnqtd" Dec 09 09:28:59 
crc kubenswrapper[4786]: I1209 09:28:59.974026 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f1d1af6a-883d-4d29-8d4d-b477e99c2df5-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lnqtd\" (UID: \"f1d1af6a-883d-4d29-8d4d-b477e99c2df5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lnqtd" Dec 09 09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.974101 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9j8gm\" (UniqueName: \"kubernetes.io/projected/f1d1af6a-883d-4d29-8d4d-b477e99c2df5-kube-api-access-9j8gm\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lnqtd\" (UID: \"f1d1af6a-883d-4d29-8d4d-b477e99c2df5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lnqtd" Dec 09 09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.974152 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f1d1af6a-883d-4d29-8d4d-b477e99c2df5-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lnqtd\" (UID: \"f1d1af6a-883d-4d29-8d4d-b477e99c2df5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lnqtd" Dec 09 09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.974191 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f1d1af6a-883d-4d29-8d4d-b477e99c2df5-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lnqtd\" (UID: \"f1d1af6a-883d-4d29-8d4d-b477e99c2df5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lnqtd" Dec 09 09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.974259 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f1d1af6a-883d-4d29-8d4d-b477e99c2df5-nova-cell1-compute-config-1\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-lnqtd\" (UID: \"f1d1af6a-883d-4d29-8d4d-b477e99c2df5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lnqtd" Dec 09 09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.976283 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f1d1af6a-883d-4d29-8d4d-b477e99c2df5-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lnqtd\" (UID: \"f1d1af6a-883d-4d29-8d4d-b477e99c2df5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lnqtd" Dec 09 09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.980547 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f1d1af6a-883d-4d29-8d4d-b477e99c2df5-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lnqtd\" (UID: \"f1d1af6a-883d-4d29-8d4d-b477e99c2df5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lnqtd" Dec 09 09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.980985 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f1d1af6a-883d-4d29-8d4d-b477e99c2df5-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lnqtd\" (UID: \"f1d1af6a-883d-4d29-8d4d-b477e99c2df5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lnqtd" Dec 09 09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.981590 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f1d1af6a-883d-4d29-8d4d-b477e99c2df5-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lnqtd\" (UID: \"f1d1af6a-883d-4d29-8d4d-b477e99c2df5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lnqtd" Dec 09 09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.982247 4786 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f1d1af6a-883d-4d29-8d4d-b477e99c2df5-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lnqtd\" (UID: \"f1d1af6a-883d-4d29-8d4d-b477e99c2df5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lnqtd" Dec 09 09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.983031 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f1d1af6a-883d-4d29-8d4d-b477e99c2df5-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lnqtd\" (UID: \"f1d1af6a-883d-4d29-8d4d-b477e99c2df5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lnqtd" Dec 09 09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.983678 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1d1af6a-883d-4d29-8d4d-b477e99c2df5-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lnqtd\" (UID: \"f1d1af6a-883d-4d29-8d4d-b477e99c2df5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lnqtd" Dec 09 09:28:59 crc kubenswrapper[4786]: I1209 09:28:59.990781 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1d1af6a-883d-4d29-8d4d-b477e99c2df5-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lnqtd\" (UID: \"f1d1af6a-883d-4d29-8d4d-b477e99c2df5\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lnqtd" Dec 09 09:29:00 crc kubenswrapper[4786]: I1209 09:29:00.003455 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9j8gm\" (UniqueName: \"kubernetes.io/projected/f1d1af6a-883d-4d29-8d4d-b477e99c2df5-kube-api-access-9j8gm\") pod \"nova-edpm-deployment-openstack-edpm-ipam-lnqtd\" (UID: \"f1d1af6a-883d-4d29-8d4d-b477e99c2df5\") " 
pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lnqtd" Dec 09 09:29:00 crc kubenswrapper[4786]: I1209 09:29:00.165218 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lnqtd" Dec 09 09:29:00 crc kubenswrapper[4786]: I1209 09:29:00.703489 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-lnqtd"] Dec 09 09:29:00 crc kubenswrapper[4786]: I1209 09:29:00.705610 4786 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 09:29:00 crc kubenswrapper[4786]: I1209 09:29:00.730918 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lnqtd" event={"ID":"f1d1af6a-883d-4d29-8d4d-b477e99c2df5","Type":"ContainerStarted","Data":"7a23842329a5cbc7c0dbca52f648dcd38e62c187527238cd96f08a9fd42cb762"} Dec 09 09:29:01 crc kubenswrapper[4786]: I1209 09:29:01.742248 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lnqtd" event={"ID":"f1d1af6a-883d-4d29-8d4d-b477e99c2df5","Type":"ContainerStarted","Data":"94cb8d0ece9dedec4619cd972a4d771813deeed8f95fc1b53ac0c4b398209f89"} Dec 09 09:29:01 crc kubenswrapper[4786]: I1209 09:29:01.770195 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lnqtd" podStartSLOduration=2.356674226 podStartE2EDuration="2.770171133s" podCreationTimestamp="2025-12-09 09:28:59 +0000 UTC" firstStartedPulling="2025-12-09 09:29:00.705309873 +0000 UTC m=+2706.588931099" lastFinishedPulling="2025-12-09 09:29:01.11880679 +0000 UTC m=+2707.002428006" observedRunningTime="2025-12-09 09:29:01.763243012 +0000 UTC m=+2707.646864268" watchObservedRunningTime="2025-12-09 09:29:01.770171133 +0000 UTC m=+2707.653792369" Dec 09 09:29:04 crc kubenswrapper[4786]: I1209 09:29:04.188541 4786 
scope.go:117] "RemoveContainer" containerID="10574053a93a4c7cf48dacebbc753cce1f320aa5c82046a5334cb5c07c4a4872" Dec 09 09:29:04 crc kubenswrapper[4786]: I1209 09:29:04.777310 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" event={"ID":"60c502d4-7f9e-4d39-a197-fa70dc4a56d1","Type":"ContainerStarted","Data":"a1037d8b8d01c499e1b227ea9e68658b3ee1012303ade8df1a89fe76c43ee5ed"} Dec 09 09:30:00 crc kubenswrapper[4786]: I1209 09:30:00.149981 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421210-fnxp2"] Dec 09 09:30:00 crc kubenswrapper[4786]: I1209 09:30:00.153363 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421210-fnxp2" Dec 09 09:30:00 crc kubenswrapper[4786]: I1209 09:30:00.158236 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 09 09:30:00 crc kubenswrapper[4786]: I1209 09:30:00.158616 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 09 09:30:00 crc kubenswrapper[4786]: I1209 09:30:00.164793 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421210-fnxp2"] Dec 09 09:30:00 crc kubenswrapper[4786]: I1209 09:30:00.281721 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xb6q\" (UniqueName: \"kubernetes.io/projected/774462f3-ab4d-46e2-9966-6d8752fcaa46-kube-api-access-5xb6q\") pod \"collect-profiles-29421210-fnxp2\" (UID: \"774462f3-ab4d-46e2-9966-6d8752fcaa46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421210-fnxp2" Dec 09 09:30:00 crc kubenswrapper[4786]: I1209 09:30:00.282061 4786 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/774462f3-ab4d-46e2-9966-6d8752fcaa46-config-volume\") pod \"collect-profiles-29421210-fnxp2\" (UID: \"774462f3-ab4d-46e2-9966-6d8752fcaa46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421210-fnxp2" Dec 09 09:30:00 crc kubenswrapper[4786]: I1209 09:30:00.282090 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/774462f3-ab4d-46e2-9966-6d8752fcaa46-secret-volume\") pod \"collect-profiles-29421210-fnxp2\" (UID: \"774462f3-ab4d-46e2-9966-6d8752fcaa46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421210-fnxp2" Dec 09 09:30:00 crc kubenswrapper[4786]: I1209 09:30:00.383727 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xb6q\" (UniqueName: \"kubernetes.io/projected/774462f3-ab4d-46e2-9966-6d8752fcaa46-kube-api-access-5xb6q\") pod \"collect-profiles-29421210-fnxp2\" (UID: \"774462f3-ab4d-46e2-9966-6d8752fcaa46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421210-fnxp2" Dec 09 09:30:00 crc kubenswrapper[4786]: I1209 09:30:00.384122 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/774462f3-ab4d-46e2-9966-6d8752fcaa46-config-volume\") pod \"collect-profiles-29421210-fnxp2\" (UID: \"774462f3-ab4d-46e2-9966-6d8752fcaa46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421210-fnxp2" Dec 09 09:30:00 crc kubenswrapper[4786]: I1209 09:30:00.384361 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/774462f3-ab4d-46e2-9966-6d8752fcaa46-secret-volume\") pod \"collect-profiles-29421210-fnxp2\" (UID: \"774462f3-ab4d-46e2-9966-6d8752fcaa46\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29421210-fnxp2" Dec 09 09:30:00 crc kubenswrapper[4786]: I1209 09:30:00.384964 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/774462f3-ab4d-46e2-9966-6d8752fcaa46-config-volume\") pod \"collect-profiles-29421210-fnxp2\" (UID: \"774462f3-ab4d-46e2-9966-6d8752fcaa46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421210-fnxp2" Dec 09 09:30:00 crc kubenswrapper[4786]: I1209 09:30:00.390209 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/774462f3-ab4d-46e2-9966-6d8752fcaa46-secret-volume\") pod \"collect-profiles-29421210-fnxp2\" (UID: \"774462f3-ab4d-46e2-9966-6d8752fcaa46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421210-fnxp2" Dec 09 09:30:00 crc kubenswrapper[4786]: I1209 09:30:00.404618 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xb6q\" (UniqueName: \"kubernetes.io/projected/774462f3-ab4d-46e2-9966-6d8752fcaa46-kube-api-access-5xb6q\") pod \"collect-profiles-29421210-fnxp2\" (UID: \"774462f3-ab4d-46e2-9966-6d8752fcaa46\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421210-fnxp2" Dec 09 09:30:00 crc kubenswrapper[4786]: I1209 09:30:00.489069 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421210-fnxp2" Dec 09 09:30:00 crc kubenswrapper[4786]: I1209 09:30:00.969284 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421210-fnxp2"] Dec 09 09:30:01 crc kubenswrapper[4786]: I1209 09:30:01.347336 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421210-fnxp2" event={"ID":"774462f3-ab4d-46e2-9966-6d8752fcaa46","Type":"ContainerStarted","Data":"a63b8d558ee16d233a90730ef51402cefb18fc6b291c1ed698ffcffb6ecb2547"} Dec 09 09:30:03 crc kubenswrapper[4786]: I1209 09:30:03.376002 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421210-fnxp2" event={"ID":"774462f3-ab4d-46e2-9966-6d8752fcaa46","Type":"ContainerStarted","Data":"1450ac8980307122aad29093453ee6c32dbb0673d11850bcf18f3000197d86ab"} Dec 09 09:30:03 crc kubenswrapper[4786]: I1209 09:30:03.402819 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29421210-fnxp2" podStartSLOduration=3.4027956059999998 podStartE2EDuration="3.402795606s" podCreationTimestamp="2025-12-09 09:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:30:03.392001251 +0000 UTC m=+2769.275622477" watchObservedRunningTime="2025-12-09 09:30:03.402795606 +0000 UTC m=+2769.286416832" Dec 09 09:30:04 crc kubenswrapper[4786]: I1209 09:30:04.388665 4786 generic.go:334] "Generic (PLEG): container finished" podID="774462f3-ab4d-46e2-9966-6d8752fcaa46" containerID="1450ac8980307122aad29093453ee6c32dbb0673d11850bcf18f3000197d86ab" exitCode=0 Dec 09 09:30:04 crc kubenswrapper[4786]: I1209 09:30:04.388768 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29421210-fnxp2" event={"ID":"774462f3-ab4d-46e2-9966-6d8752fcaa46","Type":"ContainerDied","Data":"1450ac8980307122aad29093453ee6c32dbb0673d11850bcf18f3000197d86ab"} Dec 09 09:30:05 crc kubenswrapper[4786]: I1209 09:30:05.831703 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421210-fnxp2" Dec 09 09:30:05 crc kubenswrapper[4786]: I1209 09:30:05.922600 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/774462f3-ab4d-46e2-9966-6d8752fcaa46-secret-volume\") pod \"774462f3-ab4d-46e2-9966-6d8752fcaa46\" (UID: \"774462f3-ab4d-46e2-9966-6d8752fcaa46\") " Dec 09 09:30:05 crc kubenswrapper[4786]: I1209 09:30:05.922662 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/774462f3-ab4d-46e2-9966-6d8752fcaa46-config-volume\") pod \"774462f3-ab4d-46e2-9966-6d8752fcaa46\" (UID: \"774462f3-ab4d-46e2-9966-6d8752fcaa46\") " Dec 09 09:30:05 crc kubenswrapper[4786]: I1209 09:30:05.922721 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5xb6q\" (UniqueName: \"kubernetes.io/projected/774462f3-ab4d-46e2-9966-6d8752fcaa46-kube-api-access-5xb6q\") pod \"774462f3-ab4d-46e2-9966-6d8752fcaa46\" (UID: \"774462f3-ab4d-46e2-9966-6d8752fcaa46\") " Dec 09 09:30:05 crc kubenswrapper[4786]: I1209 09:30:05.923558 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/774462f3-ab4d-46e2-9966-6d8752fcaa46-config-volume" (OuterVolumeSpecName: "config-volume") pod "774462f3-ab4d-46e2-9966-6d8752fcaa46" (UID: "774462f3-ab4d-46e2-9966-6d8752fcaa46"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:30:05 crc kubenswrapper[4786]: I1209 09:30:05.930759 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/774462f3-ab4d-46e2-9966-6d8752fcaa46-kube-api-access-5xb6q" (OuterVolumeSpecName: "kube-api-access-5xb6q") pod "774462f3-ab4d-46e2-9966-6d8752fcaa46" (UID: "774462f3-ab4d-46e2-9966-6d8752fcaa46"). InnerVolumeSpecName "kube-api-access-5xb6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:30:05 crc kubenswrapper[4786]: I1209 09:30:05.930884 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/774462f3-ab4d-46e2-9966-6d8752fcaa46-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "774462f3-ab4d-46e2-9966-6d8752fcaa46" (UID: "774462f3-ab4d-46e2-9966-6d8752fcaa46"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:30:06 crc kubenswrapper[4786]: I1209 09:30:06.024726 4786 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/774462f3-ab4d-46e2-9966-6d8752fcaa46-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 09 09:30:06 crc kubenswrapper[4786]: I1209 09:30:06.024768 4786 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/774462f3-ab4d-46e2-9966-6d8752fcaa46-config-volume\") on node \"crc\" DevicePath \"\"" Dec 09 09:30:06 crc kubenswrapper[4786]: I1209 09:30:06.024778 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5xb6q\" (UniqueName: \"kubernetes.io/projected/774462f3-ab4d-46e2-9966-6d8752fcaa46-kube-api-access-5xb6q\") on node \"crc\" DevicePath \"\"" Dec 09 09:30:06 crc kubenswrapper[4786]: I1209 09:30:06.409762 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421210-fnxp2" 
event={"ID":"774462f3-ab4d-46e2-9966-6d8752fcaa46","Type":"ContainerDied","Data":"a63b8d558ee16d233a90730ef51402cefb18fc6b291c1ed698ffcffb6ecb2547"} Dec 09 09:30:06 crc kubenswrapper[4786]: I1209 09:30:06.410082 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a63b8d558ee16d233a90730ef51402cefb18fc6b291c1ed698ffcffb6ecb2547" Dec 09 09:30:06 crc kubenswrapper[4786]: I1209 09:30:06.409846 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421210-fnxp2" Dec 09 09:30:06 crc kubenswrapper[4786]: I1209 09:30:06.496039 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421165-4fkk9"] Dec 09 09:30:06 crc kubenswrapper[4786]: I1209 09:30:06.508556 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421165-4fkk9"] Dec 09 09:30:07 crc kubenswrapper[4786]: I1209 09:30:07.208191 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="924456b2-4aa3-4e7c-8d80-667783b96551" path="/var/lib/kubelet/pods/924456b2-4aa3-4e7c-8d80-667783b96551/volumes" Dec 09 09:30:11 crc kubenswrapper[4786]: I1209 09:30:11.755398 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qh5q4"] Dec 09 09:30:11 crc kubenswrapper[4786]: E1209 09:30:11.757282 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="774462f3-ab4d-46e2-9966-6d8752fcaa46" containerName="collect-profiles" Dec 09 09:30:11 crc kubenswrapper[4786]: I1209 09:30:11.757299 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="774462f3-ab4d-46e2-9966-6d8752fcaa46" containerName="collect-profiles" Dec 09 09:30:11 crc kubenswrapper[4786]: I1209 09:30:11.757514 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="774462f3-ab4d-46e2-9966-6d8752fcaa46" containerName="collect-profiles" 
Dec 09 09:30:11 crc kubenswrapper[4786]: I1209 09:30:11.759181 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qh5q4" Dec 09 09:30:11 crc kubenswrapper[4786]: I1209 09:30:11.766996 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qh5q4"] Dec 09 09:30:11 crc kubenswrapper[4786]: I1209 09:30:11.843114 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc9a4b17-4e26-4868-af57-a79d501c72a9-catalog-content\") pod \"redhat-marketplace-qh5q4\" (UID: \"cc9a4b17-4e26-4868-af57-a79d501c72a9\") " pod="openshift-marketplace/redhat-marketplace-qh5q4" Dec 09 09:30:11 crc kubenswrapper[4786]: I1209 09:30:11.843264 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5l9z\" (UniqueName: \"kubernetes.io/projected/cc9a4b17-4e26-4868-af57-a79d501c72a9-kube-api-access-h5l9z\") pod \"redhat-marketplace-qh5q4\" (UID: \"cc9a4b17-4e26-4868-af57-a79d501c72a9\") " pod="openshift-marketplace/redhat-marketplace-qh5q4" Dec 09 09:30:11 crc kubenswrapper[4786]: I1209 09:30:11.843332 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc9a4b17-4e26-4868-af57-a79d501c72a9-utilities\") pod \"redhat-marketplace-qh5q4\" (UID: \"cc9a4b17-4e26-4868-af57-a79d501c72a9\") " pod="openshift-marketplace/redhat-marketplace-qh5q4" Dec 09 09:30:11 crc kubenswrapper[4786]: I1209 09:30:11.945447 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc9a4b17-4e26-4868-af57-a79d501c72a9-catalog-content\") pod \"redhat-marketplace-qh5q4\" (UID: \"cc9a4b17-4e26-4868-af57-a79d501c72a9\") " 
pod="openshift-marketplace/redhat-marketplace-qh5q4" Dec 09 09:30:11 crc kubenswrapper[4786]: I1209 09:30:11.945566 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5l9z\" (UniqueName: \"kubernetes.io/projected/cc9a4b17-4e26-4868-af57-a79d501c72a9-kube-api-access-h5l9z\") pod \"redhat-marketplace-qh5q4\" (UID: \"cc9a4b17-4e26-4868-af57-a79d501c72a9\") " pod="openshift-marketplace/redhat-marketplace-qh5q4" Dec 09 09:30:11 crc kubenswrapper[4786]: I1209 09:30:11.945688 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc9a4b17-4e26-4868-af57-a79d501c72a9-utilities\") pod \"redhat-marketplace-qh5q4\" (UID: \"cc9a4b17-4e26-4868-af57-a79d501c72a9\") " pod="openshift-marketplace/redhat-marketplace-qh5q4" Dec 09 09:30:11 crc kubenswrapper[4786]: I1209 09:30:11.946364 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc9a4b17-4e26-4868-af57-a79d501c72a9-utilities\") pod \"redhat-marketplace-qh5q4\" (UID: \"cc9a4b17-4e26-4868-af57-a79d501c72a9\") " pod="openshift-marketplace/redhat-marketplace-qh5q4" Dec 09 09:30:11 crc kubenswrapper[4786]: I1209 09:30:11.946384 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc9a4b17-4e26-4868-af57-a79d501c72a9-catalog-content\") pod \"redhat-marketplace-qh5q4\" (UID: \"cc9a4b17-4e26-4868-af57-a79d501c72a9\") " pod="openshift-marketplace/redhat-marketplace-qh5q4" Dec 09 09:30:11 crc kubenswrapper[4786]: I1209 09:30:11.967667 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5l9z\" (UniqueName: \"kubernetes.io/projected/cc9a4b17-4e26-4868-af57-a79d501c72a9-kube-api-access-h5l9z\") pod \"redhat-marketplace-qh5q4\" (UID: \"cc9a4b17-4e26-4868-af57-a79d501c72a9\") " 
pod="openshift-marketplace/redhat-marketplace-qh5q4" Dec 09 09:30:12 crc kubenswrapper[4786]: I1209 09:30:12.087065 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qh5q4" Dec 09 09:30:12 crc kubenswrapper[4786]: I1209 09:30:12.663980 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qh5q4"] Dec 09 09:30:13 crc kubenswrapper[4786]: I1209 09:30:13.480732 4786 generic.go:334] "Generic (PLEG): container finished" podID="cc9a4b17-4e26-4868-af57-a79d501c72a9" containerID="2939f47330fa808c6c27af9fa4895426a467b54d5a28b35fb195ae4855849da6" exitCode=0 Dec 09 09:30:13 crc kubenswrapper[4786]: I1209 09:30:13.480950 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qh5q4" event={"ID":"cc9a4b17-4e26-4868-af57-a79d501c72a9","Type":"ContainerDied","Data":"2939f47330fa808c6c27af9fa4895426a467b54d5a28b35fb195ae4855849da6"} Dec 09 09:30:13 crc kubenswrapper[4786]: I1209 09:30:13.481139 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qh5q4" event={"ID":"cc9a4b17-4e26-4868-af57-a79d501c72a9","Type":"ContainerStarted","Data":"2213f8482a5f64670d1532e3a72a0e4995543fbf8a21a4f773574ba152cbad74"} Dec 09 09:30:14 crc kubenswrapper[4786]: I1209 09:30:14.494881 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qh5q4" event={"ID":"cc9a4b17-4e26-4868-af57-a79d501c72a9","Type":"ContainerStarted","Data":"dd38e284313be9252f44bb7281b2cf48a782d989b2d8332f142616cdae80190f"} Dec 09 09:30:15 crc kubenswrapper[4786]: I1209 09:30:15.510396 4786 generic.go:334] "Generic (PLEG): container finished" podID="cc9a4b17-4e26-4868-af57-a79d501c72a9" containerID="dd38e284313be9252f44bb7281b2cf48a782d989b2d8332f142616cdae80190f" exitCode=0 Dec 09 09:30:15 crc kubenswrapper[4786]: I1209 09:30:15.510470 4786 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-qh5q4" event={"ID":"cc9a4b17-4e26-4868-af57-a79d501c72a9","Type":"ContainerDied","Data":"dd38e284313be9252f44bb7281b2cf48a782d989b2d8332f142616cdae80190f"} Dec 09 09:30:16 crc kubenswrapper[4786]: I1209 09:30:16.528221 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qh5q4" event={"ID":"cc9a4b17-4e26-4868-af57-a79d501c72a9","Type":"ContainerStarted","Data":"b3ddf605debdb86b882967ad8663537feac5646eb2c764e538d4671bb97d11c1"} Dec 09 09:30:16 crc kubenswrapper[4786]: I1209 09:30:16.558859 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qh5q4" podStartSLOduration=2.876969508 podStartE2EDuration="5.558836038s" podCreationTimestamp="2025-12-09 09:30:11 +0000 UTC" firstStartedPulling="2025-12-09 09:30:13.483651856 +0000 UTC m=+2779.367273082" lastFinishedPulling="2025-12-09 09:30:16.165518396 +0000 UTC m=+2782.049139612" observedRunningTime="2025-12-09 09:30:16.550235217 +0000 UTC m=+2782.433856463" watchObservedRunningTime="2025-12-09 09:30:16.558836038 +0000 UTC m=+2782.442457264" Dec 09 09:30:22 crc kubenswrapper[4786]: I1209 09:30:22.087927 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qh5q4" Dec 09 09:30:22 crc kubenswrapper[4786]: I1209 09:30:22.088586 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qh5q4" Dec 09 09:30:22 crc kubenswrapper[4786]: I1209 09:30:22.175744 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qh5q4" Dec 09 09:30:22 crc kubenswrapper[4786]: I1209 09:30:22.631874 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qh5q4" Dec 09 09:30:22 crc kubenswrapper[4786]: I1209 09:30:22.703301 4786 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qh5q4"] Dec 09 09:30:24 crc kubenswrapper[4786]: I1209 09:30:24.603535 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qh5q4" podUID="cc9a4b17-4e26-4868-af57-a79d501c72a9" containerName="registry-server" containerID="cri-o://b3ddf605debdb86b882967ad8663537feac5646eb2c764e538d4671bb97d11c1" gracePeriod=2 Dec 09 09:30:25 crc kubenswrapper[4786]: I1209 09:30:25.210784 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qh5q4" Dec 09 09:30:25 crc kubenswrapper[4786]: I1209 09:30:25.262861 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc9a4b17-4e26-4868-af57-a79d501c72a9-utilities\") pod \"cc9a4b17-4e26-4868-af57-a79d501c72a9\" (UID: \"cc9a4b17-4e26-4868-af57-a79d501c72a9\") " Dec 09 09:30:25 crc kubenswrapper[4786]: I1209 09:30:25.263403 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5l9z\" (UniqueName: \"kubernetes.io/projected/cc9a4b17-4e26-4868-af57-a79d501c72a9-kube-api-access-h5l9z\") pod \"cc9a4b17-4e26-4868-af57-a79d501c72a9\" (UID: \"cc9a4b17-4e26-4868-af57-a79d501c72a9\") " Dec 09 09:30:25 crc kubenswrapper[4786]: I1209 09:30:25.263519 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc9a4b17-4e26-4868-af57-a79d501c72a9-catalog-content\") pod \"cc9a4b17-4e26-4868-af57-a79d501c72a9\" (UID: \"cc9a4b17-4e26-4868-af57-a79d501c72a9\") " Dec 09 09:30:25 crc kubenswrapper[4786]: I1209 09:30:25.264276 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc9a4b17-4e26-4868-af57-a79d501c72a9-utilities" (OuterVolumeSpecName: "utilities") pod 
"cc9a4b17-4e26-4868-af57-a79d501c72a9" (UID: "cc9a4b17-4e26-4868-af57-a79d501c72a9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:30:25 crc kubenswrapper[4786]: I1209 09:30:25.284721 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc9a4b17-4e26-4868-af57-a79d501c72a9-kube-api-access-h5l9z" (OuterVolumeSpecName: "kube-api-access-h5l9z") pod "cc9a4b17-4e26-4868-af57-a79d501c72a9" (UID: "cc9a4b17-4e26-4868-af57-a79d501c72a9"). InnerVolumeSpecName "kube-api-access-h5l9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:30:25 crc kubenswrapper[4786]: I1209 09:30:25.290034 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc9a4b17-4e26-4868-af57-a79d501c72a9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc9a4b17-4e26-4868-af57-a79d501c72a9" (UID: "cc9a4b17-4e26-4868-af57-a79d501c72a9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:30:25 crc kubenswrapper[4786]: I1209 09:30:25.365567 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5l9z\" (UniqueName: \"kubernetes.io/projected/cc9a4b17-4e26-4868-af57-a79d501c72a9-kube-api-access-h5l9z\") on node \"crc\" DevicePath \"\"" Dec 09 09:30:25 crc kubenswrapper[4786]: I1209 09:30:25.365601 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc9a4b17-4e26-4868-af57-a79d501c72a9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 09:30:25 crc kubenswrapper[4786]: I1209 09:30:25.365612 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc9a4b17-4e26-4868-af57-a79d501c72a9-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 09:30:25 crc kubenswrapper[4786]: I1209 09:30:25.617476 4786 generic.go:334] "Generic (PLEG): container finished" podID="cc9a4b17-4e26-4868-af57-a79d501c72a9" containerID="b3ddf605debdb86b882967ad8663537feac5646eb2c764e538d4671bb97d11c1" exitCode=0 Dec 09 09:30:25 crc kubenswrapper[4786]: I1209 09:30:25.617531 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qh5q4" event={"ID":"cc9a4b17-4e26-4868-af57-a79d501c72a9","Type":"ContainerDied","Data":"b3ddf605debdb86b882967ad8663537feac5646eb2c764e538d4671bb97d11c1"} Dec 09 09:30:25 crc kubenswrapper[4786]: I1209 09:30:25.617567 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qh5q4" event={"ID":"cc9a4b17-4e26-4868-af57-a79d501c72a9","Type":"ContainerDied","Data":"2213f8482a5f64670d1532e3a72a0e4995543fbf8a21a4f773574ba152cbad74"} Dec 09 09:30:25 crc kubenswrapper[4786]: I1209 09:30:25.617587 4786 scope.go:117] "RemoveContainer" containerID="b3ddf605debdb86b882967ad8663537feac5646eb2c764e538d4671bb97d11c1" Dec 09 09:30:25 crc kubenswrapper[4786]: I1209 
09:30:25.617758 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qh5q4" Dec 09 09:30:25 crc kubenswrapper[4786]: I1209 09:30:25.660354 4786 scope.go:117] "RemoveContainer" containerID="dd38e284313be9252f44bb7281b2cf48a782d989b2d8332f142616cdae80190f" Dec 09 09:30:25 crc kubenswrapper[4786]: I1209 09:30:25.673439 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qh5q4"] Dec 09 09:30:25 crc kubenswrapper[4786]: I1209 09:30:25.684216 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qh5q4"] Dec 09 09:30:25 crc kubenswrapper[4786]: I1209 09:30:25.694873 4786 scope.go:117] "RemoveContainer" containerID="2939f47330fa808c6c27af9fa4895426a467b54d5a28b35fb195ae4855849da6" Dec 09 09:30:25 crc kubenswrapper[4786]: I1209 09:30:25.735738 4786 scope.go:117] "RemoveContainer" containerID="b3ddf605debdb86b882967ad8663537feac5646eb2c764e538d4671bb97d11c1" Dec 09 09:30:25 crc kubenswrapper[4786]: E1209 09:30:25.736447 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3ddf605debdb86b882967ad8663537feac5646eb2c764e538d4671bb97d11c1\": container with ID starting with b3ddf605debdb86b882967ad8663537feac5646eb2c764e538d4671bb97d11c1 not found: ID does not exist" containerID="b3ddf605debdb86b882967ad8663537feac5646eb2c764e538d4671bb97d11c1" Dec 09 09:30:25 crc kubenswrapper[4786]: I1209 09:30:25.736482 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3ddf605debdb86b882967ad8663537feac5646eb2c764e538d4671bb97d11c1"} err="failed to get container status \"b3ddf605debdb86b882967ad8663537feac5646eb2c764e538d4671bb97d11c1\": rpc error: code = NotFound desc = could not find container \"b3ddf605debdb86b882967ad8663537feac5646eb2c764e538d4671bb97d11c1\": container with ID starting with 
b3ddf605debdb86b882967ad8663537feac5646eb2c764e538d4671bb97d11c1 not found: ID does not exist" Dec 09 09:30:25 crc kubenswrapper[4786]: I1209 09:30:25.736506 4786 scope.go:117] "RemoveContainer" containerID="dd38e284313be9252f44bb7281b2cf48a782d989b2d8332f142616cdae80190f" Dec 09 09:30:25 crc kubenswrapper[4786]: E1209 09:30:25.736962 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd38e284313be9252f44bb7281b2cf48a782d989b2d8332f142616cdae80190f\": container with ID starting with dd38e284313be9252f44bb7281b2cf48a782d989b2d8332f142616cdae80190f not found: ID does not exist" containerID="dd38e284313be9252f44bb7281b2cf48a782d989b2d8332f142616cdae80190f" Dec 09 09:30:25 crc kubenswrapper[4786]: I1209 09:30:25.736990 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd38e284313be9252f44bb7281b2cf48a782d989b2d8332f142616cdae80190f"} err="failed to get container status \"dd38e284313be9252f44bb7281b2cf48a782d989b2d8332f142616cdae80190f\": rpc error: code = NotFound desc = could not find container \"dd38e284313be9252f44bb7281b2cf48a782d989b2d8332f142616cdae80190f\": container with ID starting with dd38e284313be9252f44bb7281b2cf48a782d989b2d8332f142616cdae80190f not found: ID does not exist" Dec 09 09:30:25 crc kubenswrapper[4786]: I1209 09:30:25.737006 4786 scope.go:117] "RemoveContainer" containerID="2939f47330fa808c6c27af9fa4895426a467b54d5a28b35fb195ae4855849da6" Dec 09 09:30:25 crc kubenswrapper[4786]: E1209 09:30:25.737206 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2939f47330fa808c6c27af9fa4895426a467b54d5a28b35fb195ae4855849da6\": container with ID starting with 2939f47330fa808c6c27af9fa4895426a467b54d5a28b35fb195ae4855849da6 not found: ID does not exist" containerID="2939f47330fa808c6c27af9fa4895426a467b54d5a28b35fb195ae4855849da6" Dec 09 09:30:25 crc 
kubenswrapper[4786]: I1209 09:30:25.737230 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2939f47330fa808c6c27af9fa4895426a467b54d5a28b35fb195ae4855849da6"} err="failed to get container status \"2939f47330fa808c6c27af9fa4895426a467b54d5a28b35fb195ae4855849da6\": rpc error: code = NotFound desc = could not find container \"2939f47330fa808c6c27af9fa4895426a467b54d5a28b35fb195ae4855849da6\": container with ID starting with 2939f47330fa808c6c27af9fa4895426a467b54d5a28b35fb195ae4855849da6 not found: ID does not exist" Dec 09 09:30:27 crc kubenswrapper[4786]: I1209 09:30:27.199166 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc9a4b17-4e26-4868-af57-a79d501c72a9" path="/var/lib/kubelet/pods/cc9a4b17-4e26-4868-af57-a79d501c72a9/volumes" Dec 09 09:30:42 crc kubenswrapper[4786]: I1209 09:30:42.309806 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5gct7"] Dec 09 09:30:42 crc kubenswrapper[4786]: E1209 09:30:42.310821 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc9a4b17-4e26-4868-af57-a79d501c72a9" containerName="registry-server" Dec 09 09:30:42 crc kubenswrapper[4786]: I1209 09:30:42.310838 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc9a4b17-4e26-4868-af57-a79d501c72a9" containerName="registry-server" Dec 09 09:30:42 crc kubenswrapper[4786]: E1209 09:30:42.310861 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc9a4b17-4e26-4868-af57-a79d501c72a9" containerName="extract-utilities" Dec 09 09:30:42 crc kubenswrapper[4786]: I1209 09:30:42.310869 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc9a4b17-4e26-4868-af57-a79d501c72a9" containerName="extract-utilities" Dec 09 09:30:42 crc kubenswrapper[4786]: E1209 09:30:42.310901 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc9a4b17-4e26-4868-af57-a79d501c72a9" containerName="extract-content" Dec 
09 09:30:42 crc kubenswrapper[4786]: I1209 09:30:42.310910 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc9a4b17-4e26-4868-af57-a79d501c72a9" containerName="extract-content" Dec 09 09:30:42 crc kubenswrapper[4786]: I1209 09:30:42.311191 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc9a4b17-4e26-4868-af57-a79d501c72a9" containerName="registry-server" Dec 09 09:30:42 crc kubenswrapper[4786]: I1209 09:30:42.313063 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5gct7" Dec 09 09:30:42 crc kubenswrapper[4786]: I1209 09:30:42.327394 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5gct7"] Dec 09 09:30:42 crc kubenswrapper[4786]: I1209 09:30:42.407603 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2fe83fb-45d7-471c-a4d1-f723dd6f2bb9-catalog-content\") pod \"community-operators-5gct7\" (UID: \"b2fe83fb-45d7-471c-a4d1-f723dd6f2bb9\") " pod="openshift-marketplace/community-operators-5gct7" Dec 09 09:30:42 crc kubenswrapper[4786]: I1209 09:30:42.408820 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2fe83fb-45d7-471c-a4d1-f723dd6f2bb9-utilities\") pod \"community-operators-5gct7\" (UID: \"b2fe83fb-45d7-471c-a4d1-f723dd6f2bb9\") " pod="openshift-marketplace/community-operators-5gct7" Dec 09 09:30:42 crc kubenswrapper[4786]: I1209 09:30:42.408901 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbx2n\" (UniqueName: \"kubernetes.io/projected/b2fe83fb-45d7-471c-a4d1-f723dd6f2bb9-kube-api-access-kbx2n\") pod \"community-operators-5gct7\" (UID: \"b2fe83fb-45d7-471c-a4d1-f723dd6f2bb9\") " 
pod="openshift-marketplace/community-operators-5gct7" Dec 09 09:30:42 crc kubenswrapper[4786]: I1209 09:30:42.511353 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2fe83fb-45d7-471c-a4d1-f723dd6f2bb9-utilities\") pod \"community-operators-5gct7\" (UID: \"b2fe83fb-45d7-471c-a4d1-f723dd6f2bb9\") " pod="openshift-marketplace/community-operators-5gct7" Dec 09 09:30:42 crc kubenswrapper[4786]: I1209 09:30:42.511483 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbx2n\" (UniqueName: \"kubernetes.io/projected/b2fe83fb-45d7-471c-a4d1-f723dd6f2bb9-kube-api-access-kbx2n\") pod \"community-operators-5gct7\" (UID: \"b2fe83fb-45d7-471c-a4d1-f723dd6f2bb9\") " pod="openshift-marketplace/community-operators-5gct7" Dec 09 09:30:42 crc kubenswrapper[4786]: I1209 09:30:42.512058 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2fe83fb-45d7-471c-a4d1-f723dd6f2bb9-catalog-content\") pod \"community-operators-5gct7\" (UID: \"b2fe83fb-45d7-471c-a4d1-f723dd6f2bb9\") " pod="openshift-marketplace/community-operators-5gct7" Dec 09 09:30:42 crc kubenswrapper[4786]: I1209 09:30:42.512171 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2fe83fb-45d7-471c-a4d1-f723dd6f2bb9-utilities\") pod \"community-operators-5gct7\" (UID: \"b2fe83fb-45d7-471c-a4d1-f723dd6f2bb9\") " pod="openshift-marketplace/community-operators-5gct7" Dec 09 09:30:42 crc kubenswrapper[4786]: I1209 09:30:42.512638 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2fe83fb-45d7-471c-a4d1-f723dd6f2bb9-catalog-content\") pod \"community-operators-5gct7\" (UID: \"b2fe83fb-45d7-471c-a4d1-f723dd6f2bb9\") " 
pod="openshift-marketplace/community-operators-5gct7" Dec 09 09:30:42 crc kubenswrapper[4786]: I1209 09:30:42.536943 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbx2n\" (UniqueName: \"kubernetes.io/projected/b2fe83fb-45d7-471c-a4d1-f723dd6f2bb9-kube-api-access-kbx2n\") pod \"community-operators-5gct7\" (UID: \"b2fe83fb-45d7-471c-a4d1-f723dd6f2bb9\") " pod="openshift-marketplace/community-operators-5gct7" Dec 09 09:30:42 crc kubenswrapper[4786]: I1209 09:30:42.644888 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5gct7" Dec 09 09:30:43 crc kubenswrapper[4786]: I1209 09:30:43.333176 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5gct7"] Dec 09 09:30:43 crc kubenswrapper[4786]: I1209 09:30:43.800980 4786 generic.go:334] "Generic (PLEG): container finished" podID="b2fe83fb-45d7-471c-a4d1-f723dd6f2bb9" containerID="3ed9d0ae9ca157aeaa4f729e5028ecca1b1322a909671caa6575a30e15234026" exitCode=0 Dec 09 09:30:43 crc kubenswrapper[4786]: I1209 09:30:43.801042 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5gct7" event={"ID":"b2fe83fb-45d7-471c-a4d1-f723dd6f2bb9","Type":"ContainerDied","Data":"3ed9d0ae9ca157aeaa4f729e5028ecca1b1322a909671caa6575a30e15234026"} Dec 09 09:30:43 crc kubenswrapper[4786]: I1209 09:30:43.801341 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5gct7" event={"ID":"b2fe83fb-45d7-471c-a4d1-f723dd6f2bb9","Type":"ContainerStarted","Data":"e0b091a92b88d0f6a572ff5916e694b565ab16edb754e5c7d66264c74e83ea33"} Dec 09 09:30:48 crc kubenswrapper[4786]: I1209 09:30:48.857471 4786 generic.go:334] "Generic (PLEG): container finished" podID="b2fe83fb-45d7-471c-a4d1-f723dd6f2bb9" containerID="fc299756099a83c0ffcfd5bbd6b85531a4692c773e7e1a962aaa2357e242135a" exitCode=0 Dec 09 09:30:48 crc 
kubenswrapper[4786]: I1209 09:30:48.857579 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5gct7" event={"ID":"b2fe83fb-45d7-471c-a4d1-f723dd6f2bb9","Type":"ContainerDied","Data":"fc299756099a83c0ffcfd5bbd6b85531a4692c773e7e1a962aaa2357e242135a"} Dec 09 09:30:49 crc kubenswrapper[4786]: I1209 09:30:49.870775 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5gct7" event={"ID":"b2fe83fb-45d7-471c-a4d1-f723dd6f2bb9","Type":"ContainerStarted","Data":"5bf1ac3226718a75b28942ba4c4b464294d622e4df8391d8f14639a0fb69cff3"} Dec 09 09:30:49 crc kubenswrapper[4786]: I1209 09:30:49.894385 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5gct7" podStartSLOduration=2.436816591 podStartE2EDuration="7.894362319s" podCreationTimestamp="2025-12-09 09:30:42 +0000 UTC" firstStartedPulling="2025-12-09 09:30:43.804240529 +0000 UTC m=+2809.687861765" lastFinishedPulling="2025-12-09 09:30:49.261786257 +0000 UTC m=+2815.145407493" observedRunningTime="2025-12-09 09:30:49.887668585 +0000 UTC m=+2815.771289811" watchObservedRunningTime="2025-12-09 09:30:49.894362319 +0000 UTC m=+2815.777983545" Dec 09 09:30:52 crc kubenswrapper[4786]: I1209 09:30:52.651235 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5gct7" Dec 09 09:30:52 crc kubenswrapper[4786]: I1209 09:30:52.653158 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5gct7" Dec 09 09:30:52 crc kubenswrapper[4786]: I1209 09:30:52.743351 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5gct7" Dec 09 09:31:02 crc kubenswrapper[4786]: I1209 09:31:02.704435 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/community-operators-5gct7" Dec 09 09:31:02 crc kubenswrapper[4786]: I1209 09:31:02.841950 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5gct7"] Dec 09 09:31:02 crc kubenswrapper[4786]: I1209 09:31:02.912339 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ftk6v"] Dec 09 09:31:02 crc kubenswrapper[4786]: I1209 09:31:02.912603 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ftk6v" podUID="40c9dcfe-3d88-4560-a91e-639f6e0b661e" containerName="registry-server" containerID="cri-o://5a6b30ecb5fde5a7d5aca04b3965899680a669ccafb10a242a44bfb8dddf07c1" gracePeriod=2 Dec 09 09:31:03 crc kubenswrapper[4786]: I1209 09:31:03.067018 4786 generic.go:334] "Generic (PLEG): container finished" podID="40c9dcfe-3d88-4560-a91e-639f6e0b661e" containerID="5a6b30ecb5fde5a7d5aca04b3965899680a669ccafb10a242a44bfb8dddf07c1" exitCode=0 Dec 09 09:31:03 crc kubenswrapper[4786]: I1209 09:31:03.067624 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftk6v" event={"ID":"40c9dcfe-3d88-4560-a91e-639f6e0b661e","Type":"ContainerDied","Data":"5a6b30ecb5fde5a7d5aca04b3965899680a669ccafb10a242a44bfb8dddf07c1"} Dec 09 09:31:03 crc kubenswrapper[4786]: I1209 09:31:03.421853 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ftk6v" Dec 09 09:31:03 crc kubenswrapper[4786]: I1209 09:31:03.546320 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40c9dcfe-3d88-4560-a91e-639f6e0b661e-catalog-content\") pod \"40c9dcfe-3d88-4560-a91e-639f6e0b661e\" (UID: \"40c9dcfe-3d88-4560-a91e-639f6e0b661e\") " Dec 09 09:31:03 crc kubenswrapper[4786]: I1209 09:31:03.546474 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cfft\" (UniqueName: \"kubernetes.io/projected/40c9dcfe-3d88-4560-a91e-639f6e0b661e-kube-api-access-4cfft\") pod \"40c9dcfe-3d88-4560-a91e-639f6e0b661e\" (UID: \"40c9dcfe-3d88-4560-a91e-639f6e0b661e\") " Dec 09 09:31:03 crc kubenswrapper[4786]: I1209 09:31:03.546522 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40c9dcfe-3d88-4560-a91e-639f6e0b661e-utilities\") pod \"40c9dcfe-3d88-4560-a91e-639f6e0b661e\" (UID: \"40c9dcfe-3d88-4560-a91e-639f6e0b661e\") " Dec 09 09:31:03 crc kubenswrapper[4786]: I1209 09:31:03.550273 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40c9dcfe-3d88-4560-a91e-639f6e0b661e-utilities" (OuterVolumeSpecName: "utilities") pod "40c9dcfe-3d88-4560-a91e-639f6e0b661e" (UID: "40c9dcfe-3d88-4560-a91e-639f6e0b661e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:31:03 crc kubenswrapper[4786]: I1209 09:31:03.572712 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40c9dcfe-3d88-4560-a91e-639f6e0b661e-kube-api-access-4cfft" (OuterVolumeSpecName: "kube-api-access-4cfft") pod "40c9dcfe-3d88-4560-a91e-639f6e0b661e" (UID: "40c9dcfe-3d88-4560-a91e-639f6e0b661e"). InnerVolumeSpecName "kube-api-access-4cfft". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:31:03 crc kubenswrapper[4786]: I1209 09:31:03.650360 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cfft\" (UniqueName: \"kubernetes.io/projected/40c9dcfe-3d88-4560-a91e-639f6e0b661e-kube-api-access-4cfft\") on node \"crc\" DevicePath \"\"" Dec 09 09:31:03 crc kubenswrapper[4786]: I1209 09:31:03.650714 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40c9dcfe-3d88-4560-a91e-639f6e0b661e-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 09:31:03 crc kubenswrapper[4786]: I1209 09:31:03.705341 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40c9dcfe-3d88-4560-a91e-639f6e0b661e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "40c9dcfe-3d88-4560-a91e-639f6e0b661e" (UID: "40c9dcfe-3d88-4560-a91e-639f6e0b661e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:31:03 crc kubenswrapper[4786]: I1209 09:31:03.752969 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40c9dcfe-3d88-4560-a91e-639f6e0b661e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 09:31:04 crc kubenswrapper[4786]: I1209 09:31:04.082763 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ftk6v" event={"ID":"40c9dcfe-3d88-4560-a91e-639f6e0b661e","Type":"ContainerDied","Data":"bac33852918e7efe2822ce72e3bc931372bc2d889d96371874ae9d3af921b8e4"} Dec 09 09:31:04 crc kubenswrapper[4786]: I1209 09:31:04.082806 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ftk6v" Dec 09 09:31:04 crc kubenswrapper[4786]: I1209 09:31:04.082818 4786 scope.go:117] "RemoveContainer" containerID="5a6b30ecb5fde5a7d5aca04b3965899680a669ccafb10a242a44bfb8dddf07c1" Dec 09 09:31:04 crc kubenswrapper[4786]: I1209 09:31:04.120637 4786 scope.go:117] "RemoveContainer" containerID="49f7d371ce9c7448bfde1ba88cce63231e9617b48ab9770404880b7882eaf1c5" Dec 09 09:31:04 crc kubenswrapper[4786]: I1209 09:31:04.126972 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ftk6v"] Dec 09 09:31:04 crc kubenswrapper[4786]: I1209 09:31:04.137936 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ftk6v"] Dec 09 09:31:04 crc kubenswrapper[4786]: I1209 09:31:04.152288 4786 scope.go:117] "RemoveContainer" containerID="1cb9b47d638c5a16b5f32823174061b26d726c6a115a9c9e13d60deca6b8bdf3" Dec 09 09:31:04 crc kubenswrapper[4786]: I1209 09:31:04.758761 4786 scope.go:117] "RemoveContainer" containerID="5fb10c980b3f78c7c9d6d2b633d6b3796cc52a3943c5b0fd00aed9b7b4c0cec3" Dec 09 09:31:05 crc kubenswrapper[4786]: I1209 09:31:05.203091 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40c9dcfe-3d88-4560-a91e-639f6e0b661e" path="/var/lib/kubelet/pods/40c9dcfe-3d88-4560-a91e-639f6e0b661e/volumes" Dec 09 09:31:24 crc kubenswrapper[4786]: I1209 09:31:24.988994 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 09:31:24 crc kubenswrapper[4786]: I1209 09:31:24.989667 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 09:31:54 crc kubenswrapper[4786]: I1209 09:31:54.989176 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 09:31:54 crc kubenswrapper[4786]: I1209 09:31:54.989849 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 09:32:02 crc kubenswrapper[4786]: I1209 09:32:02.709195 4786 generic.go:334] "Generic (PLEG): container finished" podID="f1d1af6a-883d-4d29-8d4d-b477e99c2df5" containerID="94cb8d0ece9dedec4619cd972a4d771813deeed8f95fc1b53ac0c4b398209f89" exitCode=0 Dec 09 09:32:02 crc kubenswrapper[4786]: I1209 09:32:02.709289 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lnqtd" event={"ID":"f1d1af6a-883d-4d29-8d4d-b477e99c2df5","Type":"ContainerDied","Data":"94cb8d0ece9dedec4619cd972a4d771813deeed8f95fc1b53ac0c4b398209f89"} Dec 09 09:32:04 crc kubenswrapper[4786]: I1209 09:32:04.196154 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lnqtd" Dec 09 09:32:04 crc kubenswrapper[4786]: I1209 09:32:04.261639 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f1d1af6a-883d-4d29-8d4d-b477e99c2df5-nova-migration-ssh-key-0\") pod \"f1d1af6a-883d-4d29-8d4d-b477e99c2df5\" (UID: \"f1d1af6a-883d-4d29-8d4d-b477e99c2df5\") " Dec 09 09:32:04 crc kubenswrapper[4786]: I1209 09:32:04.261889 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1d1af6a-883d-4d29-8d4d-b477e99c2df5-inventory\") pod \"f1d1af6a-883d-4d29-8d4d-b477e99c2df5\" (UID: \"f1d1af6a-883d-4d29-8d4d-b477e99c2df5\") " Dec 09 09:32:04 crc kubenswrapper[4786]: I1209 09:32:04.262001 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f1d1af6a-883d-4d29-8d4d-b477e99c2df5-ssh-key\") pod \"f1d1af6a-883d-4d29-8d4d-b477e99c2df5\" (UID: \"f1d1af6a-883d-4d29-8d4d-b477e99c2df5\") " Dec 09 09:32:04 crc kubenswrapper[4786]: I1209 09:32:04.271009 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9j8gm\" (UniqueName: \"kubernetes.io/projected/f1d1af6a-883d-4d29-8d4d-b477e99c2df5-kube-api-access-9j8gm\") pod \"f1d1af6a-883d-4d29-8d4d-b477e99c2df5\" (UID: \"f1d1af6a-883d-4d29-8d4d-b477e99c2df5\") " Dec 09 09:32:04 crc kubenswrapper[4786]: I1209 09:32:04.280163 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1d1af6a-883d-4d29-8d4d-b477e99c2df5-kube-api-access-9j8gm" (OuterVolumeSpecName: "kube-api-access-9j8gm") pod "f1d1af6a-883d-4d29-8d4d-b477e99c2df5" (UID: "f1d1af6a-883d-4d29-8d4d-b477e99c2df5"). InnerVolumeSpecName "kube-api-access-9j8gm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:32:04 crc kubenswrapper[4786]: I1209 09:32:04.306610 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1d1af6a-883d-4d29-8d4d-b477e99c2df5-inventory" (OuterVolumeSpecName: "inventory") pod "f1d1af6a-883d-4d29-8d4d-b477e99c2df5" (UID: "f1d1af6a-883d-4d29-8d4d-b477e99c2df5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:32:04 crc kubenswrapper[4786]: I1209 09:32:04.308513 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1d1af6a-883d-4d29-8d4d-b477e99c2df5-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f1d1af6a-883d-4d29-8d4d-b477e99c2df5" (UID: "f1d1af6a-883d-4d29-8d4d-b477e99c2df5"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:32:04 crc kubenswrapper[4786]: I1209 09:32:04.308967 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1d1af6a-883d-4d29-8d4d-b477e99c2df5-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "f1d1af6a-883d-4d29-8d4d-b477e99c2df5" (UID: "f1d1af6a-883d-4d29-8d4d-b477e99c2df5"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:32:04 crc kubenswrapper[4786]: I1209 09:32:04.373979 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1d1af6a-883d-4d29-8d4d-b477e99c2df5-nova-combined-ca-bundle\") pod \"f1d1af6a-883d-4d29-8d4d-b477e99c2df5\" (UID: \"f1d1af6a-883d-4d29-8d4d-b477e99c2df5\") " Dec 09 09:32:04 crc kubenswrapper[4786]: I1209 09:32:04.374078 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f1d1af6a-883d-4d29-8d4d-b477e99c2df5-nova-extra-config-0\") pod \"f1d1af6a-883d-4d29-8d4d-b477e99c2df5\" (UID: \"f1d1af6a-883d-4d29-8d4d-b477e99c2df5\") " Dec 09 09:32:04 crc kubenswrapper[4786]: I1209 09:32:04.374132 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f1d1af6a-883d-4d29-8d4d-b477e99c2df5-nova-cell1-compute-config-0\") pod \"f1d1af6a-883d-4d29-8d4d-b477e99c2df5\" (UID: \"f1d1af6a-883d-4d29-8d4d-b477e99c2df5\") " Dec 09 09:32:04 crc kubenswrapper[4786]: I1209 09:32:04.374152 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f1d1af6a-883d-4d29-8d4d-b477e99c2df5-nova-migration-ssh-key-1\") pod \"f1d1af6a-883d-4d29-8d4d-b477e99c2df5\" (UID: \"f1d1af6a-883d-4d29-8d4d-b477e99c2df5\") " Dec 09 09:32:04 crc kubenswrapper[4786]: I1209 09:32:04.374193 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f1d1af6a-883d-4d29-8d4d-b477e99c2df5-nova-cell1-compute-config-1\") pod \"f1d1af6a-883d-4d29-8d4d-b477e99c2df5\" (UID: \"f1d1af6a-883d-4d29-8d4d-b477e99c2df5\") " Dec 09 09:32:04 crc kubenswrapper[4786]: I1209 09:32:04.374549 4786 
reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f1d1af6a-883d-4d29-8d4d-b477e99c2df5-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Dec 09 09:32:04 crc kubenswrapper[4786]: I1209 09:32:04.374570 4786 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f1d1af6a-883d-4d29-8d4d-b477e99c2df5-inventory\") on node \"crc\" DevicePath \"\"" Dec 09 09:32:04 crc kubenswrapper[4786]: I1209 09:32:04.374579 4786 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f1d1af6a-883d-4d29-8d4d-b477e99c2df5-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 09 09:32:04 crc kubenswrapper[4786]: I1209 09:32:04.374589 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9j8gm\" (UniqueName: \"kubernetes.io/projected/f1d1af6a-883d-4d29-8d4d-b477e99c2df5-kube-api-access-9j8gm\") on node \"crc\" DevicePath \"\"" Dec 09 09:32:04 crc kubenswrapper[4786]: I1209 09:32:04.380222 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1d1af6a-883d-4d29-8d4d-b477e99c2df5-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "f1d1af6a-883d-4d29-8d4d-b477e99c2df5" (UID: "f1d1af6a-883d-4d29-8d4d-b477e99c2df5"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:32:04 crc kubenswrapper[4786]: I1209 09:32:04.410614 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1d1af6a-883d-4d29-8d4d-b477e99c2df5-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "f1d1af6a-883d-4d29-8d4d-b477e99c2df5" (UID: "f1d1af6a-883d-4d29-8d4d-b477e99c2df5"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:32:04 crc kubenswrapper[4786]: I1209 09:32:04.419284 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1d1af6a-883d-4d29-8d4d-b477e99c2df5-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "f1d1af6a-883d-4d29-8d4d-b477e99c2df5" (UID: "f1d1af6a-883d-4d29-8d4d-b477e99c2df5"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:32:04 crc kubenswrapper[4786]: I1209 09:32:04.421318 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1d1af6a-883d-4d29-8d4d-b477e99c2df5-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "f1d1af6a-883d-4d29-8d4d-b477e99c2df5" (UID: "f1d1af6a-883d-4d29-8d4d-b477e99c2df5"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:32:04 crc kubenswrapper[4786]: I1209 09:32:04.422573 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1d1af6a-883d-4d29-8d4d-b477e99c2df5-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "f1d1af6a-883d-4d29-8d4d-b477e99c2df5" (UID: "f1d1af6a-883d-4d29-8d4d-b477e99c2df5"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:32:04 crc kubenswrapper[4786]: I1209 09:32:04.479614 4786 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f1d1af6a-883d-4d29-8d4d-b477e99c2df5-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Dec 09 09:32:04 crc kubenswrapper[4786]: I1209 09:32:04.479649 4786 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f1d1af6a-883d-4d29-8d4d-b477e99c2df5-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 09:32:04 crc kubenswrapper[4786]: I1209 09:32:04.479667 4786 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f1d1af6a-883d-4d29-8d4d-b477e99c2df5-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Dec 09 09:32:04 crc kubenswrapper[4786]: I1209 09:32:04.479676 4786 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f1d1af6a-883d-4d29-8d4d-b477e99c2df5-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Dec 09 09:32:04 crc kubenswrapper[4786]: I1209 09:32:04.479685 4786 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f1d1af6a-883d-4d29-8d4d-b477e99c2df5-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Dec 09 09:32:04 crc kubenswrapper[4786]: I1209 09:32:04.732775 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lnqtd" event={"ID":"f1d1af6a-883d-4d29-8d4d-b477e99c2df5","Type":"ContainerDied","Data":"7a23842329a5cbc7c0dbca52f648dcd38e62c187527238cd96f08a9fd42cb762"} Dec 09 09:32:04 crc kubenswrapper[4786]: I1209 09:32:04.732817 4786 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="7a23842329a5cbc7c0dbca52f648dcd38e62c187527238cd96f08a9fd42cb762" Dec 09 09:32:04 crc kubenswrapper[4786]: I1209 09:32:04.732841 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-lnqtd" Dec 09 09:32:04 crc kubenswrapper[4786]: I1209 09:32:04.879451 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s9qhv"] Dec 09 09:32:04 crc kubenswrapper[4786]: E1209 09:32:04.879802 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40c9dcfe-3d88-4560-a91e-639f6e0b661e" containerName="extract-content" Dec 09 09:32:04 crc kubenswrapper[4786]: I1209 09:32:04.879820 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="40c9dcfe-3d88-4560-a91e-639f6e0b661e" containerName="extract-content" Dec 09 09:32:04 crc kubenswrapper[4786]: E1209 09:32:04.879843 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40c9dcfe-3d88-4560-a91e-639f6e0b661e" containerName="extract-utilities" Dec 09 09:32:04 crc kubenswrapper[4786]: I1209 09:32:04.879851 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="40c9dcfe-3d88-4560-a91e-639f6e0b661e" containerName="extract-utilities" Dec 09 09:32:04 crc kubenswrapper[4786]: E1209 09:32:04.879862 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40c9dcfe-3d88-4560-a91e-639f6e0b661e" containerName="registry-server" Dec 09 09:32:04 crc kubenswrapper[4786]: I1209 09:32:04.879867 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="40c9dcfe-3d88-4560-a91e-639f6e0b661e" containerName="registry-server" Dec 09 09:32:04 crc kubenswrapper[4786]: E1209 09:32:04.879885 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f1d1af6a-883d-4d29-8d4d-b477e99c2df5" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 09 09:32:04 crc kubenswrapper[4786]: I1209 09:32:04.879891 4786 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="f1d1af6a-883d-4d29-8d4d-b477e99c2df5" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 09 09:32:04 crc kubenswrapper[4786]: I1209 09:32:04.880090 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="f1d1af6a-883d-4d29-8d4d-b477e99c2df5" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 09 09:32:04 crc kubenswrapper[4786]: I1209 09:32:04.880111 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="40c9dcfe-3d88-4560-a91e-639f6e0b661e" containerName="registry-server" Dec 09 09:32:04 crc kubenswrapper[4786]: I1209 09:32:04.881813 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s9qhv" Dec 09 09:32:04 crc kubenswrapper[4786]: I1209 09:32:04.886103 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Dec 09 09:32:04 crc kubenswrapper[4786]: I1209 09:32:04.886278 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-247ss" Dec 09 09:32:04 crc kubenswrapper[4786]: I1209 09:32:04.886695 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 09 09:32:04 crc kubenswrapper[4786]: I1209 09:32:04.890270 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 09 09:32:04 crc kubenswrapper[4786]: I1209 09:32:04.890662 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s9qhv"] Dec 09 09:32:04 crc kubenswrapper[4786]: I1209 09:32:04.890973 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 09 09:32:04 crc kubenswrapper[4786]: I1209 09:32:04.989151 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"inventory\" (UniqueName: \"kubernetes.io/secret/b1717b33-b022-49ed-94fc-2160247ac3bd-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s9qhv\" (UID: \"b1717b33-b022-49ed-94fc-2160247ac3bd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s9qhv" Dec 09 09:32:04 crc kubenswrapper[4786]: I1209 09:32:04.989575 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1717b33-b022-49ed-94fc-2160247ac3bd-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s9qhv\" (UID: \"b1717b33-b022-49ed-94fc-2160247ac3bd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s9qhv" Dec 09 09:32:04 crc kubenswrapper[4786]: I1209 09:32:04.989876 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b1717b33-b022-49ed-94fc-2160247ac3bd-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s9qhv\" (UID: \"b1717b33-b022-49ed-94fc-2160247ac3bd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s9qhv" Dec 09 09:32:04 crc kubenswrapper[4786]: I1209 09:32:04.989974 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b1717b33-b022-49ed-94fc-2160247ac3bd-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s9qhv\" (UID: \"b1717b33-b022-49ed-94fc-2160247ac3bd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s9qhv" Dec 09 09:32:04 crc kubenswrapper[4786]: I1209 09:32:04.990065 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b1717b33-b022-49ed-94fc-2160247ac3bd-ceilometer-compute-config-data-1\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-s9qhv\" (UID: \"b1717b33-b022-49ed-94fc-2160247ac3bd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s9qhv" Dec 09 09:32:04 crc kubenswrapper[4786]: I1209 09:32:04.990246 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b1717b33-b022-49ed-94fc-2160247ac3bd-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s9qhv\" (UID: \"b1717b33-b022-49ed-94fc-2160247ac3bd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s9qhv" Dec 09 09:32:04 crc kubenswrapper[4786]: I1209 09:32:04.990316 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlfsx\" (UniqueName: \"kubernetes.io/projected/b1717b33-b022-49ed-94fc-2160247ac3bd-kube-api-access-mlfsx\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s9qhv\" (UID: \"b1717b33-b022-49ed-94fc-2160247ac3bd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s9qhv" Dec 09 09:32:05 crc kubenswrapper[4786]: I1209 09:32:05.093033 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1717b33-b022-49ed-94fc-2160247ac3bd-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s9qhv\" (UID: \"b1717b33-b022-49ed-94fc-2160247ac3bd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s9qhv" Dec 09 09:32:05 crc kubenswrapper[4786]: I1209 09:32:05.093153 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b1717b33-b022-49ed-94fc-2160247ac3bd-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s9qhv\" (UID: \"b1717b33-b022-49ed-94fc-2160247ac3bd\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s9qhv" Dec 09 09:32:05 crc kubenswrapper[4786]: I1209 09:32:05.093192 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b1717b33-b022-49ed-94fc-2160247ac3bd-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s9qhv\" (UID: \"b1717b33-b022-49ed-94fc-2160247ac3bd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s9qhv" Dec 09 09:32:05 crc kubenswrapper[4786]: I1209 09:32:05.093228 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b1717b33-b022-49ed-94fc-2160247ac3bd-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s9qhv\" (UID: \"b1717b33-b022-49ed-94fc-2160247ac3bd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s9qhv" Dec 09 09:32:05 crc kubenswrapper[4786]: I1209 09:32:05.093311 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b1717b33-b022-49ed-94fc-2160247ac3bd-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s9qhv\" (UID: \"b1717b33-b022-49ed-94fc-2160247ac3bd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s9qhv" Dec 09 09:32:05 crc kubenswrapper[4786]: I1209 09:32:05.093359 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlfsx\" (UniqueName: \"kubernetes.io/projected/b1717b33-b022-49ed-94fc-2160247ac3bd-kube-api-access-mlfsx\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s9qhv\" (UID: \"b1717b33-b022-49ed-94fc-2160247ac3bd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s9qhv" Dec 09 09:32:05 crc kubenswrapper[4786]: I1209 09:32:05.093450 4786 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1717b33-b022-49ed-94fc-2160247ac3bd-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s9qhv\" (UID: \"b1717b33-b022-49ed-94fc-2160247ac3bd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s9qhv" Dec 09 09:32:05 crc kubenswrapper[4786]: I1209 09:32:05.099812 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b1717b33-b022-49ed-94fc-2160247ac3bd-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s9qhv\" (UID: \"b1717b33-b022-49ed-94fc-2160247ac3bd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s9qhv" Dec 09 09:32:05 crc kubenswrapper[4786]: I1209 09:32:05.102076 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b1717b33-b022-49ed-94fc-2160247ac3bd-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s9qhv\" (UID: \"b1717b33-b022-49ed-94fc-2160247ac3bd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s9qhv" Dec 09 09:32:05 crc kubenswrapper[4786]: I1209 09:32:05.102218 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1717b33-b022-49ed-94fc-2160247ac3bd-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s9qhv\" (UID: \"b1717b33-b022-49ed-94fc-2160247ac3bd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s9qhv" Dec 09 09:32:05 crc kubenswrapper[4786]: I1209 09:32:05.102867 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b1717b33-b022-49ed-94fc-2160247ac3bd-ceilometer-compute-config-data-1\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-s9qhv\" (UID: \"b1717b33-b022-49ed-94fc-2160247ac3bd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s9qhv" Dec 09 09:32:05 crc kubenswrapper[4786]: I1209 09:32:05.103190 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1717b33-b022-49ed-94fc-2160247ac3bd-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s9qhv\" (UID: \"b1717b33-b022-49ed-94fc-2160247ac3bd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s9qhv" Dec 09 09:32:05 crc kubenswrapper[4786]: I1209 09:32:05.105442 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b1717b33-b022-49ed-94fc-2160247ac3bd-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s9qhv\" (UID: \"b1717b33-b022-49ed-94fc-2160247ac3bd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s9qhv" Dec 09 09:32:05 crc kubenswrapper[4786]: I1209 09:32:05.117700 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlfsx\" (UniqueName: \"kubernetes.io/projected/b1717b33-b022-49ed-94fc-2160247ac3bd-kube-api-access-mlfsx\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-s9qhv\" (UID: \"b1717b33-b022-49ed-94fc-2160247ac3bd\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s9qhv" Dec 09 09:32:05 crc kubenswrapper[4786]: I1209 09:32:05.214536 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s9qhv" Dec 09 09:32:05 crc kubenswrapper[4786]: I1209 09:32:05.784073 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s9qhv"] Dec 09 09:32:06 crc kubenswrapper[4786]: I1209 09:32:06.752663 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s9qhv" event={"ID":"b1717b33-b022-49ed-94fc-2160247ac3bd","Type":"ContainerStarted","Data":"c5821f6fa57e850cf9fd3a553b0c5e41d2f9129e018043dcb7862dff21c89e87"} Dec 09 09:32:06 crc kubenswrapper[4786]: I1209 09:32:06.753112 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s9qhv" event={"ID":"b1717b33-b022-49ed-94fc-2160247ac3bd","Type":"ContainerStarted","Data":"d15871a014d5ed72948a6c3fa53c7b1c2d4f89b09dbf7dbfe1145eb8a198f76c"} Dec 09 09:32:06 crc kubenswrapper[4786]: I1209 09:32:06.772680 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s9qhv" podStartSLOduration=2.367806285 podStartE2EDuration="2.772663082s" podCreationTimestamp="2025-12-09 09:32:04 +0000 UTC" firstStartedPulling="2025-12-09 09:32:05.791472637 +0000 UTC m=+2891.675093883" lastFinishedPulling="2025-12-09 09:32:06.196329444 +0000 UTC m=+2892.079950680" observedRunningTime="2025-12-09 09:32:06.767413052 +0000 UTC m=+2892.651034288" watchObservedRunningTime="2025-12-09 09:32:06.772663082 +0000 UTC m=+2892.656284298" Dec 09 09:32:24 crc kubenswrapper[4786]: I1209 09:32:24.989469 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 09:32:24 crc kubenswrapper[4786]: 
I1209 09:32:24.990058 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 09:32:24 crc kubenswrapper[4786]: I1209 09:32:24.990114 4786 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" Dec 09 09:32:24 crc kubenswrapper[4786]: I1209 09:32:24.991007 4786 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a1037d8b8d01c499e1b227ea9e68658b3ee1012303ade8df1a89fe76c43ee5ed"} pod="openshift-machine-config-operator/machine-config-daemon-86k5n" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 09:32:24 crc kubenswrapper[4786]: I1209 09:32:24.991072 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" containerID="cri-o://a1037d8b8d01c499e1b227ea9e68658b3ee1012303ade8df1a89fe76c43ee5ed" gracePeriod=600 Dec 09 09:32:25 crc kubenswrapper[4786]: I1209 09:32:25.952216 4786 generic.go:334] "Generic (PLEG): container finished" podID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerID="a1037d8b8d01c499e1b227ea9e68658b3ee1012303ade8df1a89fe76c43ee5ed" exitCode=0 Dec 09 09:32:25 crc kubenswrapper[4786]: I1209 09:32:25.952296 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" event={"ID":"60c502d4-7f9e-4d39-a197-fa70dc4a56d1","Type":"ContainerDied","Data":"a1037d8b8d01c499e1b227ea9e68658b3ee1012303ade8df1a89fe76c43ee5ed"} Dec 09 09:32:25 crc 
kubenswrapper[4786]: I1209 09:32:25.952802 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" event={"ID":"60c502d4-7f9e-4d39-a197-fa70dc4a56d1","Type":"ContainerStarted","Data":"0db36b24a99300a74d428e6b1f2b5b5dde9789633e3d8e42840c3f61dd5ac4ef"} Dec 09 09:32:25 crc kubenswrapper[4786]: I1209 09:32:25.952821 4786 scope.go:117] "RemoveContainer" containerID="10574053a93a4c7cf48dacebbc753cce1f320aa5c82046a5334cb5c07c4a4872" Dec 09 09:34:35 crc kubenswrapper[4786]: I1209 09:34:35.531135 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4p7dd"] Dec 09 09:34:35 crc kubenswrapper[4786]: I1209 09:34:35.534609 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4p7dd" Dec 09 09:34:35 crc kubenswrapper[4786]: I1209 09:34:35.558658 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4p7dd"] Dec 09 09:34:35 crc kubenswrapper[4786]: I1209 09:34:35.687658 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a675f046-ad88-4161-a978-f8774140dd10-utilities\") pod \"certified-operators-4p7dd\" (UID: \"a675f046-ad88-4161-a978-f8774140dd10\") " pod="openshift-marketplace/certified-operators-4p7dd" Dec 09 09:34:35 crc kubenswrapper[4786]: I1209 09:34:35.687971 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a675f046-ad88-4161-a978-f8774140dd10-catalog-content\") pod \"certified-operators-4p7dd\" (UID: \"a675f046-ad88-4161-a978-f8774140dd10\") " pod="openshift-marketplace/certified-operators-4p7dd" Dec 09 09:34:35 crc kubenswrapper[4786]: I1209 09:34:35.688096 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-fz9pk\" (UniqueName: \"kubernetes.io/projected/a675f046-ad88-4161-a978-f8774140dd10-kube-api-access-fz9pk\") pod \"certified-operators-4p7dd\" (UID: \"a675f046-ad88-4161-a978-f8774140dd10\") " pod="openshift-marketplace/certified-operators-4p7dd" Dec 09 09:34:35 crc kubenswrapper[4786]: I1209 09:34:35.793752 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a675f046-ad88-4161-a978-f8774140dd10-catalog-content\") pod \"certified-operators-4p7dd\" (UID: \"a675f046-ad88-4161-a978-f8774140dd10\") " pod="openshift-marketplace/certified-operators-4p7dd" Dec 09 09:34:35 crc kubenswrapper[4786]: I1209 09:34:35.794067 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz9pk\" (UniqueName: \"kubernetes.io/projected/a675f046-ad88-4161-a978-f8774140dd10-kube-api-access-fz9pk\") pod \"certified-operators-4p7dd\" (UID: \"a675f046-ad88-4161-a978-f8774140dd10\") " pod="openshift-marketplace/certified-operators-4p7dd" Dec 09 09:34:35 crc kubenswrapper[4786]: I1209 09:34:35.794528 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a675f046-ad88-4161-a978-f8774140dd10-utilities\") pod \"certified-operators-4p7dd\" (UID: \"a675f046-ad88-4161-a978-f8774140dd10\") " pod="openshift-marketplace/certified-operators-4p7dd" Dec 09 09:34:35 crc kubenswrapper[4786]: I1209 09:34:35.795118 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a675f046-ad88-4161-a978-f8774140dd10-catalog-content\") pod \"certified-operators-4p7dd\" (UID: \"a675f046-ad88-4161-a978-f8774140dd10\") " pod="openshift-marketplace/certified-operators-4p7dd" Dec 09 09:34:35 crc kubenswrapper[4786]: I1209 09:34:35.795117 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a675f046-ad88-4161-a978-f8774140dd10-utilities\") pod \"certified-operators-4p7dd\" (UID: \"a675f046-ad88-4161-a978-f8774140dd10\") " pod="openshift-marketplace/certified-operators-4p7dd" Dec 09 09:34:35 crc kubenswrapper[4786]: I1209 09:34:35.825008 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz9pk\" (UniqueName: \"kubernetes.io/projected/a675f046-ad88-4161-a978-f8774140dd10-kube-api-access-fz9pk\") pod \"certified-operators-4p7dd\" (UID: \"a675f046-ad88-4161-a978-f8774140dd10\") " pod="openshift-marketplace/certified-operators-4p7dd" Dec 09 09:34:35 crc kubenswrapper[4786]: I1209 09:34:35.857748 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4p7dd" Dec 09 09:34:36 crc kubenswrapper[4786]: I1209 09:34:36.376016 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4p7dd"] Dec 09 09:34:37 crc kubenswrapper[4786]: I1209 09:34:37.313233 4786 generic.go:334] "Generic (PLEG): container finished" podID="a675f046-ad88-4161-a978-f8774140dd10" containerID="9de2734196d3abfaea795353872b2bc039c585fe3670b0be3689f00eefe51084" exitCode=0 Dec 09 09:34:37 crc kubenswrapper[4786]: I1209 09:34:37.313274 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4p7dd" event={"ID":"a675f046-ad88-4161-a978-f8774140dd10","Type":"ContainerDied","Data":"9de2734196d3abfaea795353872b2bc039c585fe3670b0be3689f00eefe51084"} Dec 09 09:34:37 crc kubenswrapper[4786]: I1209 09:34:37.313560 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4p7dd" event={"ID":"a675f046-ad88-4161-a978-f8774140dd10","Type":"ContainerStarted","Data":"d42dddb3983abe08cdd7abef7a2d28ec18ea8af4349f244952c21c1d21799566"} Dec 09 09:34:37 crc kubenswrapper[4786]: I1209 09:34:37.315379 4786 provider.go:102] 
Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 09:34:38 crc kubenswrapper[4786]: I1209 09:34:38.326288 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4p7dd" event={"ID":"a675f046-ad88-4161-a978-f8774140dd10","Type":"ContainerStarted","Data":"e949af2322bcccd64acb785e3153224c281eeaec6e77c78e39a481df8ab7f21c"} Dec 09 09:34:39 crc kubenswrapper[4786]: I1209 09:34:39.337380 4786 generic.go:334] "Generic (PLEG): container finished" podID="a675f046-ad88-4161-a978-f8774140dd10" containerID="e949af2322bcccd64acb785e3153224c281eeaec6e77c78e39a481df8ab7f21c" exitCode=0 Dec 09 09:34:39 crc kubenswrapper[4786]: I1209 09:34:39.337447 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4p7dd" event={"ID":"a675f046-ad88-4161-a978-f8774140dd10","Type":"ContainerDied","Data":"e949af2322bcccd64acb785e3153224c281eeaec6e77c78e39a481df8ab7f21c"} Dec 09 09:34:40 crc kubenswrapper[4786]: I1209 09:34:40.369048 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4p7dd" event={"ID":"a675f046-ad88-4161-a978-f8774140dd10","Type":"ContainerStarted","Data":"e32e5b5ae9b5a34cf25fa02aadaf9ba66bdee41f1780e0e090de157e43a076e8"} Dec 09 09:34:40 crc kubenswrapper[4786]: I1209 09:34:40.392787 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4p7dd" podStartSLOduration=2.857815398 podStartE2EDuration="5.392771541s" podCreationTimestamp="2025-12-09 09:34:35 +0000 UTC" firstStartedPulling="2025-12-09 09:34:37.315144 +0000 UTC m=+3043.198765226" lastFinishedPulling="2025-12-09 09:34:39.850100143 +0000 UTC m=+3045.733721369" observedRunningTime="2025-12-09 09:34:40.386879917 +0000 UTC m=+3046.270501143" watchObservedRunningTime="2025-12-09 09:34:40.392771541 +0000 UTC m=+3046.276392767" Dec 09 09:34:42 crc kubenswrapper[4786]: I1209 09:34:42.393165 
4786 generic.go:334] "Generic (PLEG): container finished" podID="b1717b33-b022-49ed-94fc-2160247ac3bd" containerID="c5821f6fa57e850cf9fd3a553b0c5e41d2f9129e018043dcb7862dff21c89e87" exitCode=0 Dec 09 09:34:42 crc kubenswrapper[4786]: I1209 09:34:42.393287 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s9qhv" event={"ID":"b1717b33-b022-49ed-94fc-2160247ac3bd","Type":"ContainerDied","Data":"c5821f6fa57e850cf9fd3a553b0c5e41d2f9129e018043dcb7862dff21c89e87"} Dec 09 09:34:43 crc kubenswrapper[4786]: I1209 09:34:43.869129 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s9qhv" Dec 09 09:34:43 crc kubenswrapper[4786]: I1209 09:34:43.883962 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b1717b33-b022-49ed-94fc-2160247ac3bd-ceilometer-compute-config-data-0\") pod \"b1717b33-b022-49ed-94fc-2160247ac3bd\" (UID: \"b1717b33-b022-49ed-94fc-2160247ac3bd\") " Dec 09 09:34:43 crc kubenswrapper[4786]: I1209 09:34:43.884066 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1717b33-b022-49ed-94fc-2160247ac3bd-inventory\") pod \"b1717b33-b022-49ed-94fc-2160247ac3bd\" (UID: \"b1717b33-b022-49ed-94fc-2160247ac3bd\") " Dec 09 09:34:43 crc kubenswrapper[4786]: I1209 09:34:43.884192 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1717b33-b022-49ed-94fc-2160247ac3bd-telemetry-combined-ca-bundle\") pod \"b1717b33-b022-49ed-94fc-2160247ac3bd\" (UID: \"b1717b33-b022-49ed-94fc-2160247ac3bd\") " Dec 09 09:34:43 crc kubenswrapper[4786]: I1209 09:34:43.884259 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-mlfsx\" (UniqueName: \"kubernetes.io/projected/b1717b33-b022-49ed-94fc-2160247ac3bd-kube-api-access-mlfsx\") pod \"b1717b33-b022-49ed-94fc-2160247ac3bd\" (UID: \"b1717b33-b022-49ed-94fc-2160247ac3bd\") " Dec 09 09:34:43 crc kubenswrapper[4786]: I1209 09:34:43.884340 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b1717b33-b022-49ed-94fc-2160247ac3bd-ceilometer-compute-config-data-1\") pod \"b1717b33-b022-49ed-94fc-2160247ac3bd\" (UID: \"b1717b33-b022-49ed-94fc-2160247ac3bd\") " Dec 09 09:34:43 crc kubenswrapper[4786]: I1209 09:34:43.884392 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b1717b33-b022-49ed-94fc-2160247ac3bd-ceilometer-compute-config-data-2\") pod \"b1717b33-b022-49ed-94fc-2160247ac3bd\" (UID: \"b1717b33-b022-49ed-94fc-2160247ac3bd\") " Dec 09 09:34:43 crc kubenswrapper[4786]: I1209 09:34:43.884700 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b1717b33-b022-49ed-94fc-2160247ac3bd-ssh-key\") pod \"b1717b33-b022-49ed-94fc-2160247ac3bd\" (UID: \"b1717b33-b022-49ed-94fc-2160247ac3bd\") " Dec 09 09:34:43 crc kubenswrapper[4786]: I1209 09:34:43.891926 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1717b33-b022-49ed-94fc-2160247ac3bd-kube-api-access-mlfsx" (OuterVolumeSpecName: "kube-api-access-mlfsx") pod "b1717b33-b022-49ed-94fc-2160247ac3bd" (UID: "b1717b33-b022-49ed-94fc-2160247ac3bd"). InnerVolumeSpecName "kube-api-access-mlfsx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:34:43 crc kubenswrapper[4786]: I1209 09:34:43.895550 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1717b33-b022-49ed-94fc-2160247ac3bd-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "b1717b33-b022-49ed-94fc-2160247ac3bd" (UID: "b1717b33-b022-49ed-94fc-2160247ac3bd"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:34:43 crc kubenswrapper[4786]: I1209 09:34:43.986444 4786 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b1717b33-b022-49ed-94fc-2160247ac3bd-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 09:34:43 crc kubenswrapper[4786]: I1209 09:34:43.986475 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlfsx\" (UniqueName: \"kubernetes.io/projected/b1717b33-b022-49ed-94fc-2160247ac3bd-kube-api-access-mlfsx\") on node \"crc\" DevicePath \"\"" Dec 09 09:34:43 crc kubenswrapper[4786]: I1209 09:34:43.986587 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1717b33-b022-49ed-94fc-2160247ac3bd-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "b1717b33-b022-49ed-94fc-2160247ac3bd" (UID: "b1717b33-b022-49ed-94fc-2160247ac3bd"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:34:43 crc kubenswrapper[4786]: I1209 09:34:43.990591 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1717b33-b022-49ed-94fc-2160247ac3bd-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b1717b33-b022-49ed-94fc-2160247ac3bd" (UID: "b1717b33-b022-49ed-94fc-2160247ac3bd"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:34:43 crc kubenswrapper[4786]: I1209 09:34:43.991733 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1717b33-b022-49ed-94fc-2160247ac3bd-inventory" (OuterVolumeSpecName: "inventory") pod "b1717b33-b022-49ed-94fc-2160247ac3bd" (UID: "b1717b33-b022-49ed-94fc-2160247ac3bd"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:34:44 crc kubenswrapper[4786]: I1209 09:34:44.006475 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1717b33-b022-49ed-94fc-2160247ac3bd-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "b1717b33-b022-49ed-94fc-2160247ac3bd" (UID: "b1717b33-b022-49ed-94fc-2160247ac3bd"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:34:44 crc kubenswrapper[4786]: I1209 09:34:44.014262 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b1717b33-b022-49ed-94fc-2160247ac3bd-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "b1717b33-b022-49ed-94fc-2160247ac3bd" (UID: "b1717b33-b022-49ed-94fc-2160247ac3bd"). InnerVolumeSpecName "ceilometer-compute-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:34:44 crc kubenswrapper[4786]: I1209 09:34:44.089092 4786 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b1717b33-b022-49ed-94fc-2160247ac3bd-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 09 09:34:44 crc kubenswrapper[4786]: I1209 09:34:44.089125 4786 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/b1717b33-b022-49ed-94fc-2160247ac3bd-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Dec 09 09:34:44 crc kubenswrapper[4786]: I1209 09:34:44.089137 4786 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b1717b33-b022-49ed-94fc-2160247ac3bd-inventory\") on node \"crc\" DevicePath \"\"" Dec 09 09:34:44 crc kubenswrapper[4786]: I1209 09:34:44.089147 4786 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/b1717b33-b022-49ed-94fc-2160247ac3bd-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Dec 09 09:34:44 crc kubenswrapper[4786]: I1209 09:34:44.089155 4786 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/b1717b33-b022-49ed-94fc-2160247ac3bd-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Dec 09 09:34:44 crc kubenswrapper[4786]: I1209 09:34:44.419624 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s9qhv" event={"ID":"b1717b33-b022-49ed-94fc-2160247ac3bd","Type":"ContainerDied","Data":"d15871a014d5ed72948a6c3fa53c7b1c2d4f89b09dbf7dbfe1145eb8a198f76c"} Dec 09 09:34:44 crc kubenswrapper[4786]: I1209 09:34:44.419690 4786 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="d15871a014d5ed72948a6c3fa53c7b1c2d4f89b09dbf7dbfe1145eb8a198f76c" Dec 09 09:34:44 crc kubenswrapper[4786]: I1209 09:34:44.419691 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-s9qhv" Dec 09 09:34:45 crc kubenswrapper[4786]: I1209 09:34:45.859306 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4p7dd" Dec 09 09:34:45 crc kubenswrapper[4786]: I1209 09:34:45.859709 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4p7dd" Dec 09 09:34:45 crc kubenswrapper[4786]: I1209 09:34:45.910307 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4p7dd" Dec 09 09:34:46 crc kubenswrapper[4786]: I1209 09:34:46.503930 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4p7dd" Dec 09 09:34:46 crc kubenswrapper[4786]: I1209 09:34:46.555853 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4p7dd"] Dec 09 09:34:48 crc kubenswrapper[4786]: I1209 09:34:48.457975 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4p7dd" podUID="a675f046-ad88-4161-a978-f8774140dd10" containerName="registry-server" containerID="cri-o://e32e5b5ae9b5a34cf25fa02aadaf9ba66bdee41f1780e0e090de157e43a076e8" gracePeriod=2 Dec 09 09:34:48 crc kubenswrapper[4786]: I1209 09:34:48.927948 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4p7dd" Dec 09 09:34:49 crc kubenswrapper[4786]: I1209 09:34:49.008701 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a675f046-ad88-4161-a978-f8774140dd10-utilities\") pod \"a675f046-ad88-4161-a978-f8774140dd10\" (UID: \"a675f046-ad88-4161-a978-f8774140dd10\") " Dec 09 09:34:49 crc kubenswrapper[4786]: I1209 09:34:49.008800 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fz9pk\" (UniqueName: \"kubernetes.io/projected/a675f046-ad88-4161-a978-f8774140dd10-kube-api-access-fz9pk\") pod \"a675f046-ad88-4161-a978-f8774140dd10\" (UID: \"a675f046-ad88-4161-a978-f8774140dd10\") " Dec 09 09:34:49 crc kubenswrapper[4786]: I1209 09:34:49.008897 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a675f046-ad88-4161-a978-f8774140dd10-catalog-content\") pod \"a675f046-ad88-4161-a978-f8774140dd10\" (UID: \"a675f046-ad88-4161-a978-f8774140dd10\") " Dec 09 09:34:49 crc kubenswrapper[4786]: I1209 09:34:49.010346 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a675f046-ad88-4161-a978-f8774140dd10-utilities" (OuterVolumeSpecName: "utilities") pod "a675f046-ad88-4161-a978-f8774140dd10" (UID: "a675f046-ad88-4161-a978-f8774140dd10"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:34:49 crc kubenswrapper[4786]: I1209 09:34:49.011297 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a675f046-ad88-4161-a978-f8774140dd10-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 09:34:49 crc kubenswrapper[4786]: I1209 09:34:49.022048 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a675f046-ad88-4161-a978-f8774140dd10-kube-api-access-fz9pk" (OuterVolumeSpecName: "kube-api-access-fz9pk") pod "a675f046-ad88-4161-a978-f8774140dd10" (UID: "a675f046-ad88-4161-a978-f8774140dd10"). InnerVolumeSpecName "kube-api-access-fz9pk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:34:49 crc kubenswrapper[4786]: I1209 09:34:49.076375 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a675f046-ad88-4161-a978-f8774140dd10-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a675f046-ad88-4161-a978-f8774140dd10" (UID: "a675f046-ad88-4161-a978-f8774140dd10"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:34:49 crc kubenswrapper[4786]: I1209 09:34:49.115743 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fz9pk\" (UniqueName: \"kubernetes.io/projected/a675f046-ad88-4161-a978-f8774140dd10-kube-api-access-fz9pk\") on node \"crc\" DevicePath \"\"" Dec 09 09:34:49 crc kubenswrapper[4786]: I1209 09:34:49.115810 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a675f046-ad88-4161-a978-f8774140dd10-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 09:34:49 crc kubenswrapper[4786]: I1209 09:34:49.472238 4786 generic.go:334] "Generic (PLEG): container finished" podID="a675f046-ad88-4161-a978-f8774140dd10" containerID="e32e5b5ae9b5a34cf25fa02aadaf9ba66bdee41f1780e0e090de157e43a076e8" exitCode=0 Dec 09 09:34:49 crc kubenswrapper[4786]: I1209 09:34:49.472440 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4p7dd" Dec 09 09:34:49 crc kubenswrapper[4786]: I1209 09:34:49.472481 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4p7dd" event={"ID":"a675f046-ad88-4161-a978-f8774140dd10","Type":"ContainerDied","Data":"e32e5b5ae9b5a34cf25fa02aadaf9ba66bdee41f1780e0e090de157e43a076e8"} Dec 09 09:34:49 crc kubenswrapper[4786]: I1209 09:34:49.474630 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4p7dd" event={"ID":"a675f046-ad88-4161-a978-f8774140dd10","Type":"ContainerDied","Data":"d42dddb3983abe08cdd7abef7a2d28ec18ea8af4349f244952c21c1d21799566"} Dec 09 09:34:49 crc kubenswrapper[4786]: I1209 09:34:49.474730 4786 scope.go:117] "RemoveContainer" containerID="e32e5b5ae9b5a34cf25fa02aadaf9ba66bdee41f1780e0e090de157e43a076e8" Dec 09 09:34:49 crc kubenswrapper[4786]: I1209 09:34:49.507029 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-4p7dd"] Dec 09 09:34:49 crc kubenswrapper[4786]: I1209 09:34:49.508049 4786 scope.go:117] "RemoveContainer" containerID="e949af2322bcccd64acb785e3153224c281eeaec6e77c78e39a481df8ab7f21c" Dec 09 09:34:49 crc kubenswrapper[4786]: I1209 09:34:49.522127 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4p7dd"] Dec 09 09:34:49 crc kubenswrapper[4786]: I1209 09:34:49.533228 4786 scope.go:117] "RemoveContainer" containerID="9de2734196d3abfaea795353872b2bc039c585fe3670b0be3689f00eefe51084" Dec 09 09:34:49 crc kubenswrapper[4786]: I1209 09:34:49.596828 4786 scope.go:117] "RemoveContainer" containerID="e32e5b5ae9b5a34cf25fa02aadaf9ba66bdee41f1780e0e090de157e43a076e8" Dec 09 09:34:49 crc kubenswrapper[4786]: E1209 09:34:49.597448 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e32e5b5ae9b5a34cf25fa02aadaf9ba66bdee41f1780e0e090de157e43a076e8\": container with ID starting with e32e5b5ae9b5a34cf25fa02aadaf9ba66bdee41f1780e0e090de157e43a076e8 not found: ID does not exist" containerID="e32e5b5ae9b5a34cf25fa02aadaf9ba66bdee41f1780e0e090de157e43a076e8" Dec 09 09:34:49 crc kubenswrapper[4786]: I1209 09:34:49.597545 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e32e5b5ae9b5a34cf25fa02aadaf9ba66bdee41f1780e0e090de157e43a076e8"} err="failed to get container status \"e32e5b5ae9b5a34cf25fa02aadaf9ba66bdee41f1780e0e090de157e43a076e8\": rpc error: code = NotFound desc = could not find container \"e32e5b5ae9b5a34cf25fa02aadaf9ba66bdee41f1780e0e090de157e43a076e8\": container with ID starting with e32e5b5ae9b5a34cf25fa02aadaf9ba66bdee41f1780e0e090de157e43a076e8 not found: ID does not exist" Dec 09 09:34:49 crc kubenswrapper[4786]: I1209 09:34:49.597625 4786 scope.go:117] "RemoveContainer" 
containerID="e949af2322bcccd64acb785e3153224c281eeaec6e77c78e39a481df8ab7f21c" Dec 09 09:34:49 crc kubenswrapper[4786]: E1209 09:34:49.597985 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e949af2322bcccd64acb785e3153224c281eeaec6e77c78e39a481df8ab7f21c\": container with ID starting with e949af2322bcccd64acb785e3153224c281eeaec6e77c78e39a481df8ab7f21c not found: ID does not exist" containerID="e949af2322bcccd64acb785e3153224c281eeaec6e77c78e39a481df8ab7f21c" Dec 09 09:34:49 crc kubenswrapper[4786]: I1209 09:34:49.598028 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e949af2322bcccd64acb785e3153224c281eeaec6e77c78e39a481df8ab7f21c"} err="failed to get container status \"e949af2322bcccd64acb785e3153224c281eeaec6e77c78e39a481df8ab7f21c\": rpc error: code = NotFound desc = could not find container \"e949af2322bcccd64acb785e3153224c281eeaec6e77c78e39a481df8ab7f21c\": container with ID starting with e949af2322bcccd64acb785e3153224c281eeaec6e77c78e39a481df8ab7f21c not found: ID does not exist" Dec 09 09:34:49 crc kubenswrapper[4786]: I1209 09:34:49.598064 4786 scope.go:117] "RemoveContainer" containerID="9de2734196d3abfaea795353872b2bc039c585fe3670b0be3689f00eefe51084" Dec 09 09:34:49 crc kubenswrapper[4786]: E1209 09:34:49.598310 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9de2734196d3abfaea795353872b2bc039c585fe3670b0be3689f00eefe51084\": container with ID starting with 9de2734196d3abfaea795353872b2bc039c585fe3670b0be3689f00eefe51084 not found: ID does not exist" containerID="9de2734196d3abfaea795353872b2bc039c585fe3670b0be3689f00eefe51084" Dec 09 09:34:49 crc kubenswrapper[4786]: I1209 09:34:49.598493 4786 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9de2734196d3abfaea795353872b2bc039c585fe3670b0be3689f00eefe51084"} err="failed to get container status \"9de2734196d3abfaea795353872b2bc039c585fe3670b0be3689f00eefe51084\": rpc error: code = NotFound desc = could not find container \"9de2734196d3abfaea795353872b2bc039c585fe3670b0be3689f00eefe51084\": container with ID starting with 9de2734196d3abfaea795353872b2bc039c585fe3670b0be3689f00eefe51084 not found: ID does not exist" Dec 09 09:34:51 crc kubenswrapper[4786]: I1209 09:34:51.198790 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a675f046-ad88-4161-a978-f8774140dd10" path="/var/lib/kubelet/pods/a675f046-ad88-4161-a978-f8774140dd10/volumes" Dec 09 09:34:54 crc kubenswrapper[4786]: I1209 09:34:54.988749 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 09:34:54 crc kubenswrapper[4786]: I1209 09:34:54.989290 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 09:35:24 crc kubenswrapper[4786]: I1209 09:35:24.958039 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Dec 09 09:35:24 crc kubenswrapper[4786]: E1209 09:35:24.959119 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a675f046-ad88-4161-a978-f8774140dd10" containerName="registry-server" Dec 09 09:35:24 crc kubenswrapper[4786]: I1209 09:35:24.959147 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="a675f046-ad88-4161-a978-f8774140dd10" 
containerName="registry-server" Dec 09 09:35:24 crc kubenswrapper[4786]: E1209 09:35:24.959168 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a675f046-ad88-4161-a978-f8774140dd10" containerName="extract-content" Dec 09 09:35:24 crc kubenswrapper[4786]: I1209 09:35:24.959176 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="a675f046-ad88-4161-a978-f8774140dd10" containerName="extract-content" Dec 09 09:35:24 crc kubenswrapper[4786]: E1209 09:35:24.959219 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1717b33-b022-49ed-94fc-2160247ac3bd" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 09 09:35:24 crc kubenswrapper[4786]: I1209 09:35:24.959231 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1717b33-b022-49ed-94fc-2160247ac3bd" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 09 09:35:24 crc kubenswrapper[4786]: E1209 09:35:24.959248 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a675f046-ad88-4161-a978-f8774140dd10" containerName="extract-utilities" Dec 09 09:35:24 crc kubenswrapper[4786]: I1209 09:35:24.959256 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="a675f046-ad88-4161-a978-f8774140dd10" containerName="extract-utilities" Dec 09 09:35:24 crc kubenswrapper[4786]: I1209 09:35:24.959513 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1717b33-b022-49ed-94fc-2160247ac3bd" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 09 09:35:24 crc kubenswrapper[4786]: I1209 09:35:24.959537 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="a675f046-ad88-4161-a978-f8774140dd10" containerName="registry-server" Dec 09 09:35:24 crc kubenswrapper[4786]: I1209 09:35:24.961775 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-backup-0" Dec 09 09:35:24 crc kubenswrapper[4786]: I1209 09:35:24.987149 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Dec 09 09:35:24 crc kubenswrapper[4786]: I1209 09:35:24.989552 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 09:35:24 crc kubenswrapper[4786]: I1209 09:35:24.989596 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 09:35:24 crc kubenswrapper[4786]: I1209 09:35:24.998023 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.025699 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz4s4\" (UniqueName: \"kubernetes.io/projected/e2b22ca8-e985-404e-af49-d7328d2d3017-kube-api-access-fz4s4\") pod \"cinder-backup-0\" (UID: \"e2b22ca8-e985-404e-af49-d7328d2d3017\") " pod="openstack/cinder-backup-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.025791 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e2b22ca8-e985-404e-af49-d7328d2d3017-lib-modules\") pod \"cinder-backup-0\" (UID: \"e2b22ca8-e985-404e-af49-d7328d2d3017\") " pod="openstack/cinder-backup-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.025838 4786 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e2b22ca8-e985-404e-af49-d7328d2d3017-run\") pod \"cinder-backup-0\" (UID: \"e2b22ca8-e985-404e-af49-d7328d2d3017\") " pod="openstack/cinder-backup-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.025910 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2b22ca8-e985-404e-af49-d7328d2d3017-config-data-custom\") pod \"cinder-backup-0\" (UID: \"e2b22ca8-e985-404e-af49-d7328d2d3017\") " pod="openstack/cinder-backup-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.025932 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/e2b22ca8-e985-404e-af49-d7328d2d3017-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"e2b22ca8-e985-404e-af49-d7328d2d3017\") " pod="openstack/cinder-backup-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.025959 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2b22ca8-e985-404e-af49-d7328d2d3017-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"e2b22ca8-e985-404e-af49-d7328d2d3017\") " pod="openstack/cinder-backup-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.025987 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/e2b22ca8-e985-404e-af49-d7328d2d3017-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"e2b22ca8-e985-404e-af49-d7328d2d3017\") " pod="openstack/cinder-backup-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.026011 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: 
\"kubernetes.io/host-path/e2b22ca8-e985-404e-af49-d7328d2d3017-dev\") pod \"cinder-backup-0\" (UID: \"e2b22ca8-e985-404e-af49-d7328d2d3017\") " pod="openstack/cinder-backup-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.026035 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/e2b22ca8-e985-404e-af49-d7328d2d3017-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"e2b22ca8-e985-404e-af49-d7328d2d3017\") " pod="openstack/cinder-backup-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.026061 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e2b22ca8-e985-404e-af49-d7328d2d3017-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"e2b22ca8-e985-404e-af49-d7328d2d3017\") " pod="openstack/cinder-backup-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.026086 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e2b22ca8-e985-404e-af49-d7328d2d3017-sys\") pod \"cinder-backup-0\" (UID: \"e2b22ca8-e985-404e-af49-d7328d2d3017\") " pod="openstack/cinder-backup-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.026114 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2b22ca8-e985-404e-af49-d7328d2d3017-scripts\") pod \"cinder-backup-0\" (UID: \"e2b22ca8-e985-404e-af49-d7328d2d3017\") " pod="openstack/cinder-backup-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.026160 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/e2b22ca8-e985-404e-af49-d7328d2d3017-var-lib-cinder\") pod \"cinder-backup-0\" (UID: 
\"e2b22ca8-e985-404e-af49-d7328d2d3017\") " pod="openstack/cinder-backup-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.026205 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2b22ca8-e985-404e-af49-d7328d2d3017-config-data\") pod \"cinder-backup-0\" (UID: \"e2b22ca8-e985-404e-af49-d7328d2d3017\") " pod="openstack/cinder-backup-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.026251 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/e2b22ca8-e985-404e-af49-d7328d2d3017-etc-nvme\") pod \"cinder-backup-0\" (UID: \"e2b22ca8-e985-404e-af49-d7328d2d3017\") " pod="openstack/cinder-backup-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.083374 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-nfs-0"] Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.089499 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-nfs-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.102628 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-0"] Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.111561 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-nfs-config-data" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.127880 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/fc7497d0-e84b-4a17-8d33-b63bf384eee8-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"fc7497d0-e84b-4a17-8d33-b63bf384eee8\") " pod="openstack/cinder-volume-nfs-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.127928 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz4s4\" (UniqueName: \"kubernetes.io/projected/e2b22ca8-e985-404e-af49-d7328d2d3017-kube-api-access-fz4s4\") pod \"cinder-backup-0\" (UID: \"e2b22ca8-e985-404e-af49-d7328d2d3017\") " pod="openstack/cinder-backup-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.127957 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e2b22ca8-e985-404e-af49-d7328d2d3017-lib-modules\") pod \"cinder-backup-0\" (UID: \"e2b22ca8-e985-404e-af49-d7328d2d3017\") " pod="openstack/cinder-backup-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.127986 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e2b22ca8-e985-404e-af49-d7328d2d3017-run\") pod \"cinder-backup-0\" (UID: \"e2b22ca8-e985-404e-af49-d7328d2d3017\") " pod="openstack/cinder-backup-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.128002 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"sys\" (UniqueName: \"kubernetes.io/host-path/fc7497d0-e84b-4a17-8d33-b63bf384eee8-sys\") pod \"cinder-volume-nfs-0\" (UID: \"fc7497d0-e84b-4a17-8d33-b63bf384eee8\") " pod="openstack/cinder-volume-nfs-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.128035 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/fc7497d0-e84b-4a17-8d33-b63bf384eee8-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"fc7497d0-e84b-4a17-8d33-b63bf384eee8\") " pod="openstack/cinder-volume-nfs-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.128071 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2b22ca8-e985-404e-af49-d7328d2d3017-config-data-custom\") pod \"cinder-backup-0\" (UID: \"e2b22ca8-e985-404e-af49-d7328d2d3017\") " pod="openstack/cinder-backup-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.128087 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/e2b22ca8-e985-404e-af49-d7328d2d3017-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"e2b22ca8-e985-404e-af49-d7328d2d3017\") " pod="openstack/cinder-backup-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.128106 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2b22ca8-e985-404e-af49-d7328d2d3017-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"e2b22ca8-e985-404e-af49-d7328d2d3017\") " pod="openstack/cinder-backup-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.128125 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/e2b22ca8-e985-404e-af49-d7328d2d3017-var-locks-brick\") pod \"cinder-backup-0\" (UID: 
\"e2b22ca8-e985-404e-af49-d7328d2d3017\") " pod="openstack/cinder-backup-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.128144 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/e2b22ca8-e985-404e-af49-d7328d2d3017-dev\") pod \"cinder-backup-0\" (UID: \"e2b22ca8-e985-404e-af49-d7328d2d3017\") " pod="openstack/cinder-backup-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.128162 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/fc7497d0-e84b-4a17-8d33-b63bf384eee8-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"fc7497d0-e84b-4a17-8d33-b63bf384eee8\") " pod="openstack/cinder-volume-nfs-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.128178 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/e2b22ca8-e985-404e-af49-d7328d2d3017-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"e2b22ca8-e985-404e-af49-d7328d2d3017\") " pod="openstack/cinder-backup-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.128195 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e2b22ca8-e985-404e-af49-d7328d2d3017-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"e2b22ca8-e985-404e-af49-d7328d2d3017\") " pod="openstack/cinder-backup-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.128214 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e2b22ca8-e985-404e-af49-d7328d2d3017-sys\") pod \"cinder-backup-0\" (UID: \"e2b22ca8-e985-404e-af49-d7328d2d3017\") " pod="openstack/cinder-backup-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.128237 4786 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2b22ca8-e985-404e-af49-d7328d2d3017-scripts\") pod \"cinder-backup-0\" (UID: \"e2b22ca8-e985-404e-af49-d7328d2d3017\") " pod="openstack/cinder-backup-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.128260 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc7497d0-e84b-4a17-8d33-b63bf384eee8-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"fc7497d0-e84b-4a17-8d33-b63bf384eee8\") " pod="openstack/cinder-volume-nfs-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.128282 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fc7497d0-e84b-4a17-8d33-b63bf384eee8-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"fc7497d0-e84b-4a17-8d33-b63bf384eee8\") " pod="openstack/cinder-volume-nfs-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.128308 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fc7497d0-e84b-4a17-8d33-b63bf384eee8-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"fc7497d0-e84b-4a17-8d33-b63bf384eee8\") " pod="openstack/cinder-volume-nfs-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.128345 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/e2b22ca8-e985-404e-af49-d7328d2d3017-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"e2b22ca8-e985-404e-af49-d7328d2d3017\") " pod="openstack/cinder-backup-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.128360 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/fc7497d0-e84b-4a17-8d33-b63bf384eee8-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"fc7497d0-e84b-4a17-8d33-b63bf384eee8\") " pod="openstack/cinder-volume-nfs-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.128385 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc7497d0-e84b-4a17-8d33-b63bf384eee8-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"fc7497d0-e84b-4a17-8d33-b63bf384eee8\") " pod="openstack/cinder-volume-nfs-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.128400 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/fc7497d0-e84b-4a17-8d33-b63bf384eee8-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"fc7497d0-e84b-4a17-8d33-b63bf384eee8\") " pod="openstack/cinder-volume-nfs-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.128416 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc7497d0-e84b-4a17-8d33-b63bf384eee8-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"fc7497d0-e84b-4a17-8d33-b63bf384eee8\") " pod="openstack/cinder-volume-nfs-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.128466 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2b22ca8-e985-404e-af49-d7328d2d3017-config-data\") pod \"cinder-backup-0\" (UID: \"e2b22ca8-e985-404e-af49-d7328d2d3017\") " pod="openstack/cinder-backup-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.128490 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2sr9\" (UniqueName: \"kubernetes.io/projected/fc7497d0-e84b-4a17-8d33-b63bf384eee8-kube-api-access-x2sr9\") pod 
\"cinder-volume-nfs-0\" (UID: \"fc7497d0-e84b-4a17-8d33-b63bf384eee8\") " pod="openstack/cinder-volume-nfs-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.128506 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/fc7497d0-e84b-4a17-8d33-b63bf384eee8-dev\") pod \"cinder-volume-nfs-0\" (UID: \"fc7497d0-e84b-4a17-8d33-b63bf384eee8\") " pod="openstack/cinder-volume-nfs-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.128535 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/fc7497d0-e84b-4a17-8d33-b63bf384eee8-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"fc7497d0-e84b-4a17-8d33-b63bf384eee8\") " pod="openstack/cinder-volume-nfs-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.128555 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/e2b22ca8-e985-404e-af49-d7328d2d3017-etc-nvme\") pod \"cinder-backup-0\" (UID: \"e2b22ca8-e985-404e-af49-d7328d2d3017\") " pod="openstack/cinder-backup-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.128597 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/fc7497d0-e84b-4a17-8d33-b63bf384eee8-run\") pod \"cinder-volume-nfs-0\" (UID: \"fc7497d0-e84b-4a17-8d33-b63bf384eee8\") " pod="openstack/cinder-volume-nfs-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.128948 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e2b22ca8-e985-404e-af49-d7328d2d3017-lib-modules\") pod \"cinder-backup-0\" (UID: \"e2b22ca8-e985-404e-af49-d7328d2d3017\") " pod="openstack/cinder-backup-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.128976 
4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e2b22ca8-e985-404e-af49-d7328d2d3017-run\") pod \"cinder-backup-0\" (UID: \"e2b22ca8-e985-404e-af49-d7328d2d3017\") " pod="openstack/cinder-backup-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.132344 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/e2b22ca8-e985-404e-af49-d7328d2d3017-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"e2b22ca8-e985-404e-af49-d7328d2d3017\") " pod="openstack/cinder-backup-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.132810 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/e2b22ca8-e985-404e-af49-d7328d2d3017-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"e2b22ca8-e985-404e-af49-d7328d2d3017\") " pod="openstack/cinder-backup-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.132895 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e2b22ca8-e985-404e-af49-d7328d2d3017-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"e2b22ca8-e985-404e-af49-d7328d2d3017\") " pod="openstack/cinder-backup-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.133064 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/e2b22ca8-e985-404e-af49-d7328d2d3017-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"e2b22ca8-e985-404e-af49-d7328d2d3017\") " pod="openstack/cinder-backup-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.133158 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/e2b22ca8-e985-404e-af49-d7328d2d3017-dev\") pod \"cinder-backup-0\" (UID: \"e2b22ca8-e985-404e-af49-d7328d2d3017\") " 
pod="openstack/cinder-backup-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.133233 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/e2b22ca8-e985-404e-af49-d7328d2d3017-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"e2b22ca8-e985-404e-af49-d7328d2d3017\") " pod="openstack/cinder-backup-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.134294 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/e2b22ca8-e985-404e-af49-d7328d2d3017-etc-nvme\") pod \"cinder-backup-0\" (UID: \"e2b22ca8-e985-404e-af49-d7328d2d3017\") " pod="openstack/cinder-backup-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.134371 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e2b22ca8-e985-404e-af49-d7328d2d3017-sys\") pod \"cinder-backup-0\" (UID: \"e2b22ca8-e985-404e-af49-d7328d2d3017\") " pod="openstack/cinder-backup-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.139977 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2b22ca8-e985-404e-af49-d7328d2d3017-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"e2b22ca8-e985-404e-af49-d7328d2d3017\") " pod="openstack/cinder-backup-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.140071 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2b22ca8-e985-404e-af49-d7328d2d3017-config-data\") pod \"cinder-backup-0\" (UID: \"e2b22ca8-e985-404e-af49-d7328d2d3017\") " pod="openstack/cinder-backup-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.142581 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2b22ca8-e985-404e-af49-d7328d2d3017-scripts\") pod 
\"cinder-backup-0\" (UID: \"e2b22ca8-e985-404e-af49-d7328d2d3017\") " pod="openstack/cinder-backup-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.149081 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e2b22ca8-e985-404e-af49-d7328d2d3017-config-data-custom\") pod \"cinder-backup-0\" (UID: \"e2b22ca8-e985-404e-af49-d7328d2d3017\") " pod="openstack/cinder-backup-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.152934 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.154717 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-2-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.159828 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-nfs-2-config-data" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.174550 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz4s4\" (UniqueName: \"kubernetes.io/projected/e2b22ca8-e985-404e-af49-d7328d2d3017-kube-api-access-fz4s4\") pod \"cinder-backup-0\" (UID: \"e2b22ca8-e985-404e-af49-d7328d2d3017\") " pod="openstack/cinder-backup-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.176499 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.233805 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc7497d0-e84b-4a17-8d33-b63bf384eee8-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"fc7497d0-e84b-4a17-8d33-b63bf384eee8\") " pod="openstack/cinder-volume-nfs-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.233864 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df\") " pod="openstack/cinder-volume-nfs-2-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.233889 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc7497d0-e84b-4a17-8d33-b63bf384eee8-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"fc7497d0-e84b-4a17-8d33-b63bf384eee8\") " pod="openstack/cinder-volume-nfs-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.233906 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/fc7497d0-e84b-4a17-8d33-b63bf384eee8-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"fc7497d0-e84b-4a17-8d33-b63bf384eee8\") " pod="openstack/cinder-volume-nfs-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.233925 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc7497d0-e84b-4a17-8d33-b63bf384eee8-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"fc7497d0-e84b-4a17-8d33-b63bf384eee8\") " pod="openstack/cinder-volume-nfs-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.233952 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df\") " pod="openstack/cinder-volume-nfs-2-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.233975 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/fc7497d0-e84b-4a17-8d33-b63bf384eee8-dev\") pod \"cinder-volume-nfs-0\" (UID: 
\"fc7497d0-e84b-4a17-8d33-b63bf384eee8\") " pod="openstack/cinder-volume-nfs-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.233992 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2sr9\" (UniqueName: \"kubernetes.io/projected/fc7497d0-e84b-4a17-8d33-b63bf384eee8-kube-api-access-x2sr9\") pod \"cinder-volume-nfs-0\" (UID: \"fc7497d0-e84b-4a17-8d33-b63bf384eee8\") " pod="openstack/cinder-volume-nfs-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.234011 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df\") " pod="openstack/cinder-volume-nfs-2-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.234028 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df\") " pod="openstack/cinder-volume-nfs-2-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.234053 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/fc7497d0-e84b-4a17-8d33-b63bf384eee8-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"fc7497d0-e84b-4a17-8d33-b63bf384eee8\") " pod="openstack/cinder-volume-nfs-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.234075 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df\") " pod="openstack/cinder-volume-nfs-2-0" 
Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.234091 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df\") " pod="openstack/cinder-volume-nfs-2-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.234114 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df\") " pod="openstack/cinder-volume-nfs-2-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.234137 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/fc7497d0-e84b-4a17-8d33-b63bf384eee8-run\") pod \"cinder-volume-nfs-0\" (UID: \"fc7497d0-e84b-4a17-8d33-b63bf384eee8\") " pod="openstack/cinder-volume-nfs-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.234157 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df\") " pod="openstack/cinder-volume-nfs-2-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.234176 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/fc7497d0-e84b-4a17-8d33-b63bf384eee8-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"fc7497d0-e84b-4a17-8d33-b63bf384eee8\") " pod="openstack/cinder-volume-nfs-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.234218 4786 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df\") " pod="openstack/cinder-volume-nfs-2-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.234233 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfbhp\" (UniqueName: \"kubernetes.io/projected/24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df-kube-api-access-zfbhp\") pod \"cinder-volume-nfs-2-0\" (UID: \"24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df\") " pod="openstack/cinder-volume-nfs-2-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.234267 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/fc7497d0-e84b-4a17-8d33-b63bf384eee8-sys\") pod \"cinder-volume-nfs-0\" (UID: \"fc7497d0-e84b-4a17-8d33-b63bf384eee8\") " pod="openstack/cinder-volume-nfs-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.234299 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/fc7497d0-e84b-4a17-8d33-b63bf384eee8-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"fc7497d0-e84b-4a17-8d33-b63bf384eee8\") " pod="openstack/cinder-volume-nfs-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.234319 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df\") " pod="openstack/cinder-volume-nfs-2-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.234365 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: 
\"kubernetes.io/host-path/fc7497d0-e84b-4a17-8d33-b63bf384eee8-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"fc7497d0-e84b-4a17-8d33-b63bf384eee8\") " pod="openstack/cinder-volume-nfs-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.234380 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df\") " pod="openstack/cinder-volume-nfs-2-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.234406 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df\") " pod="openstack/cinder-volume-nfs-2-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.234462 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df\") " pod="openstack/cinder-volume-nfs-2-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.234486 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc7497d0-e84b-4a17-8d33-b63bf384eee8-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"fc7497d0-e84b-4a17-8d33-b63bf384eee8\") " pod="openstack/cinder-volume-nfs-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.234502 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fc7497d0-e84b-4a17-8d33-b63bf384eee8-etc-machine-id\") pod 
\"cinder-volume-nfs-0\" (UID: \"fc7497d0-e84b-4a17-8d33-b63bf384eee8\") " pod="openstack/cinder-volume-nfs-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.234520 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df\") " pod="openstack/cinder-volume-nfs-2-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.234537 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fc7497d0-e84b-4a17-8d33-b63bf384eee8-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"fc7497d0-e84b-4a17-8d33-b63bf384eee8\") " pod="openstack/cinder-volume-nfs-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.235565 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/fc7497d0-e84b-4a17-8d33-b63bf384eee8-var-locks-brick\") pod \"cinder-volume-nfs-0\" (UID: \"fc7497d0-e84b-4a17-8d33-b63bf384eee8\") " pod="openstack/cinder-volume-nfs-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.235658 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/fc7497d0-e84b-4a17-8d33-b63bf384eee8-var-locks-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"fc7497d0-e84b-4a17-8d33-b63bf384eee8\") " pod="openstack/cinder-volume-nfs-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.235667 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/fc7497d0-e84b-4a17-8d33-b63bf384eee8-etc-nvme\") pod \"cinder-volume-nfs-0\" (UID: \"fc7497d0-e84b-4a17-8d33-b63bf384eee8\") " pod="openstack/cinder-volume-nfs-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 
09:35:25.235738 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/fc7497d0-e84b-4a17-8d33-b63bf384eee8-sys\") pod \"cinder-volume-nfs-0\" (UID: \"fc7497d0-e84b-4a17-8d33-b63bf384eee8\") " pod="openstack/cinder-volume-nfs-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.235767 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/fc7497d0-e84b-4a17-8d33-b63bf384eee8-etc-iscsi\") pod \"cinder-volume-nfs-0\" (UID: \"fc7497d0-e84b-4a17-8d33-b63bf384eee8\") " pod="openstack/cinder-volume-nfs-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.235809 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc7497d0-e84b-4a17-8d33-b63bf384eee8-lib-modules\") pod \"cinder-volume-nfs-0\" (UID: \"fc7497d0-e84b-4a17-8d33-b63bf384eee8\") " pod="openstack/cinder-volume-nfs-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.236103 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/fc7497d0-e84b-4a17-8d33-b63bf384eee8-dev\") pod \"cinder-volume-nfs-0\" (UID: \"fc7497d0-e84b-4a17-8d33-b63bf384eee8\") " pod="openstack/cinder-volume-nfs-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.236153 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/fc7497d0-e84b-4a17-8d33-b63bf384eee8-var-lib-cinder\") pod \"cinder-volume-nfs-0\" (UID: \"fc7497d0-e84b-4a17-8d33-b63bf384eee8\") " pod="openstack/cinder-volume-nfs-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.236825 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/fc7497d0-e84b-4a17-8d33-b63bf384eee8-run\") pod \"cinder-volume-nfs-0\" (UID: 
\"fc7497d0-e84b-4a17-8d33-b63bf384eee8\") " pod="openstack/cinder-volume-nfs-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.236858 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fc7497d0-e84b-4a17-8d33-b63bf384eee8-etc-machine-id\") pod \"cinder-volume-nfs-0\" (UID: \"fc7497d0-e84b-4a17-8d33-b63bf384eee8\") " pod="openstack/cinder-volume-nfs-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.249292 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc7497d0-e84b-4a17-8d33-b63bf384eee8-combined-ca-bundle\") pod \"cinder-volume-nfs-0\" (UID: \"fc7497d0-e84b-4a17-8d33-b63bf384eee8\") " pod="openstack/cinder-volume-nfs-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.264993 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc7497d0-e84b-4a17-8d33-b63bf384eee8-scripts\") pod \"cinder-volume-nfs-0\" (UID: \"fc7497d0-e84b-4a17-8d33-b63bf384eee8\") " pod="openstack/cinder-volume-nfs-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.274986 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fc7497d0-e84b-4a17-8d33-b63bf384eee8-config-data-custom\") pod \"cinder-volume-nfs-0\" (UID: \"fc7497d0-e84b-4a17-8d33-b63bf384eee8\") " pod="openstack/cinder-volume-nfs-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.276287 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc7497d0-e84b-4a17-8d33-b63bf384eee8-config-data\") pod \"cinder-volume-nfs-0\" (UID: \"fc7497d0-e84b-4a17-8d33-b63bf384eee8\") " pod="openstack/cinder-volume-nfs-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.276409 4786 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-x2sr9\" (UniqueName: \"kubernetes.io/projected/fc7497d0-e84b-4a17-8d33-b63bf384eee8-kube-api-access-x2sr9\") pod \"cinder-volume-nfs-0\" (UID: \"fc7497d0-e84b-4a17-8d33-b63bf384eee8\") " pod="openstack/cinder-volume-nfs-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.288028 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.336886 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df\") " pod="openstack/cinder-volume-nfs-2-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.336946 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df\") " pod="openstack/cinder-volume-nfs-2-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.336980 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df\") " pod="openstack/cinder-volume-nfs-2-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.337012 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df\") " pod="openstack/cinder-volume-nfs-2-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.337064 4786 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df\") " pod="openstack/cinder-volume-nfs-2-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.337107 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df\") " pod="openstack/cinder-volume-nfs-2-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.337143 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df\") " pod="openstack/cinder-volume-nfs-2-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.337165 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df\") " pod="openstack/cinder-volume-nfs-2-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.337212 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df\") " pod="openstack/cinder-volume-nfs-2-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.337235 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df-config-data-custom\") pod 
\"cinder-volume-nfs-2-0\" (UID: \"24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df\") " pod="openstack/cinder-volume-nfs-2-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.337269 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df\") " pod="openstack/cinder-volume-nfs-2-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.337302 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df\") " pod="openstack/cinder-volume-nfs-2-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.337354 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df\") " pod="openstack/cinder-volume-nfs-2-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.337375 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfbhp\" (UniqueName: \"kubernetes.io/projected/24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df-kube-api-access-zfbhp\") pod \"cinder-volume-nfs-2-0\" (UID: \"24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df\") " pod="openstack/cinder-volume-nfs-2-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.337479 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df\") " pod="openstack/cinder-volume-nfs-2-0" Dec 09 
09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.337643 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df-etc-iscsi\") pod \"cinder-volume-nfs-2-0\" (UID: \"24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df\") " pod="openstack/cinder-volume-nfs-2-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.337712 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df-run\") pod \"cinder-volume-nfs-2-0\" (UID: \"24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df\") " pod="openstack/cinder-volume-nfs-2-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.338176 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df-dev\") pod \"cinder-volume-nfs-2-0\" (UID: \"24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df\") " pod="openstack/cinder-volume-nfs-2-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.338273 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df-var-locks-brick\") pod \"cinder-volume-nfs-2-0\" (UID: \"24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df\") " pod="openstack/cinder-volume-nfs-2-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.338282 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df-sys\") pod \"cinder-volume-nfs-2-0\" (UID: \"24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df\") " pod="openstack/cinder-volume-nfs-2-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.338333 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: 
\"kubernetes.io/host-path/24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df-var-lib-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df\") " pod="openstack/cinder-volume-nfs-2-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.338367 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df-etc-nvme\") pod \"cinder-volume-nfs-2-0\" (UID: \"24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df\") " pod="openstack/cinder-volume-nfs-2-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.338401 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df-var-locks-cinder\") pod \"cinder-volume-nfs-2-0\" (UID: \"24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df\") " pod="openstack/cinder-volume-nfs-2-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.338595 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df-lib-modules\") pod \"cinder-volume-nfs-2-0\" (UID: \"24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df\") " pod="openstack/cinder-volume-nfs-2-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.338743 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df-etc-machine-id\") pod \"cinder-volume-nfs-2-0\" (UID: \"24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df\") " pod="openstack/cinder-volume-nfs-2-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.345285 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df-config-data-custom\") pod \"cinder-volume-nfs-2-0\" (UID: \"24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df\") " 
pod="openstack/cinder-volume-nfs-2-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.352035 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df-config-data\") pod \"cinder-volume-nfs-2-0\" (UID: \"24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df\") " pod="openstack/cinder-volume-nfs-2-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.356975 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df-scripts\") pod \"cinder-volume-nfs-2-0\" (UID: \"24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df\") " pod="openstack/cinder-volume-nfs-2-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.357292 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df-combined-ca-bundle\") pod \"cinder-volume-nfs-2-0\" (UID: \"24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df\") " pod="openstack/cinder-volume-nfs-2-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.374180 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfbhp\" (UniqueName: \"kubernetes.io/projected/24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df-kube-api-access-zfbhp\") pod \"cinder-volume-nfs-2-0\" (UID: \"24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df\") " pod="openstack/cinder-volume-nfs-2-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.407848 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-nfs-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.408841 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-nfs-2-0" Dec 09 09:35:25 crc kubenswrapper[4786]: I1209 09:35:25.974040 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Dec 09 09:35:26 crc kubenswrapper[4786]: I1209 09:35:26.107726 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-2-0"] Dec 09 09:35:26 crc kubenswrapper[4786]: I1209 09:35:26.227721 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-nfs-0"] Dec 09 09:35:26 crc kubenswrapper[4786]: I1209 09:35:26.925567 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"fc7497d0-e84b-4a17-8d33-b63bf384eee8","Type":"ContainerStarted","Data":"d3b724f6ab74c4bfc777a70c0435298bd34c20f265fb587db9d955351d9f1671"} Dec 09 09:35:26 crc kubenswrapper[4786]: I1209 09:35:26.926369 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"fc7497d0-e84b-4a17-8d33-b63bf384eee8","Type":"ContainerStarted","Data":"6c567d7eade1fbd4f63dc9ce8dd73ad32d7b552c91edeb4974ffd329d6329818"} Dec 09 09:35:26 crc kubenswrapper[4786]: I1209 09:35:26.929883 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"e2b22ca8-e985-404e-af49-d7328d2d3017","Type":"ContainerStarted","Data":"d4aae59861de783b86736009dc09bbea2c7e10c26e4c1ea14a67f9c2b2ad71b2"} Dec 09 09:35:26 crc kubenswrapper[4786]: I1209 09:35:26.929944 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"e2b22ca8-e985-404e-af49-d7328d2d3017","Type":"ContainerStarted","Data":"cd8abce4c53041eaea6df6b8b9505372fba546920c3e6aa728a2d65c0d3187be"} Dec 09 09:35:26 crc kubenswrapper[4786]: I1209 09:35:26.929961 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" 
event={"ID":"e2b22ca8-e985-404e-af49-d7328d2d3017","Type":"ContainerStarted","Data":"d824eeabadbcfa4637c253fb30e83b3fdc75c9ce311f54c6eda8f4255590b942"} Dec 09 09:35:26 crc kubenswrapper[4786]: I1209 09:35:26.939882 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df","Type":"ContainerStarted","Data":"fadcde204e44a3daf73eb99dd7d4541b010d09cadee2dc9f13b9a32344f3211f"} Dec 09 09:35:26 crc kubenswrapper[4786]: I1209 09:35:26.939963 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df","Type":"ContainerStarted","Data":"58a3d3694512cf51597f7c60f43e800b328c8b0e9fa684d8c89619fc28f9776a"} Dec 09 09:35:26 crc kubenswrapper[4786]: I1209 09:35:26.980271 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=2.755604075 podStartE2EDuration="2.980245376s" podCreationTimestamp="2025-12-09 09:35:24 +0000 UTC" firstStartedPulling="2025-12-09 09:35:25.981328615 +0000 UTC m=+3091.864949841" lastFinishedPulling="2025-12-09 09:35:26.205969916 +0000 UTC m=+3092.089591142" observedRunningTime="2025-12-09 09:35:26.956654715 +0000 UTC m=+3092.840275941" watchObservedRunningTime="2025-12-09 09:35:26.980245376 +0000 UTC m=+3092.863866592" Dec 09 09:35:27 crc kubenswrapper[4786]: I1209 09:35:27.959743 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-0" event={"ID":"fc7497d0-e84b-4a17-8d33-b63bf384eee8","Type":"ContainerStarted","Data":"d6ff384b30f55a6dfd3234fff21f332b910d268128dadebe320551d49258d4f6"} Dec 09 09:35:27 crc kubenswrapper[4786]: I1209 09:35:27.963201 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-nfs-2-0" event={"ID":"24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df","Type":"ContainerStarted","Data":"c46e07701df784c1a9bbd229d883c16577a0630e96675cab134fc958c1b887c0"} Dec 
09 09:35:28 crc kubenswrapper[4786]: I1209 09:35:28.094312 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-nfs-0" podStartSLOduration=2.899018325 podStartE2EDuration="3.094292601s" podCreationTimestamp="2025-12-09 09:35:25 +0000 UTC" firstStartedPulling="2025-12-09 09:35:26.241941132 +0000 UTC m=+3092.125562358" lastFinishedPulling="2025-12-09 09:35:26.437215408 +0000 UTC m=+3092.320836634" observedRunningTime="2025-12-09 09:35:28.067298736 +0000 UTC m=+3093.950919962" watchObservedRunningTime="2025-12-09 09:35:28.094292601 +0000 UTC m=+3093.977913827" Dec 09 09:35:28 crc kubenswrapper[4786]: I1209 09:35:28.101538 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-nfs-2-0" podStartSLOduration=2.8600556040000003 podStartE2EDuration="3.101523749s" podCreationTimestamp="2025-12-09 09:35:25 +0000 UTC" firstStartedPulling="2025-12-09 09:35:26.189695865 +0000 UTC m=+3092.073317081" lastFinishedPulling="2025-12-09 09:35:26.431164 +0000 UTC m=+3092.314785226" observedRunningTime="2025-12-09 09:35:28.089676677 +0000 UTC m=+3093.973297913" watchObservedRunningTime="2025-12-09 09:35:28.101523749 +0000 UTC m=+3093.985144975" Dec 09 09:35:30 crc kubenswrapper[4786]: I1209 09:35:30.288913 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Dec 09 09:35:30 crc kubenswrapper[4786]: I1209 09:35:30.408949 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-nfs-0" Dec 09 09:35:30 crc kubenswrapper[4786]: I1209 09:35:30.409038 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-nfs-2-0" Dec 09 09:35:35 crc kubenswrapper[4786]: I1209 09:35:35.450969 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Dec 09 09:35:35 crc kubenswrapper[4786]: I1209 09:35:35.678675 4786 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-nfs-2-0" Dec 09 09:35:35 crc kubenswrapper[4786]: I1209 09:35:35.838563 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-nfs-0" Dec 09 09:35:54 crc kubenswrapper[4786]: I1209 09:35:54.989376 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 09:35:54 crc kubenswrapper[4786]: I1209 09:35:54.990051 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 09:35:54 crc kubenswrapper[4786]: I1209 09:35:54.990117 4786 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" Dec 09 09:35:54 crc kubenswrapper[4786]: I1209 09:35:54.991524 4786 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0db36b24a99300a74d428e6b1f2b5b5dde9789633e3d8e42840c3f61dd5ac4ef"} pod="openshift-machine-config-operator/machine-config-daemon-86k5n" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 09:35:54 crc kubenswrapper[4786]: I1209 09:35:54.991626 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" 
containerID="cri-o://0db36b24a99300a74d428e6b1f2b5b5dde9789633e3d8e42840c3f61dd5ac4ef" gracePeriod=600 Dec 09 09:35:55 crc kubenswrapper[4786]: E1209 09:35:55.117696 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:35:55 crc kubenswrapper[4786]: I1209 09:35:55.456137 4786 generic.go:334] "Generic (PLEG): container finished" podID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerID="0db36b24a99300a74d428e6b1f2b5b5dde9789633e3d8e42840c3f61dd5ac4ef" exitCode=0 Dec 09 09:35:55 crc kubenswrapper[4786]: I1209 09:35:55.456208 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" event={"ID":"60c502d4-7f9e-4d39-a197-fa70dc4a56d1","Type":"ContainerDied","Data":"0db36b24a99300a74d428e6b1f2b5b5dde9789633e3d8e42840c3f61dd5ac4ef"} Dec 09 09:35:55 crc kubenswrapper[4786]: I1209 09:35:55.456594 4786 scope.go:117] "RemoveContainer" containerID="a1037d8b8d01c499e1b227ea9e68658b3ee1012303ade8df1a89fe76c43ee5ed" Dec 09 09:35:55 crc kubenswrapper[4786]: I1209 09:35:55.457286 4786 scope.go:117] "RemoveContainer" containerID="0db36b24a99300a74d428e6b1f2b5b5dde9789633e3d8e42840c3f61dd5ac4ef" Dec 09 09:35:55 crc kubenswrapper[4786]: E1209 09:35:55.457605 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" 
podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:36:08 crc kubenswrapper[4786]: I1209 09:36:08.189054 4786 scope.go:117] "RemoveContainer" containerID="0db36b24a99300a74d428e6b1f2b5b5dde9789633e3d8e42840c3f61dd5ac4ef" Dec 09 09:36:08 crc kubenswrapper[4786]: E1209 09:36:08.190005 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:36:20 crc kubenswrapper[4786]: I1209 09:36:20.188496 4786 scope.go:117] "RemoveContainer" containerID="0db36b24a99300a74d428e6b1f2b5b5dde9789633e3d8e42840c3f61dd5ac4ef" Dec 09 09:36:20 crc kubenswrapper[4786]: E1209 09:36:20.189550 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:36:29 crc kubenswrapper[4786]: I1209 09:36:29.345872 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 09 09:36:29 crc kubenswrapper[4786]: I1209 09:36:29.351344 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="374a9777-590c-4ba3-afd5-9287abcd72a0" containerName="config-reloader" containerID="cri-o://a8544eaa538426b59f7946b5b5b08e54a750a510d0ef38038a501f2725e6e6e4" gracePeriod=600 Dec 09 09:36:29 crc kubenswrapper[4786]: I1209 09:36:29.351351 4786 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="374a9777-590c-4ba3-afd5-9287abcd72a0" containerName="thanos-sidecar" containerID="cri-o://130033b0d8d8a0b705bc89a996c4d79473f4fe0f05973269c2cf5447598e005f" gracePeriod=600 Dec 09 09:36:29 crc kubenswrapper[4786]: I1209 09:36:29.351657 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="374a9777-590c-4ba3-afd5-9287abcd72a0" containerName="prometheus" containerID="cri-o://bf539cb1568e0ba52ee652abbafe187288bc90b74658acbf805efbb7e9013167" gracePeriod=600 Dec 09 09:36:29 crc kubenswrapper[4786]: I1209 09:36:29.826603 4786 generic.go:334] "Generic (PLEG): container finished" podID="374a9777-590c-4ba3-afd5-9287abcd72a0" containerID="130033b0d8d8a0b705bc89a996c4d79473f4fe0f05973269c2cf5447598e005f" exitCode=0 Dec 09 09:36:29 crc kubenswrapper[4786]: I1209 09:36:29.826934 4786 generic.go:334] "Generic (PLEG): container finished" podID="374a9777-590c-4ba3-afd5-9287abcd72a0" containerID="a8544eaa538426b59f7946b5b5b08e54a750a510d0ef38038a501f2725e6e6e4" exitCode=0 Dec 09 09:36:29 crc kubenswrapper[4786]: I1209 09:36:29.826944 4786 generic.go:334] "Generic (PLEG): container finished" podID="374a9777-590c-4ba3-afd5-9287abcd72a0" containerID="bf539cb1568e0ba52ee652abbafe187288bc90b74658acbf805efbb7e9013167" exitCode=0 Dec 09 09:36:29 crc kubenswrapper[4786]: I1209 09:36:29.826685 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"374a9777-590c-4ba3-afd5-9287abcd72a0","Type":"ContainerDied","Data":"130033b0d8d8a0b705bc89a996c4d79473f4fe0f05973269c2cf5447598e005f"} Dec 09 09:36:29 crc kubenswrapper[4786]: I1209 09:36:29.826985 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"374a9777-590c-4ba3-afd5-9287abcd72a0","Type":"ContainerDied","Data":"a8544eaa538426b59f7946b5b5b08e54a750a510d0ef38038a501f2725e6e6e4"} Dec 09 09:36:29 crc kubenswrapper[4786]: I1209 09:36:29.827000 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"374a9777-590c-4ba3-afd5-9287abcd72a0","Type":"ContainerDied","Data":"bf539cb1568e0ba52ee652abbafe187288bc90b74658acbf805efbb7e9013167"} Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.298620 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.374972 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/374a9777-590c-4ba3-afd5-9287abcd72a0-config\") pod \"374a9777-590c-4ba3-afd5-9287abcd72a0\" (UID: \"374a9777-590c-4ba3-afd5-9287abcd72a0\") " Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.375033 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/374a9777-590c-4ba3-afd5-9287abcd72a0-thanos-prometheus-http-client-file\") pod \"374a9777-590c-4ba3-afd5-9287abcd72a0\" (UID: \"374a9777-590c-4ba3-afd5-9287abcd72a0\") " Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.375132 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/374a9777-590c-4ba3-afd5-9287abcd72a0-prometheus-metric-storage-rulefiles-0\") pod \"374a9777-590c-4ba3-afd5-9287abcd72a0\" (UID: \"374a9777-590c-4ba3-afd5-9287abcd72a0\") " Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.375165 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: 
\"kubernetes.io/secret/374a9777-590c-4ba3-afd5-9287abcd72a0-web-config\") pod \"374a9777-590c-4ba3-afd5-9287abcd72a0\" (UID: \"374a9777-590c-4ba3-afd5-9287abcd72a0\") " Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.375208 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/374a9777-590c-4ba3-afd5-9287abcd72a0-tls-assets\") pod \"374a9777-590c-4ba3-afd5-9287abcd72a0\" (UID: \"374a9777-590c-4ba3-afd5-9287abcd72a0\") " Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.375229 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/374a9777-590c-4ba3-afd5-9287abcd72a0-config-out\") pod \"374a9777-590c-4ba3-afd5-9287abcd72a0\" (UID: \"374a9777-590c-4ba3-afd5-9287abcd72a0\") " Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.375245 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pprx5\" (UniqueName: \"kubernetes.io/projected/374a9777-590c-4ba3-afd5-9287abcd72a0-kube-api-access-pprx5\") pod \"374a9777-590c-4ba3-afd5-9287abcd72a0\" (UID: \"374a9777-590c-4ba3-afd5-9287abcd72a0\") " Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.375284 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/374a9777-590c-4ba3-afd5-9287abcd72a0-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"374a9777-590c-4ba3-afd5-9287abcd72a0\" (UID: \"374a9777-590c-4ba3-afd5-9287abcd72a0\") " Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.376058 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: 
\"kubernetes.io/secret/374a9777-590c-4ba3-afd5-9287abcd72a0-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"374a9777-590c-4ba3-afd5-9287abcd72a0\" (UID: \"374a9777-590c-4ba3-afd5-9287abcd72a0\") " Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.376091 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/374a9777-590c-4ba3-afd5-9287abcd72a0-secret-combined-ca-bundle\") pod \"374a9777-590c-4ba3-afd5-9287abcd72a0\" (UID: \"374a9777-590c-4ba3-afd5-9287abcd72a0\") " Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.376213 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/374a9777-590c-4ba3-afd5-9287abcd72a0-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "374a9777-590c-4ba3-afd5-9287abcd72a0" (UID: "374a9777-590c-4ba3-afd5-9287abcd72a0"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.379640 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-28b31f81-4b46-4930-847d-98cf3cf77e89\") pod \"374a9777-590c-4ba3-afd5-9287abcd72a0\" (UID: \"374a9777-590c-4ba3-afd5-9287abcd72a0\") " Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.380887 4786 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/374a9777-590c-4ba3-afd5-9287abcd72a0-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.425988 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/374a9777-590c-4ba3-afd5-9287abcd72a0-config" (OuterVolumeSpecName: "config") pod "374a9777-590c-4ba3-afd5-9287abcd72a0" (UID: "374a9777-590c-4ba3-afd5-9287abcd72a0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.427451 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/374a9777-590c-4ba3-afd5-9287abcd72a0-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d") pod "374a9777-590c-4ba3-afd5-9287abcd72a0" (UID: "374a9777-590c-4ba3-afd5-9287abcd72a0"). InnerVolumeSpecName "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.427663 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/374a9777-590c-4ba3-afd5-9287abcd72a0-config-out" (OuterVolumeSpecName: "config-out") pod "374a9777-590c-4ba3-afd5-9287abcd72a0" (UID: "374a9777-590c-4ba3-afd5-9287abcd72a0"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.429603 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/374a9777-590c-4ba3-afd5-9287abcd72a0-secret-combined-ca-bundle" (OuterVolumeSpecName: "secret-combined-ca-bundle") pod "374a9777-590c-4ba3-afd5-9287abcd72a0" (UID: "374a9777-590c-4ba3-afd5-9287abcd72a0"). InnerVolumeSpecName "secret-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.429840 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/374a9777-590c-4ba3-afd5-9287abcd72a0-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "374a9777-590c-4ba3-afd5-9287abcd72a0" (UID: "374a9777-590c-4ba3-afd5-9287abcd72a0"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.431576 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/374a9777-590c-4ba3-afd5-9287abcd72a0-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d") pod "374a9777-590c-4ba3-afd5-9287abcd72a0" (UID: "374a9777-590c-4ba3-afd5-9287abcd72a0"). InnerVolumeSpecName "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.448842 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/374a9777-590c-4ba3-afd5-9287abcd72a0-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "374a9777-590c-4ba3-afd5-9287abcd72a0" (UID: "374a9777-590c-4ba3-afd5-9287abcd72a0"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.448990 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/374a9777-590c-4ba3-afd5-9287abcd72a0-kube-api-access-pprx5" (OuterVolumeSpecName: "kube-api-access-pprx5") pod "374a9777-590c-4ba3-afd5-9287abcd72a0" (UID: "374a9777-590c-4ba3-afd5-9287abcd72a0"). InnerVolumeSpecName "kube-api-access-pprx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.486456 4786 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/374a9777-590c-4ba3-afd5-9287abcd72a0-tls-assets\") on node \"crc\" DevicePath \"\"" Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.486516 4786 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/374a9777-590c-4ba3-afd5-9287abcd72a0-config-out\") on node \"crc\" DevicePath \"\"" Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.486529 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pprx5\" (UniqueName: \"kubernetes.io/projected/374a9777-590c-4ba3-afd5-9287abcd72a0-kube-api-access-pprx5\") on node \"crc\" DevicePath \"\"" Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.486547 4786 reconciler_common.go:293] "Volume detached for volume 
\"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/374a9777-590c-4ba3-afd5-9287abcd72a0-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") on node \"crc\" DevicePath \"\"" Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.486563 4786 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/374a9777-590c-4ba3-afd5-9287abcd72a0-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") on node \"crc\" DevicePath \"\"" Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.486577 4786 reconciler_common.go:293] "Volume detached for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/374a9777-590c-4ba3-afd5-9287abcd72a0-secret-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.486588 4786 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/374a9777-590c-4ba3-afd5-9287abcd72a0-config\") on node \"crc\" DevicePath \"\"" Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.486599 4786 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/374a9777-590c-4ba3-afd5-9287abcd72a0-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.622410 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-28b31f81-4b46-4930-847d-98cf3cf77e89" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "374a9777-590c-4ba3-afd5-9287abcd72a0" (UID: "374a9777-590c-4ba3-afd5-9287abcd72a0"). InnerVolumeSpecName "pvc-28b31f81-4b46-4930-847d-98cf3cf77e89". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.647866 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/374a9777-590c-4ba3-afd5-9287abcd72a0-web-config" (OuterVolumeSpecName: "web-config") pod "374a9777-590c-4ba3-afd5-9287abcd72a0" (UID: "374a9777-590c-4ba3-afd5-9287abcd72a0"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.692537 4786 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-28b31f81-4b46-4930-847d-98cf3cf77e89\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-28b31f81-4b46-4930-847d-98cf3cf77e89\") on node \"crc\" " Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.692578 4786 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/374a9777-590c-4ba3-afd5-9287abcd72a0-web-config\") on node \"crc\" DevicePath \"\"" Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.720779 4786 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.721822 4786 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-28b31f81-4b46-4930-847d-98cf3cf77e89" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-28b31f81-4b46-4930-847d-98cf3cf77e89") on node "crc" Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.794288 4786 reconciler_common.go:293] "Volume detached for volume \"pvc-28b31f81-4b46-4930-847d-98cf3cf77e89\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-28b31f81-4b46-4930-847d-98cf3cf77e89\") on node \"crc\" DevicePath \"\"" Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.843096 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"374a9777-590c-4ba3-afd5-9287abcd72a0","Type":"ContainerDied","Data":"5a64d1b61d664e2c57b77efa93f64c49d750de3503c6faf75c75d7bb480e4f96"} Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.843456 4786 scope.go:117] "RemoveContainer" containerID="130033b0d8d8a0b705bc89a996c4d79473f4fe0f05973269c2cf5447598e005f" Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.843165 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.890930 4786 scope.go:117] "RemoveContainer" containerID="a8544eaa538426b59f7946b5b5b08e54a750a510d0ef38038a501f2725e6e6e4" Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.891722 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.903833 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.916333 4786 scope.go:117] "RemoveContainer" containerID="bf539cb1568e0ba52ee652abbafe187288bc90b74658acbf805efbb7e9013167" Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.923368 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 09 09:36:30 crc kubenswrapper[4786]: E1209 09:36:30.924919 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="374a9777-590c-4ba3-afd5-9287abcd72a0" containerName="init-config-reloader" Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.924951 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="374a9777-590c-4ba3-afd5-9287abcd72a0" containerName="init-config-reloader" Dec 09 09:36:30 crc kubenswrapper[4786]: E1209 09:36:30.924977 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="374a9777-590c-4ba3-afd5-9287abcd72a0" containerName="thanos-sidecar" Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.924983 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="374a9777-590c-4ba3-afd5-9287abcd72a0" containerName="thanos-sidecar" Dec 09 09:36:30 crc kubenswrapper[4786]: E1209 09:36:30.924993 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="374a9777-590c-4ba3-afd5-9287abcd72a0" containerName="config-reloader" Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.925000 4786 
state_mem.go:107] "Deleted CPUSet assignment" podUID="374a9777-590c-4ba3-afd5-9287abcd72a0" containerName="config-reloader" Dec 09 09:36:30 crc kubenswrapper[4786]: E1209 09:36:30.925013 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="374a9777-590c-4ba3-afd5-9287abcd72a0" containerName="prometheus" Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.925018 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="374a9777-590c-4ba3-afd5-9287abcd72a0" containerName="prometheus" Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.925239 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="374a9777-590c-4ba3-afd5-9287abcd72a0" containerName="thanos-sidecar" Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.925253 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="374a9777-590c-4ba3-afd5-9287abcd72a0" containerName="config-reloader" Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.925265 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="374a9777-590c-4ba3-afd5-9287abcd72a0" containerName="prometheus" Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.927484 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.934013 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.934273 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.934638 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.936914 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-7szbp" Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.937618 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.946964 4786 scope.go:117] "RemoveContainer" containerID="4674188bcc016e82af59b634d302e4ab1ff996e7550beb31e346de4712fb78f0" Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.953159 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Dec 09 09:36:30 crc kubenswrapper[4786]: I1209 09:36:30.969188 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 09 09:36:31 crc kubenswrapper[4786]: I1209 09:36:31.100193 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-28b31f81-4b46-4930-847d-98cf3cf77e89\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-28b31f81-4b46-4930-847d-98cf3cf77e89\") pod \"prometheus-metric-storage-0\" (UID: \"0fec9fc6-5660-4127-95a1-63f6abee883e\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:36:31 crc 
kubenswrapper[4786]: I1209 09:36:31.100261 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0fec9fc6-5660-4127-95a1-63f6abee883e-config\") pod \"prometheus-metric-storage-0\" (UID: \"0fec9fc6-5660-4127-95a1-63f6abee883e\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:36:31 crc kubenswrapper[4786]: I1209 09:36:31.100447 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0fec9fc6-5660-4127-95a1-63f6abee883e-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"0fec9fc6-5660-4127-95a1-63f6abee883e\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:36:31 crc kubenswrapper[4786]: I1209 09:36:31.100539 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0fec9fc6-5660-4127-95a1-63f6abee883e-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"0fec9fc6-5660-4127-95a1-63f6abee883e\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:36:31 crc kubenswrapper[4786]: I1209 09:36:31.100588 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0fec9fc6-5660-4127-95a1-63f6abee883e-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"0fec9fc6-5660-4127-95a1-63f6abee883e\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:36:31 crc kubenswrapper[4786]: I1209 09:36:31.100672 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0fec9fc6-5660-4127-95a1-63f6abee883e-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod 
\"prometheus-metric-storage-0\" (UID: \"0fec9fc6-5660-4127-95a1-63f6abee883e\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:36:31 crc kubenswrapper[4786]: I1209 09:36:31.100739 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0fec9fc6-5660-4127-95a1-63f6abee883e-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0fec9fc6-5660-4127-95a1-63f6abee883e\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:36:31 crc kubenswrapper[4786]: I1209 09:36:31.100803 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0fec9fc6-5660-4127-95a1-63f6abee883e-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"0fec9fc6-5660-4127-95a1-63f6abee883e\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:36:31 crc kubenswrapper[4786]: I1209 09:36:31.100849 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0fec9fc6-5660-4127-95a1-63f6abee883e-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"0fec9fc6-5660-4127-95a1-63f6abee883e\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:36:31 crc kubenswrapper[4786]: I1209 09:36:31.100879 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqw77\" (UniqueName: \"kubernetes.io/projected/0fec9fc6-5660-4127-95a1-63f6abee883e-kube-api-access-rqw77\") pod \"prometheus-metric-storage-0\" (UID: \"0fec9fc6-5660-4127-95a1-63f6abee883e\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:36:31 crc kubenswrapper[4786]: I1209 09:36:31.100918 4786 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fec9fc6-5660-4127-95a1-63f6abee883e-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"0fec9fc6-5660-4127-95a1-63f6abee883e\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:36:31 crc kubenswrapper[4786]: I1209 09:36:31.201795 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="374a9777-590c-4ba3-afd5-9287abcd72a0" path="/var/lib/kubelet/pods/374a9777-590c-4ba3-afd5-9287abcd72a0/volumes" Dec 09 09:36:31 crc kubenswrapper[4786]: I1209 09:36:31.202253 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0fec9fc6-5660-4127-95a1-63f6abee883e-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"0fec9fc6-5660-4127-95a1-63f6abee883e\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:36:31 crc kubenswrapper[4786]: I1209 09:36:31.202340 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0fec9fc6-5660-4127-95a1-63f6abee883e-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"0fec9fc6-5660-4127-95a1-63f6abee883e\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:36:31 crc kubenswrapper[4786]: I1209 09:36:31.202369 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0fec9fc6-5660-4127-95a1-63f6abee883e-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"0fec9fc6-5660-4127-95a1-63f6abee883e\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:36:31 crc kubenswrapper[4786]: I1209 09:36:31.202450 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: 
\"kubernetes.io/secret/0fec9fc6-5660-4127-95a1-63f6abee883e-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0fec9fc6-5660-4127-95a1-63f6abee883e\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:36:31 crc kubenswrapper[4786]: I1209 09:36:31.202488 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0fec9fc6-5660-4127-95a1-63f6abee883e-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0fec9fc6-5660-4127-95a1-63f6abee883e\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:36:31 crc kubenswrapper[4786]: I1209 09:36:31.202539 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0fec9fc6-5660-4127-95a1-63f6abee883e-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"0fec9fc6-5660-4127-95a1-63f6abee883e\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:36:31 crc kubenswrapper[4786]: I1209 09:36:31.202578 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0fec9fc6-5660-4127-95a1-63f6abee883e-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"0fec9fc6-5660-4127-95a1-63f6abee883e\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:36:31 crc kubenswrapper[4786]: I1209 09:36:31.202604 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqw77\" (UniqueName: \"kubernetes.io/projected/0fec9fc6-5660-4127-95a1-63f6abee883e-kube-api-access-rqw77\") pod \"prometheus-metric-storage-0\" (UID: \"0fec9fc6-5660-4127-95a1-63f6abee883e\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:36:31 crc kubenswrapper[4786]: I1209 
09:36:31.202633 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fec9fc6-5660-4127-95a1-63f6abee883e-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"0fec9fc6-5660-4127-95a1-63f6abee883e\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:36:31 crc kubenswrapper[4786]: I1209 09:36:31.202664 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-28b31f81-4b46-4930-847d-98cf3cf77e89\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-28b31f81-4b46-4930-847d-98cf3cf77e89\") pod \"prometheus-metric-storage-0\" (UID: \"0fec9fc6-5660-4127-95a1-63f6abee883e\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:36:31 crc kubenswrapper[4786]: I1209 09:36:31.202686 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0fec9fc6-5660-4127-95a1-63f6abee883e-config\") pod \"prometheus-metric-storage-0\" (UID: \"0fec9fc6-5660-4127-95a1-63f6abee883e\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:36:31 crc kubenswrapper[4786]: I1209 09:36:31.203412 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/0fec9fc6-5660-4127-95a1-63f6abee883e-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"0fec9fc6-5660-4127-95a1-63f6abee883e\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:36:31 crc kubenswrapper[4786]: I1209 09:36:31.207162 4786 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 09 09:36:31 crc kubenswrapper[4786]: I1209 09:36:31.207204 4786 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-28b31f81-4b46-4930-847d-98cf3cf77e89\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-28b31f81-4b46-4930-847d-98cf3cf77e89\") pod \"prometheus-metric-storage-0\" (UID: \"0fec9fc6-5660-4127-95a1-63f6abee883e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/8cfbd567b829700af2d6ada5c2de407e7db840737c905f740b19ad0b115df38c/globalmount\"" pod="openstack/prometheus-metric-storage-0" Dec 09 09:36:31 crc kubenswrapper[4786]: I1209 09:36:31.208184 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/0fec9fc6-5660-4127-95a1-63f6abee883e-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"0fec9fc6-5660-4127-95a1-63f6abee883e\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:36:31 crc kubenswrapper[4786]: I1209 09:36:31.208555 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/0fec9fc6-5660-4127-95a1-63f6abee883e-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"0fec9fc6-5660-4127-95a1-63f6abee883e\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:36:31 crc kubenswrapper[4786]: I1209 09:36:31.208938 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/0fec9fc6-5660-4127-95a1-63f6abee883e-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"0fec9fc6-5660-4127-95a1-63f6abee883e\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:36:31 crc kubenswrapper[4786]: I1209 09:36:31.210746 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0fec9fc6-5660-4127-95a1-63f6abee883e-config\") pod 
\"prometheus-metric-storage-0\" (UID: \"0fec9fc6-5660-4127-95a1-63f6abee883e\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:36:31 crc kubenswrapper[4786]: I1209 09:36:31.211410 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0fec9fc6-5660-4127-95a1-63f6abee883e-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0fec9fc6-5660-4127-95a1-63f6abee883e\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:36:31 crc kubenswrapper[4786]: I1209 09:36:31.213191 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/0fec9fc6-5660-4127-95a1-63f6abee883e-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"0fec9fc6-5660-4127-95a1-63f6abee883e\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:36:31 crc kubenswrapper[4786]: I1209 09:36:31.213236 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/0fec9fc6-5660-4127-95a1-63f6abee883e-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"0fec9fc6-5660-4127-95a1-63f6abee883e\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:36:31 crc kubenswrapper[4786]: I1209 09:36:31.214584 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0fec9fc6-5660-4127-95a1-63f6abee883e-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"0fec9fc6-5660-4127-95a1-63f6abee883e\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:36:31 crc kubenswrapper[4786]: I1209 09:36:31.223168 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-rqw77\" (UniqueName: \"kubernetes.io/projected/0fec9fc6-5660-4127-95a1-63f6abee883e-kube-api-access-rqw77\") pod \"prometheus-metric-storage-0\" (UID: \"0fec9fc6-5660-4127-95a1-63f6abee883e\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:36:31 crc kubenswrapper[4786]: I1209 09:36:31.255949 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-28b31f81-4b46-4930-847d-98cf3cf77e89\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-28b31f81-4b46-4930-847d-98cf3cf77e89\") pod \"prometheus-metric-storage-0\" (UID: \"0fec9fc6-5660-4127-95a1-63f6abee883e\") " pod="openstack/prometheus-metric-storage-0" Dec 09 09:36:31 crc kubenswrapper[4786]: I1209 09:36:31.547518 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 09 09:36:32 crc kubenswrapper[4786]: I1209 09:36:32.005522 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 09 09:36:32 crc kubenswrapper[4786]: I1209 09:36:32.861698 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0fec9fc6-5660-4127-95a1-63f6abee883e","Type":"ContainerStarted","Data":"1b87bde0797cc760a1e7d6eef0f3f5458326aaa635230e841cb09fe627be23c6"} Dec 09 09:36:35 crc kubenswrapper[4786]: I1209 09:36:35.199068 4786 scope.go:117] "RemoveContainer" containerID="0db36b24a99300a74d428e6b1f2b5b5dde9789633e3d8e42840c3f61dd5ac4ef" Dec 09 09:36:35 crc kubenswrapper[4786]: E1209 09:36:35.199997 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" 
podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:36:35 crc kubenswrapper[4786]: I1209 09:36:35.905456 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0fec9fc6-5660-4127-95a1-63f6abee883e","Type":"ContainerStarted","Data":"b0105f35cefed44189b4d3f7c4216627fe7bb9ae003a34196967afac9dd8f97a"} Dec 09 09:36:43 crc kubenswrapper[4786]: I1209 09:36:43.994491 4786 generic.go:334] "Generic (PLEG): container finished" podID="0fec9fc6-5660-4127-95a1-63f6abee883e" containerID="b0105f35cefed44189b4d3f7c4216627fe7bb9ae003a34196967afac9dd8f97a" exitCode=0 Dec 09 09:36:43 crc kubenswrapper[4786]: I1209 09:36:43.994554 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0fec9fc6-5660-4127-95a1-63f6abee883e","Type":"ContainerDied","Data":"b0105f35cefed44189b4d3f7c4216627fe7bb9ae003a34196967afac9dd8f97a"} Dec 09 09:36:45 crc kubenswrapper[4786]: I1209 09:36:45.025328 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0fec9fc6-5660-4127-95a1-63f6abee883e","Type":"ContainerStarted","Data":"ea682cd03c05bffc2e36f23e7192246a60175031829cdd746cbf9b840e56ef6d"} Dec 09 09:36:48 crc kubenswrapper[4786]: I1209 09:36:48.187779 4786 scope.go:117] "RemoveContainer" containerID="0db36b24a99300a74d428e6b1f2b5b5dde9789633e3d8e42840c3f61dd5ac4ef" Dec 09 09:36:48 crc kubenswrapper[4786]: E1209 09:36:48.188609 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:36:50 crc kubenswrapper[4786]: I1209 09:36:50.083980 4786 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0fec9fc6-5660-4127-95a1-63f6abee883e","Type":"ContainerStarted","Data":"d222152e3b697208f3e00cd6031b8ed4710951e93c92f432840690fd2b7f79a0"} Dec 09 09:36:50 crc kubenswrapper[4786]: I1209 09:36:50.084490 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"0fec9fc6-5660-4127-95a1-63f6abee883e","Type":"ContainerStarted","Data":"3e5cd25908d4e113c4e48d9f8a62936226acaa3bfe239ffc019bf527d23de6ce"} Dec 09 09:36:50 crc kubenswrapper[4786]: I1209 09:36:50.133756 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=20.133737498 podStartE2EDuration="20.133737498s" podCreationTimestamp="2025-12-09 09:36:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 09:36:50.119962189 +0000 UTC m=+3176.003583415" watchObservedRunningTime="2025-12-09 09:36:50.133737498 +0000 UTC m=+3176.017358724" Dec 09 09:36:51 crc kubenswrapper[4786]: I1209 09:36:51.547824 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Dec 09 09:37:01 crc kubenswrapper[4786]: I1209 09:37:01.188867 4786 scope.go:117] "RemoveContainer" containerID="0db36b24a99300a74d428e6b1f2b5b5dde9789633e3d8e42840c3f61dd5ac4ef" Dec 09 09:37:01 crc kubenswrapper[4786]: E1209 09:37:01.191603 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:37:01 crc 
kubenswrapper[4786]: I1209 09:37:01.548340 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Dec 09 09:37:01 crc kubenswrapper[4786]: I1209 09:37:01.555099 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Dec 09 09:37:02 crc kubenswrapper[4786]: I1209 09:37:02.226216 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Dec 09 09:37:07 crc kubenswrapper[4786]: I1209 09:37:07.215726 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Dec 09 09:37:07 crc kubenswrapper[4786]: I1209 09:37:07.218081 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 09 09:37:07 crc kubenswrapper[4786]: I1209 09:37:07.224038 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Dec 09 09:37:07 crc kubenswrapper[4786]: I1209 09:37:07.224037 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Dec 09 09:37:07 crc kubenswrapper[4786]: I1209 09:37:07.224202 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-dpx84" Dec 09 09:37:07 crc kubenswrapper[4786]: I1209 09:37:07.224276 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Dec 09 09:37:07 crc kubenswrapper[4786]: I1209 09:37:07.232067 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Dec 09 09:37:07 crc kubenswrapper[4786]: I1209 09:37:07.277653 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdk5s\" (UniqueName: 
\"kubernetes.io/projected/0112bf44-5116-4b72-a860-4fc091e5dc27-kube-api-access-kdk5s\") pod \"tempest-tests-tempest\" (UID: \"0112bf44-5116-4b72-a860-4fc091e5dc27\") " pod="openstack/tempest-tests-tempest" Dec 09 09:37:07 crc kubenswrapper[4786]: I1209 09:37:07.277848 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0112bf44-5116-4b72-a860-4fc091e5dc27-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"0112bf44-5116-4b72-a860-4fc091e5dc27\") " pod="openstack/tempest-tests-tempest" Dec 09 09:37:07 crc kubenswrapper[4786]: I1209 09:37:07.277955 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/0112bf44-5116-4b72-a860-4fc091e5dc27-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"0112bf44-5116-4b72-a860-4fc091e5dc27\") " pod="openstack/tempest-tests-tempest" Dec 09 09:37:07 crc kubenswrapper[4786]: I1209 09:37:07.278033 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0112bf44-5116-4b72-a860-4fc091e5dc27-config-data\") pod \"tempest-tests-tempest\" (UID: \"0112bf44-5116-4b72-a860-4fc091e5dc27\") " pod="openstack/tempest-tests-tempest" Dec 09 09:37:07 crc kubenswrapper[4786]: I1209 09:37:07.278412 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/0112bf44-5116-4b72-a860-4fc091e5dc27-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"0112bf44-5116-4b72-a860-4fc091e5dc27\") " pod="openstack/tempest-tests-tempest" Dec 09 09:37:07 crc kubenswrapper[4786]: I1209 09:37:07.278520 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/0112bf44-5116-4b72-a860-4fc091e5dc27-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"0112bf44-5116-4b72-a860-4fc091e5dc27\") " pod="openstack/tempest-tests-tempest" Dec 09 09:37:07 crc kubenswrapper[4786]: I1209 09:37:07.278669 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/0112bf44-5116-4b72-a860-4fc091e5dc27-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"0112bf44-5116-4b72-a860-4fc091e5dc27\") " pod="openstack/tempest-tests-tempest" Dec 09 09:37:07 crc kubenswrapper[4786]: I1209 09:37:07.278977 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"0112bf44-5116-4b72-a860-4fc091e5dc27\") " pod="openstack/tempest-tests-tempest" Dec 09 09:37:07 crc kubenswrapper[4786]: I1209 09:37:07.279043 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0112bf44-5116-4b72-a860-4fc091e5dc27-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"0112bf44-5116-4b72-a860-4fc091e5dc27\") " pod="openstack/tempest-tests-tempest" Dec 09 09:37:07 crc kubenswrapper[4786]: I1209 09:37:07.380663 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0112bf44-5116-4b72-a860-4fc091e5dc27-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"0112bf44-5116-4b72-a860-4fc091e5dc27\") " pod="openstack/tempest-tests-tempest" Dec 09 09:37:07 crc kubenswrapper[4786]: I1209 09:37:07.380946 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/0112bf44-5116-4b72-a860-4fc091e5dc27-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"0112bf44-5116-4b72-a860-4fc091e5dc27\") " pod="openstack/tempest-tests-tempest" Dec 09 09:37:07 crc kubenswrapper[4786]: I1209 09:37:07.381038 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"0112bf44-5116-4b72-a860-4fc091e5dc27\") " pod="openstack/tempest-tests-tempest" Dec 09 09:37:07 crc kubenswrapper[4786]: I1209 09:37:07.381061 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0112bf44-5116-4b72-a860-4fc091e5dc27-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"0112bf44-5116-4b72-a860-4fc091e5dc27\") " pod="openstack/tempest-tests-tempest" Dec 09 09:37:07 crc kubenswrapper[4786]: I1209 09:37:07.381095 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdk5s\" (UniqueName: \"kubernetes.io/projected/0112bf44-5116-4b72-a860-4fc091e5dc27-kube-api-access-kdk5s\") pod \"tempest-tests-tempest\" (UID: \"0112bf44-5116-4b72-a860-4fc091e5dc27\") " pod="openstack/tempest-tests-tempest" Dec 09 09:37:07 crc kubenswrapper[4786]: I1209 09:37:07.381121 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0112bf44-5116-4b72-a860-4fc091e5dc27-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"0112bf44-5116-4b72-a860-4fc091e5dc27\") " pod="openstack/tempest-tests-tempest" Dec 09 09:37:07 crc kubenswrapper[4786]: I1209 09:37:07.381146 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/0112bf44-5116-4b72-a860-4fc091e5dc27-ca-certs\") pod \"tempest-tests-tempest\" (UID: 
\"0112bf44-5116-4b72-a860-4fc091e5dc27\") " pod="openstack/tempest-tests-tempest" Dec 09 09:37:07 crc kubenswrapper[4786]: I1209 09:37:07.381175 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0112bf44-5116-4b72-a860-4fc091e5dc27-config-data\") pod \"tempest-tests-tempest\" (UID: \"0112bf44-5116-4b72-a860-4fc091e5dc27\") " pod="openstack/tempest-tests-tempest" Dec 09 09:37:07 crc kubenswrapper[4786]: I1209 09:37:07.381273 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/0112bf44-5116-4b72-a860-4fc091e5dc27-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"0112bf44-5116-4b72-a860-4fc091e5dc27\") " pod="openstack/tempest-tests-tempest" Dec 09 09:37:07 crc kubenswrapper[4786]: I1209 09:37:07.381478 4786 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"0112bf44-5116-4b72-a860-4fc091e5dc27\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/tempest-tests-tempest" Dec 09 09:37:07 crc kubenswrapper[4786]: I1209 09:37:07.382175 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/0112bf44-5116-4b72-a860-4fc091e5dc27-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"0112bf44-5116-4b72-a860-4fc091e5dc27\") " pod="openstack/tempest-tests-tempest" Dec 09 09:37:07 crc kubenswrapper[4786]: I1209 09:37:07.382196 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/0112bf44-5116-4b72-a860-4fc091e5dc27-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: 
\"0112bf44-5116-4b72-a860-4fc091e5dc27\") " pod="openstack/tempest-tests-tempest" Dec 09 09:37:07 crc kubenswrapper[4786]: I1209 09:37:07.382536 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0112bf44-5116-4b72-a860-4fc091e5dc27-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"0112bf44-5116-4b72-a860-4fc091e5dc27\") " pod="openstack/tempest-tests-tempest" Dec 09 09:37:07 crc kubenswrapper[4786]: I1209 09:37:07.383369 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0112bf44-5116-4b72-a860-4fc091e5dc27-config-data\") pod \"tempest-tests-tempest\" (UID: \"0112bf44-5116-4b72-a860-4fc091e5dc27\") " pod="openstack/tempest-tests-tempest" Dec 09 09:37:07 crc kubenswrapper[4786]: I1209 09:37:07.389078 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0112bf44-5116-4b72-a860-4fc091e5dc27-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"0112bf44-5116-4b72-a860-4fc091e5dc27\") " pod="openstack/tempest-tests-tempest" Dec 09 09:37:07 crc kubenswrapper[4786]: I1209 09:37:07.389373 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/0112bf44-5116-4b72-a860-4fc091e5dc27-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"0112bf44-5116-4b72-a860-4fc091e5dc27\") " pod="openstack/tempest-tests-tempest" Dec 09 09:37:07 crc kubenswrapper[4786]: I1209 09:37:07.390996 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0112bf44-5116-4b72-a860-4fc091e5dc27-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"0112bf44-5116-4b72-a860-4fc091e5dc27\") " pod="openstack/tempest-tests-tempest" Dec 09 09:37:07 crc kubenswrapper[4786]: I1209 09:37:07.405473 4786 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-kdk5s\" (UniqueName: \"kubernetes.io/projected/0112bf44-5116-4b72-a860-4fc091e5dc27-kube-api-access-kdk5s\") pod \"tempest-tests-tempest\" (UID: \"0112bf44-5116-4b72-a860-4fc091e5dc27\") " pod="openstack/tempest-tests-tempest" Dec 09 09:37:07 crc kubenswrapper[4786]: I1209 09:37:07.415101 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"tempest-tests-tempest\" (UID: \"0112bf44-5116-4b72-a860-4fc091e5dc27\") " pod="openstack/tempest-tests-tempest" Dec 09 09:37:07 crc kubenswrapper[4786]: I1209 09:37:07.538540 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 09 09:37:08 crc kubenswrapper[4786]: I1209 09:37:08.017901 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Dec 09 09:37:08 crc kubenswrapper[4786]: I1209 09:37:08.283391 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"0112bf44-5116-4b72-a860-4fc091e5dc27","Type":"ContainerStarted","Data":"405f24b6fa6f393581784144a41e2e2267a00b7327299ece5627023ab61c12d3"} Dec 09 09:37:15 crc kubenswrapper[4786]: I1209 09:37:15.198687 4786 scope.go:117] "RemoveContainer" containerID="0db36b24a99300a74d428e6b1f2b5b5dde9789633e3d8e42840c3f61dd5ac4ef" Dec 09 09:37:15 crc kubenswrapper[4786]: E1209 09:37:15.200275 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:37:19 crc kubenswrapper[4786]: I1209 
09:37:19.411785 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"0112bf44-5116-4b72-a860-4fc091e5dc27","Type":"ContainerStarted","Data":"ffef5b08f2f2912c6df065eb6958d26c86bf33406b2579f3912d5138d4c81264"} Dec 09 09:37:19 crc kubenswrapper[4786]: I1209 09:37:19.434528 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.826670439 podStartE2EDuration="13.434509874s" podCreationTimestamp="2025-12-09 09:37:06 +0000 UTC" firstStartedPulling="2025-12-09 09:37:08.022405524 +0000 UTC m=+3193.906026750" lastFinishedPulling="2025-12-09 09:37:17.630244959 +0000 UTC m=+3203.513866185" observedRunningTime="2025-12-09 09:37:19.433851739 +0000 UTC m=+3205.317472995" watchObservedRunningTime="2025-12-09 09:37:19.434509874 +0000 UTC m=+3205.318131100" Dec 09 09:37:29 crc kubenswrapper[4786]: I1209 09:37:29.188932 4786 scope.go:117] "RemoveContainer" containerID="0db36b24a99300a74d428e6b1f2b5b5dde9789633e3d8e42840c3f61dd5ac4ef" Dec 09 09:37:29 crc kubenswrapper[4786]: E1209 09:37:29.189865 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:37:43 crc kubenswrapper[4786]: I1209 09:37:43.188560 4786 scope.go:117] "RemoveContainer" containerID="0db36b24a99300a74d428e6b1f2b5b5dde9789633e3d8e42840c3f61dd5ac4ef" Dec 09 09:37:43 crc kubenswrapper[4786]: E1209 09:37:43.189399 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:37:54 crc kubenswrapper[4786]: I1209 09:37:54.188793 4786 scope.go:117] "RemoveContainer" containerID="0db36b24a99300a74d428e6b1f2b5b5dde9789633e3d8e42840c3f61dd5ac4ef" Dec 09 09:37:54 crc kubenswrapper[4786]: E1209 09:37:54.191557 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:38:09 crc kubenswrapper[4786]: I1209 09:38:09.188696 4786 scope.go:117] "RemoveContainer" containerID="0db36b24a99300a74d428e6b1f2b5b5dde9789633e3d8e42840c3f61dd5ac4ef" Dec 09 09:38:09 crc kubenswrapper[4786]: E1209 09:38:09.189587 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:38:22 crc kubenswrapper[4786]: I1209 09:38:22.187676 4786 scope.go:117] "RemoveContainer" containerID="0db36b24a99300a74d428e6b1f2b5b5dde9789633e3d8e42840c3f61dd5ac4ef" Dec 09 09:38:22 crc kubenswrapper[4786]: E1209 09:38:22.188571 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:38:33 crc kubenswrapper[4786]: I1209 09:38:33.189275 4786 scope.go:117] "RemoveContainer" containerID="0db36b24a99300a74d428e6b1f2b5b5dde9789633e3d8e42840c3f61dd5ac4ef" Dec 09 09:38:33 crc kubenswrapper[4786]: E1209 09:38:33.190284 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:38:45 crc kubenswrapper[4786]: I1209 09:38:45.194725 4786 scope.go:117] "RemoveContainer" containerID="0db36b24a99300a74d428e6b1f2b5b5dde9789633e3d8e42840c3f61dd5ac4ef" Dec 09 09:38:45 crc kubenswrapper[4786]: E1209 09:38:45.195521 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:38:58 crc kubenswrapper[4786]: I1209 09:38:58.188527 4786 scope.go:117] "RemoveContainer" containerID="0db36b24a99300a74d428e6b1f2b5b5dde9789633e3d8e42840c3f61dd5ac4ef" Dec 09 09:38:58 crc kubenswrapper[4786]: E1209 09:38:58.189489 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:39:11 crc kubenswrapper[4786]: I1209 09:39:11.189055 4786 scope.go:117] "RemoveContainer" containerID="0db36b24a99300a74d428e6b1f2b5b5dde9789633e3d8e42840c3f61dd5ac4ef" Dec 09 09:39:11 crc kubenswrapper[4786]: E1209 09:39:11.190106 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:39:22 crc kubenswrapper[4786]: I1209 09:39:22.188850 4786 scope.go:117] "RemoveContainer" containerID="0db36b24a99300a74d428e6b1f2b5b5dde9789633e3d8e42840c3f61dd5ac4ef" Dec 09 09:39:22 crc kubenswrapper[4786]: E1209 09:39:22.189919 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:39:26 crc kubenswrapper[4786]: I1209 09:39:26.305080 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wmh2j"] Dec 09 09:39:26 crc kubenswrapper[4786]: I1209 09:39:26.310381 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wmh2j" Dec 09 09:39:26 crc kubenswrapper[4786]: I1209 09:39:26.317682 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wmh2j"] Dec 09 09:39:26 crc kubenswrapper[4786]: I1209 09:39:26.424640 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbb1f437-fdd1-4dcc-874c-afecdb90f5d3-catalog-content\") pod \"redhat-operators-wmh2j\" (UID: \"cbb1f437-fdd1-4dcc-874c-afecdb90f5d3\") " pod="openshift-marketplace/redhat-operators-wmh2j" Dec 09 09:39:26 crc kubenswrapper[4786]: I1209 09:39:26.424739 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x6cg\" (UniqueName: \"kubernetes.io/projected/cbb1f437-fdd1-4dcc-874c-afecdb90f5d3-kube-api-access-2x6cg\") pod \"redhat-operators-wmh2j\" (UID: \"cbb1f437-fdd1-4dcc-874c-afecdb90f5d3\") " pod="openshift-marketplace/redhat-operators-wmh2j" Dec 09 09:39:26 crc kubenswrapper[4786]: I1209 09:39:26.426098 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbb1f437-fdd1-4dcc-874c-afecdb90f5d3-utilities\") pod \"redhat-operators-wmh2j\" (UID: \"cbb1f437-fdd1-4dcc-874c-afecdb90f5d3\") " pod="openshift-marketplace/redhat-operators-wmh2j" Dec 09 09:39:26 crc kubenswrapper[4786]: I1209 09:39:26.528353 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbb1f437-fdd1-4dcc-874c-afecdb90f5d3-catalog-content\") pod \"redhat-operators-wmh2j\" (UID: \"cbb1f437-fdd1-4dcc-874c-afecdb90f5d3\") " pod="openshift-marketplace/redhat-operators-wmh2j" Dec 09 09:39:26 crc kubenswrapper[4786]: I1209 09:39:26.528399 4786 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-2x6cg\" (UniqueName: \"kubernetes.io/projected/cbb1f437-fdd1-4dcc-874c-afecdb90f5d3-kube-api-access-2x6cg\") pod \"redhat-operators-wmh2j\" (UID: \"cbb1f437-fdd1-4dcc-874c-afecdb90f5d3\") " pod="openshift-marketplace/redhat-operators-wmh2j" Dec 09 09:39:26 crc kubenswrapper[4786]: I1209 09:39:26.528464 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbb1f437-fdd1-4dcc-874c-afecdb90f5d3-utilities\") pod \"redhat-operators-wmh2j\" (UID: \"cbb1f437-fdd1-4dcc-874c-afecdb90f5d3\") " pod="openshift-marketplace/redhat-operators-wmh2j" Dec 09 09:39:26 crc kubenswrapper[4786]: I1209 09:39:26.528871 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbb1f437-fdd1-4dcc-874c-afecdb90f5d3-catalog-content\") pod \"redhat-operators-wmh2j\" (UID: \"cbb1f437-fdd1-4dcc-874c-afecdb90f5d3\") " pod="openshift-marketplace/redhat-operators-wmh2j" Dec 09 09:39:26 crc kubenswrapper[4786]: I1209 09:39:26.528888 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbb1f437-fdd1-4dcc-874c-afecdb90f5d3-utilities\") pod \"redhat-operators-wmh2j\" (UID: \"cbb1f437-fdd1-4dcc-874c-afecdb90f5d3\") " pod="openshift-marketplace/redhat-operators-wmh2j" Dec 09 09:39:26 crc kubenswrapper[4786]: I1209 09:39:26.567447 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2x6cg\" (UniqueName: \"kubernetes.io/projected/cbb1f437-fdd1-4dcc-874c-afecdb90f5d3-kube-api-access-2x6cg\") pod \"redhat-operators-wmh2j\" (UID: \"cbb1f437-fdd1-4dcc-874c-afecdb90f5d3\") " pod="openshift-marketplace/redhat-operators-wmh2j" Dec 09 09:39:26 crc kubenswrapper[4786]: I1209 09:39:26.641162 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wmh2j" Dec 09 09:39:27 crc kubenswrapper[4786]: I1209 09:39:27.348844 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wmh2j"] Dec 09 09:39:27 crc kubenswrapper[4786]: I1209 09:39:27.855129 4786 generic.go:334] "Generic (PLEG): container finished" podID="cbb1f437-fdd1-4dcc-874c-afecdb90f5d3" containerID="6b9ac1f532f88ac40d23b297d5c34a67c30fbd4d8f25b4791b364b83187879cf" exitCode=0 Dec 09 09:39:27 crc kubenswrapper[4786]: I1209 09:39:27.965332 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wmh2j" event={"ID":"cbb1f437-fdd1-4dcc-874c-afecdb90f5d3","Type":"ContainerDied","Data":"6b9ac1f532f88ac40d23b297d5c34a67c30fbd4d8f25b4791b364b83187879cf"} Dec 09 09:39:27 crc kubenswrapper[4786]: I1209 09:39:27.965526 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wmh2j" event={"ID":"cbb1f437-fdd1-4dcc-874c-afecdb90f5d3","Type":"ContainerStarted","Data":"59a6e6233b10b60f14d2104a932b990980649c681a95a0b64329b64f07feb803"} Dec 09 09:39:29 crc kubenswrapper[4786]: I1209 09:39:29.895329 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wmh2j" event={"ID":"cbb1f437-fdd1-4dcc-874c-afecdb90f5d3","Type":"ContainerStarted","Data":"c864d899f886c0ec92dd66ecdcd20ae196ba1d7140c201a840faefd946c7a3c4"} Dec 09 09:39:32 crc kubenswrapper[4786]: I1209 09:39:32.923597 4786 generic.go:334] "Generic (PLEG): container finished" podID="cbb1f437-fdd1-4dcc-874c-afecdb90f5d3" containerID="c864d899f886c0ec92dd66ecdcd20ae196ba1d7140c201a840faefd946c7a3c4" exitCode=0 Dec 09 09:39:32 crc kubenswrapper[4786]: I1209 09:39:32.923679 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wmh2j" 
event={"ID":"cbb1f437-fdd1-4dcc-874c-afecdb90f5d3","Type":"ContainerDied","Data":"c864d899f886c0ec92dd66ecdcd20ae196ba1d7140c201a840faefd946c7a3c4"} Dec 09 09:39:33 crc kubenswrapper[4786]: I1209 09:39:33.188629 4786 scope.go:117] "RemoveContainer" containerID="0db36b24a99300a74d428e6b1f2b5b5dde9789633e3d8e42840c3f61dd5ac4ef" Dec 09 09:39:33 crc kubenswrapper[4786]: E1209 09:39:33.188945 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:39:33 crc kubenswrapper[4786]: I1209 09:39:33.942240 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wmh2j" event={"ID":"cbb1f437-fdd1-4dcc-874c-afecdb90f5d3","Type":"ContainerStarted","Data":"9423714bab0587ee8c00af82ce50de9ff26fa8052350dc0c6c8028cd68fafc70"} Dec 09 09:39:33 crc kubenswrapper[4786]: I1209 09:39:33.964939 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wmh2j" podStartSLOduration=2.571255479 podStartE2EDuration="7.964920761s" podCreationTimestamp="2025-12-09 09:39:26 +0000 UTC" firstStartedPulling="2025-12-09 09:39:27.970300252 +0000 UTC m=+3333.853921478" lastFinishedPulling="2025-12-09 09:39:33.363965534 +0000 UTC m=+3339.247586760" observedRunningTime="2025-12-09 09:39:33.960743985 +0000 UTC m=+3339.844365221" watchObservedRunningTime="2025-12-09 09:39:33.964920761 +0000 UTC m=+3339.848541987" Dec 09 09:39:36 crc kubenswrapper[4786]: I1209 09:39:36.641275 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wmh2j" Dec 09 09:39:36 crc kubenswrapper[4786]: 
I1209 09:39:36.641828 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wmh2j" Dec 09 09:39:37 crc kubenswrapper[4786]: I1209 09:39:37.730085 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wmh2j" podUID="cbb1f437-fdd1-4dcc-874c-afecdb90f5d3" containerName="registry-server" probeResult="failure" output=< Dec 09 09:39:37 crc kubenswrapper[4786]: timeout: failed to connect service ":50051" within 1s Dec 09 09:39:37 crc kubenswrapper[4786]: > Dec 09 09:39:44 crc kubenswrapper[4786]: I1209 09:39:44.188130 4786 scope.go:117] "RemoveContainer" containerID="0db36b24a99300a74d428e6b1f2b5b5dde9789633e3d8e42840c3f61dd5ac4ef" Dec 09 09:39:44 crc kubenswrapper[4786]: E1209 09:39:44.188831 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:39:46 crc kubenswrapper[4786]: I1209 09:39:46.697127 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wmh2j" Dec 09 09:39:46 crc kubenswrapper[4786]: I1209 09:39:46.750379 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wmh2j" Dec 09 09:39:50 crc kubenswrapper[4786]: I1209 09:39:50.498668 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wmh2j"] Dec 09 09:39:50 crc kubenswrapper[4786]: I1209 09:39:50.499505 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wmh2j" 
podUID="cbb1f437-fdd1-4dcc-874c-afecdb90f5d3" containerName="registry-server" containerID="cri-o://9423714bab0587ee8c00af82ce50de9ff26fa8052350dc0c6c8028cd68fafc70" gracePeriod=2 Dec 09 09:39:51 crc kubenswrapper[4786]: I1209 09:39:51.060052 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wmh2j" Dec 09 09:39:51 crc kubenswrapper[4786]: I1209 09:39:51.082093 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2x6cg\" (UniqueName: \"kubernetes.io/projected/cbb1f437-fdd1-4dcc-874c-afecdb90f5d3-kube-api-access-2x6cg\") pod \"cbb1f437-fdd1-4dcc-874c-afecdb90f5d3\" (UID: \"cbb1f437-fdd1-4dcc-874c-afecdb90f5d3\") " Dec 09 09:39:51 crc kubenswrapper[4786]: I1209 09:39:51.082201 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbb1f437-fdd1-4dcc-874c-afecdb90f5d3-utilities\") pod \"cbb1f437-fdd1-4dcc-874c-afecdb90f5d3\" (UID: \"cbb1f437-fdd1-4dcc-874c-afecdb90f5d3\") " Dec 09 09:39:51 crc kubenswrapper[4786]: I1209 09:39:51.082284 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbb1f437-fdd1-4dcc-874c-afecdb90f5d3-catalog-content\") pod \"cbb1f437-fdd1-4dcc-874c-afecdb90f5d3\" (UID: \"cbb1f437-fdd1-4dcc-874c-afecdb90f5d3\") " Dec 09 09:39:51 crc kubenswrapper[4786]: I1209 09:39:51.084239 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbb1f437-fdd1-4dcc-874c-afecdb90f5d3-utilities" (OuterVolumeSpecName: "utilities") pod "cbb1f437-fdd1-4dcc-874c-afecdb90f5d3" (UID: "cbb1f437-fdd1-4dcc-874c-afecdb90f5d3"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:39:51 crc kubenswrapper[4786]: I1209 09:39:51.089499 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbb1f437-fdd1-4dcc-874c-afecdb90f5d3-kube-api-access-2x6cg" (OuterVolumeSpecName: "kube-api-access-2x6cg") pod "cbb1f437-fdd1-4dcc-874c-afecdb90f5d3" (UID: "cbb1f437-fdd1-4dcc-874c-afecdb90f5d3"). InnerVolumeSpecName "kube-api-access-2x6cg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:39:51 crc kubenswrapper[4786]: I1209 09:39:51.144813 4786 generic.go:334] "Generic (PLEG): container finished" podID="cbb1f437-fdd1-4dcc-874c-afecdb90f5d3" containerID="9423714bab0587ee8c00af82ce50de9ff26fa8052350dc0c6c8028cd68fafc70" exitCode=0 Dec 09 09:39:51 crc kubenswrapper[4786]: I1209 09:39:51.144869 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wmh2j" event={"ID":"cbb1f437-fdd1-4dcc-874c-afecdb90f5d3","Type":"ContainerDied","Data":"9423714bab0587ee8c00af82ce50de9ff26fa8052350dc0c6c8028cd68fafc70"} Dec 09 09:39:51 crc kubenswrapper[4786]: I1209 09:39:51.144918 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wmh2j" event={"ID":"cbb1f437-fdd1-4dcc-874c-afecdb90f5d3","Type":"ContainerDied","Data":"59a6e6233b10b60f14d2104a932b990980649c681a95a0b64329b64f07feb803"} Dec 09 09:39:51 crc kubenswrapper[4786]: I1209 09:39:51.144939 4786 scope.go:117] "RemoveContainer" containerID="9423714bab0587ee8c00af82ce50de9ff26fa8052350dc0c6c8028cd68fafc70" Dec 09 09:39:51 crc kubenswrapper[4786]: I1209 09:39:51.144942 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wmh2j" Dec 09 09:39:51 crc kubenswrapper[4786]: I1209 09:39:51.171212 4786 scope.go:117] "RemoveContainer" containerID="c864d899f886c0ec92dd66ecdcd20ae196ba1d7140c201a840faefd946c7a3c4" Dec 09 09:39:51 crc kubenswrapper[4786]: I1209 09:39:51.185509 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2x6cg\" (UniqueName: \"kubernetes.io/projected/cbb1f437-fdd1-4dcc-874c-afecdb90f5d3-kube-api-access-2x6cg\") on node \"crc\" DevicePath \"\"" Dec 09 09:39:51 crc kubenswrapper[4786]: I1209 09:39:51.185543 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbb1f437-fdd1-4dcc-874c-afecdb90f5d3-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 09:39:51 crc kubenswrapper[4786]: I1209 09:39:51.218315 4786 scope.go:117] "RemoveContainer" containerID="6b9ac1f532f88ac40d23b297d5c34a67c30fbd4d8f25b4791b364b83187879cf" Dec 09 09:39:51 crc kubenswrapper[4786]: I1209 09:39:51.218449 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbb1f437-fdd1-4dcc-874c-afecdb90f5d3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cbb1f437-fdd1-4dcc-874c-afecdb90f5d3" (UID: "cbb1f437-fdd1-4dcc-874c-afecdb90f5d3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:39:51 crc kubenswrapper[4786]: I1209 09:39:51.261970 4786 scope.go:117] "RemoveContainer" containerID="9423714bab0587ee8c00af82ce50de9ff26fa8052350dc0c6c8028cd68fafc70" Dec 09 09:39:51 crc kubenswrapper[4786]: E1209 09:39:51.263041 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9423714bab0587ee8c00af82ce50de9ff26fa8052350dc0c6c8028cd68fafc70\": container with ID starting with 9423714bab0587ee8c00af82ce50de9ff26fa8052350dc0c6c8028cd68fafc70 not found: ID does not exist" containerID="9423714bab0587ee8c00af82ce50de9ff26fa8052350dc0c6c8028cd68fafc70" Dec 09 09:39:51 crc kubenswrapper[4786]: I1209 09:39:51.263110 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9423714bab0587ee8c00af82ce50de9ff26fa8052350dc0c6c8028cd68fafc70"} err="failed to get container status \"9423714bab0587ee8c00af82ce50de9ff26fa8052350dc0c6c8028cd68fafc70\": rpc error: code = NotFound desc = could not find container \"9423714bab0587ee8c00af82ce50de9ff26fa8052350dc0c6c8028cd68fafc70\": container with ID starting with 9423714bab0587ee8c00af82ce50de9ff26fa8052350dc0c6c8028cd68fafc70 not found: ID does not exist" Dec 09 09:39:51 crc kubenswrapper[4786]: I1209 09:39:51.263160 4786 scope.go:117] "RemoveContainer" containerID="c864d899f886c0ec92dd66ecdcd20ae196ba1d7140c201a840faefd946c7a3c4" Dec 09 09:39:51 crc kubenswrapper[4786]: E1209 09:39:51.264025 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c864d899f886c0ec92dd66ecdcd20ae196ba1d7140c201a840faefd946c7a3c4\": container with ID starting with c864d899f886c0ec92dd66ecdcd20ae196ba1d7140c201a840faefd946c7a3c4 not found: ID does not exist" containerID="c864d899f886c0ec92dd66ecdcd20ae196ba1d7140c201a840faefd946c7a3c4" Dec 09 09:39:51 crc kubenswrapper[4786]: I1209 09:39:51.264088 
4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c864d899f886c0ec92dd66ecdcd20ae196ba1d7140c201a840faefd946c7a3c4"} err="failed to get container status \"c864d899f886c0ec92dd66ecdcd20ae196ba1d7140c201a840faefd946c7a3c4\": rpc error: code = NotFound desc = could not find container \"c864d899f886c0ec92dd66ecdcd20ae196ba1d7140c201a840faefd946c7a3c4\": container with ID starting with c864d899f886c0ec92dd66ecdcd20ae196ba1d7140c201a840faefd946c7a3c4 not found: ID does not exist" Dec 09 09:39:51 crc kubenswrapper[4786]: I1209 09:39:51.264124 4786 scope.go:117] "RemoveContainer" containerID="6b9ac1f532f88ac40d23b297d5c34a67c30fbd4d8f25b4791b364b83187879cf" Dec 09 09:39:51 crc kubenswrapper[4786]: E1209 09:39:51.264751 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b9ac1f532f88ac40d23b297d5c34a67c30fbd4d8f25b4791b364b83187879cf\": container with ID starting with 6b9ac1f532f88ac40d23b297d5c34a67c30fbd4d8f25b4791b364b83187879cf not found: ID does not exist" containerID="6b9ac1f532f88ac40d23b297d5c34a67c30fbd4d8f25b4791b364b83187879cf" Dec 09 09:39:51 crc kubenswrapper[4786]: I1209 09:39:51.264796 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b9ac1f532f88ac40d23b297d5c34a67c30fbd4d8f25b4791b364b83187879cf"} err="failed to get container status \"6b9ac1f532f88ac40d23b297d5c34a67c30fbd4d8f25b4791b364b83187879cf\": rpc error: code = NotFound desc = could not find container \"6b9ac1f532f88ac40d23b297d5c34a67c30fbd4d8f25b4791b364b83187879cf\": container with ID starting with 6b9ac1f532f88ac40d23b297d5c34a67c30fbd4d8f25b4791b364b83187879cf not found: ID does not exist" Dec 09 09:39:51 crc kubenswrapper[4786]: I1209 09:39:51.287498 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/cbb1f437-fdd1-4dcc-874c-afecdb90f5d3-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 09:39:51 crc kubenswrapper[4786]: I1209 09:39:51.476590 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wmh2j"] Dec 09 09:39:51 crc kubenswrapper[4786]: I1209 09:39:51.487473 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wmh2j"] Dec 09 09:39:53 crc kubenswrapper[4786]: I1209 09:39:53.201161 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbb1f437-fdd1-4dcc-874c-afecdb90f5d3" path="/var/lib/kubelet/pods/cbb1f437-fdd1-4dcc-874c-afecdb90f5d3/volumes" Dec 09 09:39:57 crc kubenswrapper[4786]: I1209 09:39:57.189273 4786 scope.go:117] "RemoveContainer" containerID="0db36b24a99300a74d428e6b1f2b5b5dde9789633e3d8e42840c3f61dd5ac4ef" Dec 09 09:39:57 crc kubenswrapper[4786]: E1209 09:39:57.190580 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:40:09 crc kubenswrapper[4786]: I1209 09:40:09.188762 4786 scope.go:117] "RemoveContainer" containerID="0db36b24a99300a74d428e6b1f2b5b5dde9789633e3d8e42840c3f61dd5ac4ef" Dec 09 09:40:09 crc kubenswrapper[4786]: E1209 09:40:09.189557 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:40:23 crc kubenswrapper[4786]: I1209 09:40:23.189699 4786 scope.go:117] "RemoveContainer" containerID="0db36b24a99300a74d428e6b1f2b5b5dde9789633e3d8e42840c3f61dd5ac4ef" Dec 09 09:40:23 crc kubenswrapper[4786]: E1209 09:40:23.190829 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:40:38 crc kubenswrapper[4786]: I1209 09:40:38.188653 4786 scope.go:117] "RemoveContainer" containerID="0db36b24a99300a74d428e6b1f2b5b5dde9789633e3d8e42840c3f61dd5ac4ef" Dec 09 09:40:38 crc kubenswrapper[4786]: E1209 09:40:38.189682 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:40:52 crc kubenswrapper[4786]: I1209 09:40:52.188979 4786 scope.go:117] "RemoveContainer" containerID="0db36b24a99300a74d428e6b1f2b5b5dde9789633e3d8e42840c3f61dd5ac4ef" Dec 09 09:40:52 crc kubenswrapper[4786]: E1209 09:40:52.190061 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:40:52 crc kubenswrapper[4786]: I1209 09:40:52.373802 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4knfw"] Dec 09 09:40:52 crc kubenswrapper[4786]: E1209 09:40:52.374284 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbb1f437-fdd1-4dcc-874c-afecdb90f5d3" containerName="extract-content" Dec 09 09:40:52 crc kubenswrapper[4786]: I1209 09:40:52.374310 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbb1f437-fdd1-4dcc-874c-afecdb90f5d3" containerName="extract-content" Dec 09 09:40:52 crc kubenswrapper[4786]: E1209 09:40:52.374323 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbb1f437-fdd1-4dcc-874c-afecdb90f5d3" containerName="extract-utilities" Dec 09 09:40:52 crc kubenswrapper[4786]: I1209 09:40:52.374331 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbb1f437-fdd1-4dcc-874c-afecdb90f5d3" containerName="extract-utilities" Dec 09 09:40:52 crc kubenswrapper[4786]: E1209 09:40:52.374368 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbb1f437-fdd1-4dcc-874c-afecdb90f5d3" containerName="registry-server" Dec 09 09:40:52 crc kubenswrapper[4786]: I1209 09:40:52.374377 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbb1f437-fdd1-4dcc-874c-afecdb90f5d3" containerName="registry-server" Dec 09 09:40:52 crc kubenswrapper[4786]: I1209 09:40:52.374697 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbb1f437-fdd1-4dcc-874c-afecdb90f5d3" containerName="registry-server" Dec 09 09:40:52 crc kubenswrapper[4786]: I1209 09:40:52.376535 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4knfw" Dec 09 09:40:52 crc kubenswrapper[4786]: I1209 09:40:52.474836 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4knfw"] Dec 09 09:40:52 crc kubenswrapper[4786]: I1209 09:40:52.570014 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebf09cff-f624-4310-ba12-1ee076850850-catalog-content\") pod \"redhat-marketplace-4knfw\" (UID: \"ebf09cff-f624-4310-ba12-1ee076850850\") " pod="openshift-marketplace/redhat-marketplace-4knfw" Dec 09 09:40:52 crc kubenswrapper[4786]: I1209 09:40:52.570096 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebf09cff-f624-4310-ba12-1ee076850850-utilities\") pod \"redhat-marketplace-4knfw\" (UID: \"ebf09cff-f624-4310-ba12-1ee076850850\") " pod="openshift-marketplace/redhat-marketplace-4knfw" Dec 09 09:40:52 crc kubenswrapper[4786]: I1209 09:40:52.570149 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4689\" (UniqueName: \"kubernetes.io/projected/ebf09cff-f624-4310-ba12-1ee076850850-kube-api-access-v4689\") pod \"redhat-marketplace-4knfw\" (UID: \"ebf09cff-f624-4310-ba12-1ee076850850\") " pod="openshift-marketplace/redhat-marketplace-4knfw" Dec 09 09:40:52 crc kubenswrapper[4786]: I1209 09:40:52.672125 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebf09cff-f624-4310-ba12-1ee076850850-catalog-content\") pod \"redhat-marketplace-4knfw\" (UID: \"ebf09cff-f624-4310-ba12-1ee076850850\") " pod="openshift-marketplace/redhat-marketplace-4knfw" Dec 09 09:40:52 crc kubenswrapper[4786]: I1209 09:40:52.672541 4786 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebf09cff-f624-4310-ba12-1ee076850850-utilities\") pod \"redhat-marketplace-4knfw\" (UID: \"ebf09cff-f624-4310-ba12-1ee076850850\") " pod="openshift-marketplace/redhat-marketplace-4knfw" Dec 09 09:40:52 crc kubenswrapper[4786]: I1209 09:40:52.672603 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4689\" (UniqueName: \"kubernetes.io/projected/ebf09cff-f624-4310-ba12-1ee076850850-kube-api-access-v4689\") pod \"redhat-marketplace-4knfw\" (UID: \"ebf09cff-f624-4310-ba12-1ee076850850\") " pod="openshift-marketplace/redhat-marketplace-4knfw" Dec 09 09:40:52 crc kubenswrapper[4786]: I1209 09:40:52.673446 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebf09cff-f624-4310-ba12-1ee076850850-catalog-content\") pod \"redhat-marketplace-4knfw\" (UID: \"ebf09cff-f624-4310-ba12-1ee076850850\") " pod="openshift-marketplace/redhat-marketplace-4knfw" Dec 09 09:40:52 crc kubenswrapper[4786]: I1209 09:40:52.673700 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebf09cff-f624-4310-ba12-1ee076850850-utilities\") pod \"redhat-marketplace-4knfw\" (UID: \"ebf09cff-f624-4310-ba12-1ee076850850\") " pod="openshift-marketplace/redhat-marketplace-4knfw" Dec 09 09:40:52 crc kubenswrapper[4786]: I1209 09:40:52.734306 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4689\" (UniqueName: \"kubernetes.io/projected/ebf09cff-f624-4310-ba12-1ee076850850-kube-api-access-v4689\") pod \"redhat-marketplace-4knfw\" (UID: \"ebf09cff-f624-4310-ba12-1ee076850850\") " pod="openshift-marketplace/redhat-marketplace-4knfw" Dec 09 09:40:52 crc kubenswrapper[4786]: I1209 09:40:52.785129 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4knfw" Dec 09 09:40:53 crc kubenswrapper[4786]: I1209 09:40:53.425477 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4knfw"] Dec 09 09:40:53 crc kubenswrapper[4786]: I1209 09:40:53.889510 4786 generic.go:334] "Generic (PLEG): container finished" podID="ebf09cff-f624-4310-ba12-1ee076850850" containerID="16327e9bd8fb750a136e84aac8d36531d13249ece599a26173a3fc0923459c7e" exitCode=0 Dec 09 09:40:53 crc kubenswrapper[4786]: I1209 09:40:53.889777 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4knfw" event={"ID":"ebf09cff-f624-4310-ba12-1ee076850850","Type":"ContainerDied","Data":"16327e9bd8fb750a136e84aac8d36531d13249ece599a26173a3fc0923459c7e"} Dec 09 09:40:53 crc kubenswrapper[4786]: I1209 09:40:53.889804 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4knfw" event={"ID":"ebf09cff-f624-4310-ba12-1ee076850850","Type":"ContainerStarted","Data":"c03bccc095646a1b99969c5beb14c96006d014aa1acfcbcd27ea9d7b9ce27e2b"} Dec 09 09:40:53 crc kubenswrapper[4786]: I1209 09:40:53.891965 4786 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 09:40:54 crc kubenswrapper[4786]: I1209 09:40:54.920812 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4knfw" event={"ID":"ebf09cff-f624-4310-ba12-1ee076850850","Type":"ContainerStarted","Data":"472fdff844756e0ee3c27abc44a843586424b5244b2a3eccab68c1e35eeb22d3"} Dec 09 09:40:55 crc kubenswrapper[4786]: I1209 09:40:55.933570 4786 generic.go:334] "Generic (PLEG): container finished" podID="ebf09cff-f624-4310-ba12-1ee076850850" containerID="472fdff844756e0ee3c27abc44a843586424b5244b2a3eccab68c1e35eeb22d3" exitCode=0 Dec 09 09:40:55 crc kubenswrapper[4786]: I1209 09:40:55.933695 4786 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/redhat-marketplace-4knfw" event={"ID":"ebf09cff-f624-4310-ba12-1ee076850850","Type":"ContainerDied","Data":"472fdff844756e0ee3c27abc44a843586424b5244b2a3eccab68c1e35eeb22d3"} Dec 09 09:40:56 crc kubenswrapper[4786]: I1209 09:40:56.945051 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4knfw" event={"ID":"ebf09cff-f624-4310-ba12-1ee076850850","Type":"ContainerStarted","Data":"862a95f1eb19d1d068eaaf2a62153d2498c04593d2d20d6d83b2dfdc145955ec"} Dec 09 09:40:56 crc kubenswrapper[4786]: I1209 09:40:56.964283 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4knfw" podStartSLOduration=2.54726192 podStartE2EDuration="4.964260479s" podCreationTimestamp="2025-12-09 09:40:52 +0000 UTC" firstStartedPulling="2025-12-09 09:40:53.891661103 +0000 UTC m=+3419.775282349" lastFinishedPulling="2025-12-09 09:40:56.308659682 +0000 UTC m=+3422.192280908" observedRunningTime="2025-12-09 09:40:56.962700113 +0000 UTC m=+3422.846321359" watchObservedRunningTime="2025-12-09 09:40:56.964260479 +0000 UTC m=+3422.847881705" Dec 09 09:41:02 crc kubenswrapper[4786]: I1209 09:41:02.785492 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4knfw" Dec 09 09:41:02 crc kubenswrapper[4786]: I1209 09:41:02.785843 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4knfw" Dec 09 09:41:02 crc kubenswrapper[4786]: I1209 09:41:02.839396 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4knfw" Dec 09 09:41:03 crc kubenswrapper[4786]: I1209 09:41:03.160069 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4knfw" Dec 09 09:41:03 crc kubenswrapper[4786]: I1209 09:41:03.231190 4786 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4knfw"] Dec 09 09:41:05 crc kubenswrapper[4786]: I1209 09:41:05.091982 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4knfw" podUID="ebf09cff-f624-4310-ba12-1ee076850850" containerName="registry-server" containerID="cri-o://862a95f1eb19d1d068eaaf2a62153d2498c04593d2d20d6d83b2dfdc145955ec" gracePeriod=2 Dec 09 09:41:05 crc kubenswrapper[4786]: I1209 09:41:05.585474 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4knfw" Dec 09 09:41:05 crc kubenswrapper[4786]: I1209 09:41:05.638186 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebf09cff-f624-4310-ba12-1ee076850850-catalog-content\") pod \"ebf09cff-f624-4310-ba12-1ee076850850\" (UID: \"ebf09cff-f624-4310-ba12-1ee076850850\") " Dec 09 09:41:05 crc kubenswrapper[4786]: I1209 09:41:05.638287 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebf09cff-f624-4310-ba12-1ee076850850-utilities\") pod \"ebf09cff-f624-4310-ba12-1ee076850850\" (UID: \"ebf09cff-f624-4310-ba12-1ee076850850\") " Dec 09 09:41:05 crc kubenswrapper[4786]: I1209 09:41:05.638520 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4689\" (UniqueName: \"kubernetes.io/projected/ebf09cff-f624-4310-ba12-1ee076850850-kube-api-access-v4689\") pod \"ebf09cff-f624-4310-ba12-1ee076850850\" (UID: \"ebf09cff-f624-4310-ba12-1ee076850850\") " Dec 09 09:41:05 crc kubenswrapper[4786]: I1209 09:41:05.639711 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebf09cff-f624-4310-ba12-1ee076850850-utilities" (OuterVolumeSpecName: "utilities") pod 
"ebf09cff-f624-4310-ba12-1ee076850850" (UID: "ebf09cff-f624-4310-ba12-1ee076850850"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:41:05 crc kubenswrapper[4786]: I1209 09:41:05.650829 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebf09cff-f624-4310-ba12-1ee076850850-kube-api-access-v4689" (OuterVolumeSpecName: "kube-api-access-v4689") pod "ebf09cff-f624-4310-ba12-1ee076850850" (UID: "ebf09cff-f624-4310-ba12-1ee076850850"). InnerVolumeSpecName "kube-api-access-v4689". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:41:05 crc kubenswrapper[4786]: I1209 09:41:05.657691 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ebf09cff-f624-4310-ba12-1ee076850850-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ebf09cff-f624-4310-ba12-1ee076850850" (UID: "ebf09cff-f624-4310-ba12-1ee076850850"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:41:05 crc kubenswrapper[4786]: I1209 09:41:05.740796 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4689\" (UniqueName: \"kubernetes.io/projected/ebf09cff-f624-4310-ba12-1ee076850850-kube-api-access-v4689\") on node \"crc\" DevicePath \"\"" Dec 09 09:41:05 crc kubenswrapper[4786]: I1209 09:41:05.740837 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ebf09cff-f624-4310-ba12-1ee076850850-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 09:41:05 crc kubenswrapper[4786]: I1209 09:41:05.740848 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ebf09cff-f624-4310-ba12-1ee076850850-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 09:41:06 crc kubenswrapper[4786]: I1209 09:41:06.102623 4786 generic.go:334] "Generic (PLEG): container finished" podID="ebf09cff-f624-4310-ba12-1ee076850850" containerID="862a95f1eb19d1d068eaaf2a62153d2498c04593d2d20d6d83b2dfdc145955ec" exitCode=0 Dec 09 09:41:06 crc kubenswrapper[4786]: I1209 09:41:06.102682 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4knfw" event={"ID":"ebf09cff-f624-4310-ba12-1ee076850850","Type":"ContainerDied","Data":"862a95f1eb19d1d068eaaf2a62153d2498c04593d2d20d6d83b2dfdc145955ec"} Dec 09 09:41:06 crc kubenswrapper[4786]: I1209 09:41:06.103001 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4knfw" event={"ID":"ebf09cff-f624-4310-ba12-1ee076850850","Type":"ContainerDied","Data":"c03bccc095646a1b99969c5beb14c96006d014aa1acfcbcd27ea9d7b9ce27e2b"} Dec 09 09:41:06 crc kubenswrapper[4786]: I1209 09:41:06.103022 4786 scope.go:117] "RemoveContainer" containerID="862a95f1eb19d1d068eaaf2a62153d2498c04593d2d20d6d83b2dfdc145955ec" Dec 09 09:41:06 crc kubenswrapper[4786]: I1209 
09:41:06.102695 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4knfw" Dec 09 09:41:06 crc kubenswrapper[4786]: I1209 09:41:06.129697 4786 scope.go:117] "RemoveContainer" containerID="472fdff844756e0ee3c27abc44a843586424b5244b2a3eccab68c1e35eeb22d3" Dec 09 09:41:06 crc kubenswrapper[4786]: I1209 09:41:06.138412 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4knfw"] Dec 09 09:41:06 crc kubenswrapper[4786]: I1209 09:41:06.150918 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4knfw"] Dec 09 09:41:06 crc kubenswrapper[4786]: I1209 09:41:06.165146 4786 scope.go:117] "RemoveContainer" containerID="16327e9bd8fb750a136e84aac8d36531d13249ece599a26173a3fc0923459c7e" Dec 09 09:41:06 crc kubenswrapper[4786]: I1209 09:41:06.204951 4786 scope.go:117] "RemoveContainer" containerID="862a95f1eb19d1d068eaaf2a62153d2498c04593d2d20d6d83b2dfdc145955ec" Dec 09 09:41:06 crc kubenswrapper[4786]: E1209 09:41:06.205839 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"862a95f1eb19d1d068eaaf2a62153d2498c04593d2d20d6d83b2dfdc145955ec\": container with ID starting with 862a95f1eb19d1d068eaaf2a62153d2498c04593d2d20d6d83b2dfdc145955ec not found: ID does not exist" containerID="862a95f1eb19d1d068eaaf2a62153d2498c04593d2d20d6d83b2dfdc145955ec" Dec 09 09:41:06 crc kubenswrapper[4786]: I1209 09:41:06.205895 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"862a95f1eb19d1d068eaaf2a62153d2498c04593d2d20d6d83b2dfdc145955ec"} err="failed to get container status \"862a95f1eb19d1d068eaaf2a62153d2498c04593d2d20d6d83b2dfdc145955ec\": rpc error: code = NotFound desc = could not find container \"862a95f1eb19d1d068eaaf2a62153d2498c04593d2d20d6d83b2dfdc145955ec\": container with ID starting with 
862a95f1eb19d1d068eaaf2a62153d2498c04593d2d20d6d83b2dfdc145955ec not found: ID does not exist" Dec 09 09:41:06 crc kubenswrapper[4786]: I1209 09:41:06.205932 4786 scope.go:117] "RemoveContainer" containerID="472fdff844756e0ee3c27abc44a843586424b5244b2a3eccab68c1e35eeb22d3" Dec 09 09:41:06 crc kubenswrapper[4786]: E1209 09:41:06.206497 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"472fdff844756e0ee3c27abc44a843586424b5244b2a3eccab68c1e35eeb22d3\": container with ID starting with 472fdff844756e0ee3c27abc44a843586424b5244b2a3eccab68c1e35eeb22d3 not found: ID does not exist" containerID="472fdff844756e0ee3c27abc44a843586424b5244b2a3eccab68c1e35eeb22d3" Dec 09 09:41:06 crc kubenswrapper[4786]: I1209 09:41:06.206606 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"472fdff844756e0ee3c27abc44a843586424b5244b2a3eccab68c1e35eeb22d3"} err="failed to get container status \"472fdff844756e0ee3c27abc44a843586424b5244b2a3eccab68c1e35eeb22d3\": rpc error: code = NotFound desc = could not find container \"472fdff844756e0ee3c27abc44a843586424b5244b2a3eccab68c1e35eeb22d3\": container with ID starting with 472fdff844756e0ee3c27abc44a843586424b5244b2a3eccab68c1e35eeb22d3 not found: ID does not exist" Dec 09 09:41:06 crc kubenswrapper[4786]: I1209 09:41:06.206681 4786 scope.go:117] "RemoveContainer" containerID="16327e9bd8fb750a136e84aac8d36531d13249ece599a26173a3fc0923459c7e" Dec 09 09:41:06 crc kubenswrapper[4786]: E1209 09:41:06.207207 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16327e9bd8fb750a136e84aac8d36531d13249ece599a26173a3fc0923459c7e\": container with ID starting with 16327e9bd8fb750a136e84aac8d36531d13249ece599a26173a3fc0923459c7e not found: ID does not exist" containerID="16327e9bd8fb750a136e84aac8d36531d13249ece599a26173a3fc0923459c7e" Dec 09 09:41:06 crc 
kubenswrapper[4786]: I1209 09:41:06.207252 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16327e9bd8fb750a136e84aac8d36531d13249ece599a26173a3fc0923459c7e"} err="failed to get container status \"16327e9bd8fb750a136e84aac8d36531d13249ece599a26173a3fc0923459c7e\": rpc error: code = NotFound desc = could not find container \"16327e9bd8fb750a136e84aac8d36531d13249ece599a26173a3fc0923459c7e\": container with ID starting with 16327e9bd8fb750a136e84aac8d36531d13249ece599a26173a3fc0923459c7e not found: ID does not exist" Dec 09 09:41:07 crc kubenswrapper[4786]: I1209 09:41:07.191549 4786 scope.go:117] "RemoveContainer" containerID="0db36b24a99300a74d428e6b1f2b5b5dde9789633e3d8e42840c3f61dd5ac4ef" Dec 09 09:41:07 crc kubenswrapper[4786]: I1209 09:41:07.219759 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebf09cff-f624-4310-ba12-1ee076850850" path="/var/lib/kubelet/pods/ebf09cff-f624-4310-ba12-1ee076850850/volumes" Dec 09 09:41:08 crc kubenswrapper[4786]: I1209 09:41:08.123559 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" event={"ID":"60c502d4-7f9e-4d39-a197-fa70dc4a56d1","Type":"ContainerStarted","Data":"a2f7ae2df4c18e1c640c471c7d33939b3ee78e8c89c09d9ea3fdd81d174a86b0"} Dec 09 09:41:37 crc kubenswrapper[4786]: I1209 09:41:37.639443 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-flf9t"] Dec 09 09:41:37 crc kubenswrapper[4786]: E1209 09:41:37.640505 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebf09cff-f624-4310-ba12-1ee076850850" containerName="extract-utilities" Dec 09 09:41:37 crc kubenswrapper[4786]: I1209 09:41:37.640519 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebf09cff-f624-4310-ba12-1ee076850850" containerName="extract-utilities" Dec 09 09:41:37 crc kubenswrapper[4786]: E1209 09:41:37.640533 4786 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="ebf09cff-f624-4310-ba12-1ee076850850" containerName="extract-content" Dec 09 09:41:37 crc kubenswrapper[4786]: I1209 09:41:37.640539 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebf09cff-f624-4310-ba12-1ee076850850" containerName="extract-content" Dec 09 09:41:37 crc kubenswrapper[4786]: E1209 09:41:37.640547 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebf09cff-f624-4310-ba12-1ee076850850" containerName="registry-server" Dec 09 09:41:37 crc kubenswrapper[4786]: I1209 09:41:37.640553 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebf09cff-f624-4310-ba12-1ee076850850" containerName="registry-server" Dec 09 09:41:37 crc kubenswrapper[4786]: I1209 09:41:37.641574 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebf09cff-f624-4310-ba12-1ee076850850" containerName="registry-server" Dec 09 09:41:37 crc kubenswrapper[4786]: I1209 09:41:37.643916 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-flf9t" Dec 09 09:41:37 crc kubenswrapper[4786]: I1209 09:41:37.663404 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-flf9t"] Dec 09 09:41:37 crc kubenswrapper[4786]: I1209 09:41:37.753150 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg8s6\" (UniqueName: \"kubernetes.io/projected/8e7f2e6e-0b84-43f2-a6b0-72bbbb441a9e-kube-api-access-xg8s6\") pod \"community-operators-flf9t\" (UID: \"8e7f2e6e-0b84-43f2-a6b0-72bbbb441a9e\") " pod="openshift-marketplace/community-operators-flf9t" Dec 09 09:41:37 crc kubenswrapper[4786]: I1209 09:41:37.753682 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e7f2e6e-0b84-43f2-a6b0-72bbbb441a9e-catalog-content\") pod \"community-operators-flf9t\" (UID: \"8e7f2e6e-0b84-43f2-a6b0-72bbbb441a9e\") " pod="openshift-marketplace/community-operators-flf9t" Dec 09 09:41:37 crc kubenswrapper[4786]: I1209 09:41:37.753819 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e7f2e6e-0b84-43f2-a6b0-72bbbb441a9e-utilities\") pod \"community-operators-flf9t\" (UID: \"8e7f2e6e-0b84-43f2-a6b0-72bbbb441a9e\") " pod="openshift-marketplace/community-operators-flf9t" Dec 09 09:41:37 crc kubenswrapper[4786]: I1209 09:41:37.858509 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xg8s6\" (UniqueName: \"kubernetes.io/projected/8e7f2e6e-0b84-43f2-a6b0-72bbbb441a9e-kube-api-access-xg8s6\") pod \"community-operators-flf9t\" (UID: \"8e7f2e6e-0b84-43f2-a6b0-72bbbb441a9e\") " pod="openshift-marketplace/community-operators-flf9t" Dec 09 09:41:37 crc kubenswrapper[4786]: I1209 09:41:37.858644 4786 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e7f2e6e-0b84-43f2-a6b0-72bbbb441a9e-catalog-content\") pod \"community-operators-flf9t\" (UID: \"8e7f2e6e-0b84-43f2-a6b0-72bbbb441a9e\") " pod="openshift-marketplace/community-operators-flf9t" Dec 09 09:41:37 crc kubenswrapper[4786]: I1209 09:41:37.858735 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e7f2e6e-0b84-43f2-a6b0-72bbbb441a9e-utilities\") pod \"community-operators-flf9t\" (UID: \"8e7f2e6e-0b84-43f2-a6b0-72bbbb441a9e\") " pod="openshift-marketplace/community-operators-flf9t" Dec 09 09:41:37 crc kubenswrapper[4786]: I1209 09:41:37.859335 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e7f2e6e-0b84-43f2-a6b0-72bbbb441a9e-utilities\") pod \"community-operators-flf9t\" (UID: \"8e7f2e6e-0b84-43f2-a6b0-72bbbb441a9e\") " pod="openshift-marketplace/community-operators-flf9t" Dec 09 09:41:37 crc kubenswrapper[4786]: I1209 09:41:37.859924 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e7f2e6e-0b84-43f2-a6b0-72bbbb441a9e-catalog-content\") pod \"community-operators-flf9t\" (UID: \"8e7f2e6e-0b84-43f2-a6b0-72bbbb441a9e\") " pod="openshift-marketplace/community-operators-flf9t" Dec 09 09:41:37 crc kubenswrapper[4786]: I1209 09:41:37.895639 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xg8s6\" (UniqueName: \"kubernetes.io/projected/8e7f2e6e-0b84-43f2-a6b0-72bbbb441a9e-kube-api-access-xg8s6\") pod \"community-operators-flf9t\" (UID: \"8e7f2e6e-0b84-43f2-a6b0-72bbbb441a9e\") " pod="openshift-marketplace/community-operators-flf9t" Dec 09 09:41:37 crc kubenswrapper[4786]: I1209 09:41:37.983577 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-flf9t" Dec 09 09:41:38 crc kubenswrapper[4786]: I1209 09:41:38.502878 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-flf9t"] Dec 09 09:41:39 crc kubenswrapper[4786]: I1209 09:41:39.463169 4786 generic.go:334] "Generic (PLEG): container finished" podID="8e7f2e6e-0b84-43f2-a6b0-72bbbb441a9e" containerID="5ca2ad137bb8aeb486dad945827708beffaa3b22b7686862661e691b006c2160" exitCode=0 Dec 09 09:41:39 crc kubenswrapper[4786]: I1209 09:41:39.463230 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-flf9t" event={"ID":"8e7f2e6e-0b84-43f2-a6b0-72bbbb441a9e","Type":"ContainerDied","Data":"5ca2ad137bb8aeb486dad945827708beffaa3b22b7686862661e691b006c2160"} Dec 09 09:41:39 crc kubenswrapper[4786]: I1209 09:41:39.463490 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-flf9t" event={"ID":"8e7f2e6e-0b84-43f2-a6b0-72bbbb441a9e","Type":"ContainerStarted","Data":"4fc95c4c0742d7472faf52812cf9769546e366b6a6fb821d6445518b33202985"} Dec 09 09:41:40 crc kubenswrapper[4786]: I1209 09:41:40.477617 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-flf9t" event={"ID":"8e7f2e6e-0b84-43f2-a6b0-72bbbb441a9e","Type":"ContainerStarted","Data":"56ea6fed1fc1dd2fda15eba96b332471f11a9096b66bffeb9ff88268927fc26b"} Dec 09 09:41:42 crc kubenswrapper[4786]: I1209 09:41:42.505166 4786 generic.go:334] "Generic (PLEG): container finished" podID="8e7f2e6e-0b84-43f2-a6b0-72bbbb441a9e" containerID="56ea6fed1fc1dd2fda15eba96b332471f11a9096b66bffeb9ff88268927fc26b" exitCode=0 Dec 09 09:41:42 crc kubenswrapper[4786]: I1209 09:41:42.505273 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-flf9t" 
event={"ID":"8e7f2e6e-0b84-43f2-a6b0-72bbbb441a9e","Type":"ContainerDied","Data":"56ea6fed1fc1dd2fda15eba96b332471f11a9096b66bffeb9ff88268927fc26b"} Dec 09 09:41:43 crc kubenswrapper[4786]: I1209 09:41:43.521182 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-flf9t" event={"ID":"8e7f2e6e-0b84-43f2-a6b0-72bbbb441a9e","Type":"ContainerStarted","Data":"d0ebd3735408fcddd471c482d49b2b50f77bfb00adbf8c258fde484d74acbce4"} Dec 09 09:41:43 crc kubenswrapper[4786]: I1209 09:41:43.557452 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-flf9t" podStartSLOduration=3.10283232 podStartE2EDuration="6.557410813s" podCreationTimestamp="2025-12-09 09:41:37 +0000 UTC" firstStartedPulling="2025-12-09 09:41:39.465986832 +0000 UTC m=+3465.349608048" lastFinishedPulling="2025-12-09 09:41:42.920565315 +0000 UTC m=+3468.804186541" observedRunningTime="2025-12-09 09:41:43.53851449 +0000 UTC m=+3469.422135716" watchObservedRunningTime="2025-12-09 09:41:43.557410813 +0000 UTC m=+3469.441032039" Dec 09 09:41:47 crc kubenswrapper[4786]: I1209 09:41:47.984382 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-flf9t" Dec 09 09:41:47 crc kubenswrapper[4786]: I1209 09:41:47.985326 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-flf9t" Dec 09 09:41:48 crc kubenswrapper[4786]: I1209 09:41:48.046015 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-flf9t" Dec 09 09:41:48 crc kubenswrapper[4786]: I1209 09:41:48.649367 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-flf9t" Dec 09 09:41:48 crc kubenswrapper[4786]: I1209 09:41:48.725350 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-flf9t"] Dec 09 09:41:50 crc kubenswrapper[4786]: I1209 09:41:50.599456 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-flf9t" podUID="8e7f2e6e-0b84-43f2-a6b0-72bbbb441a9e" containerName="registry-server" containerID="cri-o://d0ebd3735408fcddd471c482d49b2b50f77bfb00adbf8c258fde484d74acbce4" gracePeriod=2 Dec 09 09:41:51 crc kubenswrapper[4786]: I1209 09:41:51.047242 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-flf9t" Dec 09 09:41:51 crc kubenswrapper[4786]: I1209 09:41:51.093146 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e7f2e6e-0b84-43f2-a6b0-72bbbb441a9e-catalog-content\") pod \"8e7f2e6e-0b84-43f2-a6b0-72bbbb441a9e\" (UID: \"8e7f2e6e-0b84-43f2-a6b0-72bbbb441a9e\") " Dec 09 09:41:51 crc kubenswrapper[4786]: I1209 09:41:51.093325 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xg8s6\" (UniqueName: \"kubernetes.io/projected/8e7f2e6e-0b84-43f2-a6b0-72bbbb441a9e-kube-api-access-xg8s6\") pod \"8e7f2e6e-0b84-43f2-a6b0-72bbbb441a9e\" (UID: \"8e7f2e6e-0b84-43f2-a6b0-72bbbb441a9e\") " Dec 09 09:41:51 crc kubenswrapper[4786]: I1209 09:41:51.093467 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e7f2e6e-0b84-43f2-a6b0-72bbbb441a9e-utilities\") pod \"8e7f2e6e-0b84-43f2-a6b0-72bbbb441a9e\" (UID: \"8e7f2e6e-0b84-43f2-a6b0-72bbbb441a9e\") " Dec 09 09:41:51 crc kubenswrapper[4786]: I1209 09:41:51.094292 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e7f2e6e-0b84-43f2-a6b0-72bbbb441a9e-utilities" (OuterVolumeSpecName: "utilities") pod "8e7f2e6e-0b84-43f2-a6b0-72bbbb441a9e" (UID: 
"8e7f2e6e-0b84-43f2-a6b0-72bbbb441a9e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:41:51 crc kubenswrapper[4786]: I1209 09:41:51.101408 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e7f2e6e-0b84-43f2-a6b0-72bbbb441a9e-kube-api-access-xg8s6" (OuterVolumeSpecName: "kube-api-access-xg8s6") pod "8e7f2e6e-0b84-43f2-a6b0-72bbbb441a9e" (UID: "8e7f2e6e-0b84-43f2-a6b0-72bbbb441a9e"). InnerVolumeSpecName "kube-api-access-xg8s6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:41:51 crc kubenswrapper[4786]: I1209 09:41:51.149174 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8e7f2e6e-0b84-43f2-a6b0-72bbbb441a9e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8e7f2e6e-0b84-43f2-a6b0-72bbbb441a9e" (UID: "8e7f2e6e-0b84-43f2-a6b0-72bbbb441a9e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:41:51 crc kubenswrapper[4786]: I1209 09:41:51.195558 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8e7f2e6e-0b84-43f2-a6b0-72bbbb441a9e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 09:41:51 crc kubenswrapper[4786]: I1209 09:41:51.195598 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xg8s6\" (UniqueName: \"kubernetes.io/projected/8e7f2e6e-0b84-43f2-a6b0-72bbbb441a9e-kube-api-access-xg8s6\") on node \"crc\" DevicePath \"\"" Dec 09 09:41:51 crc kubenswrapper[4786]: I1209 09:41:51.195614 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8e7f2e6e-0b84-43f2-a6b0-72bbbb441a9e-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 09:41:51 crc kubenswrapper[4786]: I1209 09:41:51.613674 4786 generic.go:334] "Generic (PLEG): container finished" 
podID="8e7f2e6e-0b84-43f2-a6b0-72bbbb441a9e" containerID="d0ebd3735408fcddd471c482d49b2b50f77bfb00adbf8c258fde484d74acbce4" exitCode=0 Dec 09 09:41:51 crc kubenswrapper[4786]: I1209 09:41:51.613717 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-flf9t" event={"ID":"8e7f2e6e-0b84-43f2-a6b0-72bbbb441a9e","Type":"ContainerDied","Data":"d0ebd3735408fcddd471c482d49b2b50f77bfb00adbf8c258fde484d74acbce4"} Dec 09 09:41:51 crc kubenswrapper[4786]: I1209 09:41:51.613745 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-flf9t" event={"ID":"8e7f2e6e-0b84-43f2-a6b0-72bbbb441a9e","Type":"ContainerDied","Data":"4fc95c4c0742d7472faf52812cf9769546e366b6a6fb821d6445518b33202985"} Dec 09 09:41:51 crc kubenswrapper[4786]: I1209 09:41:51.613764 4786 scope.go:117] "RemoveContainer" containerID="d0ebd3735408fcddd471c482d49b2b50f77bfb00adbf8c258fde484d74acbce4" Dec 09 09:41:51 crc kubenswrapper[4786]: I1209 09:41:51.613885 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-flf9t" Dec 09 09:41:51 crc kubenswrapper[4786]: I1209 09:41:51.642547 4786 scope.go:117] "RemoveContainer" containerID="56ea6fed1fc1dd2fda15eba96b332471f11a9096b66bffeb9ff88268927fc26b" Dec 09 09:41:51 crc kubenswrapper[4786]: I1209 09:41:51.644793 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-flf9t"] Dec 09 09:41:51 crc kubenswrapper[4786]: I1209 09:41:51.666047 4786 scope.go:117] "RemoveContainer" containerID="5ca2ad137bb8aeb486dad945827708beffaa3b22b7686862661e691b006c2160" Dec 09 09:41:51 crc kubenswrapper[4786]: I1209 09:41:51.672771 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-flf9t"] Dec 09 09:41:51 crc kubenswrapper[4786]: I1209 09:41:51.709766 4786 scope.go:117] "RemoveContainer" containerID="d0ebd3735408fcddd471c482d49b2b50f77bfb00adbf8c258fde484d74acbce4" Dec 09 09:41:51 crc kubenswrapper[4786]: E1209 09:41:51.710195 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0ebd3735408fcddd471c482d49b2b50f77bfb00adbf8c258fde484d74acbce4\": container with ID starting with d0ebd3735408fcddd471c482d49b2b50f77bfb00adbf8c258fde484d74acbce4 not found: ID does not exist" containerID="d0ebd3735408fcddd471c482d49b2b50f77bfb00adbf8c258fde484d74acbce4" Dec 09 09:41:51 crc kubenswrapper[4786]: I1209 09:41:51.710228 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0ebd3735408fcddd471c482d49b2b50f77bfb00adbf8c258fde484d74acbce4"} err="failed to get container status \"d0ebd3735408fcddd471c482d49b2b50f77bfb00adbf8c258fde484d74acbce4\": rpc error: code = NotFound desc = could not find container \"d0ebd3735408fcddd471c482d49b2b50f77bfb00adbf8c258fde484d74acbce4\": container with ID starting with d0ebd3735408fcddd471c482d49b2b50f77bfb00adbf8c258fde484d74acbce4 not 
found: ID does not exist" Dec 09 09:41:51 crc kubenswrapper[4786]: I1209 09:41:51.710250 4786 scope.go:117] "RemoveContainer" containerID="56ea6fed1fc1dd2fda15eba96b332471f11a9096b66bffeb9ff88268927fc26b" Dec 09 09:41:51 crc kubenswrapper[4786]: E1209 09:41:51.710554 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56ea6fed1fc1dd2fda15eba96b332471f11a9096b66bffeb9ff88268927fc26b\": container with ID starting with 56ea6fed1fc1dd2fda15eba96b332471f11a9096b66bffeb9ff88268927fc26b not found: ID does not exist" containerID="56ea6fed1fc1dd2fda15eba96b332471f11a9096b66bffeb9ff88268927fc26b" Dec 09 09:41:51 crc kubenswrapper[4786]: I1209 09:41:51.710614 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56ea6fed1fc1dd2fda15eba96b332471f11a9096b66bffeb9ff88268927fc26b"} err="failed to get container status \"56ea6fed1fc1dd2fda15eba96b332471f11a9096b66bffeb9ff88268927fc26b\": rpc error: code = NotFound desc = could not find container \"56ea6fed1fc1dd2fda15eba96b332471f11a9096b66bffeb9ff88268927fc26b\": container with ID starting with 56ea6fed1fc1dd2fda15eba96b332471f11a9096b66bffeb9ff88268927fc26b not found: ID does not exist" Dec 09 09:41:51 crc kubenswrapper[4786]: I1209 09:41:51.710647 4786 scope.go:117] "RemoveContainer" containerID="5ca2ad137bb8aeb486dad945827708beffaa3b22b7686862661e691b006c2160" Dec 09 09:41:51 crc kubenswrapper[4786]: E1209 09:41:51.711092 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ca2ad137bb8aeb486dad945827708beffaa3b22b7686862661e691b006c2160\": container with ID starting with 5ca2ad137bb8aeb486dad945827708beffaa3b22b7686862661e691b006c2160 not found: ID does not exist" containerID="5ca2ad137bb8aeb486dad945827708beffaa3b22b7686862661e691b006c2160" Dec 09 09:41:51 crc kubenswrapper[4786]: I1209 09:41:51.711123 4786 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ca2ad137bb8aeb486dad945827708beffaa3b22b7686862661e691b006c2160"} err="failed to get container status \"5ca2ad137bb8aeb486dad945827708beffaa3b22b7686862661e691b006c2160\": rpc error: code = NotFound desc = could not find container \"5ca2ad137bb8aeb486dad945827708beffaa3b22b7686862661e691b006c2160\": container with ID starting with 5ca2ad137bb8aeb486dad945827708beffaa3b22b7686862661e691b006c2160 not found: ID does not exist" Dec 09 09:41:53 crc kubenswrapper[4786]: I1209 09:41:53.200006 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e7f2e6e-0b84-43f2-a6b0-72bbbb441a9e" path="/var/lib/kubelet/pods/8e7f2e6e-0b84-43f2-a6b0-72bbbb441a9e/volumes" Dec 09 09:43:24 crc kubenswrapper[4786]: I1209 09:43:24.988583 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 09:43:24 crc kubenswrapper[4786]: I1209 09:43:24.989037 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 09:43:54 crc kubenswrapper[4786]: I1209 09:43:54.988789 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 09:43:54 crc kubenswrapper[4786]: I1209 09:43:54.989397 4786 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 09:44:24 crc kubenswrapper[4786]: I1209 09:44:24.992887 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 09:44:24 crc kubenswrapper[4786]: I1209 09:44:24.993313 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 09:44:24 crc kubenswrapper[4786]: I1209 09:44:24.993376 4786 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" Dec 09 09:44:24 crc kubenswrapper[4786]: I1209 09:44:24.994136 4786 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a2f7ae2df4c18e1c640c471c7d33939b3ee78e8c89c09d9ea3fdd81d174a86b0"} pod="openshift-machine-config-operator/machine-config-daemon-86k5n" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 09:44:24 crc kubenswrapper[4786]: I1209 09:44:24.994183 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" 
containerID="cri-o://a2f7ae2df4c18e1c640c471c7d33939b3ee78e8c89c09d9ea3fdd81d174a86b0" gracePeriod=600 Dec 09 09:44:25 crc kubenswrapper[4786]: I1209 09:44:25.379931 4786 generic.go:334] "Generic (PLEG): container finished" podID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerID="a2f7ae2df4c18e1c640c471c7d33939b3ee78e8c89c09d9ea3fdd81d174a86b0" exitCode=0 Dec 09 09:44:25 crc kubenswrapper[4786]: I1209 09:44:25.379997 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" event={"ID":"60c502d4-7f9e-4d39-a197-fa70dc4a56d1","Type":"ContainerDied","Data":"a2f7ae2df4c18e1c640c471c7d33939b3ee78e8c89c09d9ea3fdd81d174a86b0"} Dec 09 09:44:25 crc kubenswrapper[4786]: I1209 09:44:25.380222 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" event={"ID":"60c502d4-7f9e-4d39-a197-fa70dc4a56d1","Type":"ContainerStarted","Data":"3ce240c7987c6f5a10c2a769153c5b9c34dfd722a2d08939b800b5adb90cfdca"} Dec 09 09:44:25 crc kubenswrapper[4786]: I1209 09:44:25.380263 4786 scope.go:117] "RemoveContainer" containerID="0db36b24a99300a74d428e6b1f2b5b5dde9789633e3d8e42840c3f61dd5ac4ef" Dec 09 09:45:00 crc kubenswrapper[4786]: I1209 09:45:00.217757 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421225-zghqp"] Dec 09 09:45:00 crc kubenswrapper[4786]: E1209 09:45:00.218582 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e7f2e6e-0b84-43f2-a6b0-72bbbb441a9e" containerName="extract-content" Dec 09 09:45:00 crc kubenswrapper[4786]: I1209 09:45:00.218595 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e7f2e6e-0b84-43f2-a6b0-72bbbb441a9e" containerName="extract-content" Dec 09 09:45:00 crc kubenswrapper[4786]: E1209 09:45:00.218623 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e7f2e6e-0b84-43f2-a6b0-72bbbb441a9e" 
containerName="extract-utilities" Dec 09 09:45:00 crc kubenswrapper[4786]: I1209 09:45:00.218630 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e7f2e6e-0b84-43f2-a6b0-72bbbb441a9e" containerName="extract-utilities" Dec 09 09:45:00 crc kubenswrapper[4786]: E1209 09:45:00.218645 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e7f2e6e-0b84-43f2-a6b0-72bbbb441a9e" containerName="registry-server" Dec 09 09:45:00 crc kubenswrapper[4786]: I1209 09:45:00.218651 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e7f2e6e-0b84-43f2-a6b0-72bbbb441a9e" containerName="registry-server" Dec 09 09:45:00 crc kubenswrapper[4786]: I1209 09:45:00.218835 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e7f2e6e-0b84-43f2-a6b0-72bbbb441a9e" containerName="registry-server" Dec 09 09:45:00 crc kubenswrapper[4786]: I1209 09:45:00.219590 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421225-zghqp" Dec 09 09:45:00 crc kubenswrapper[4786]: I1209 09:45:00.222675 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 09 09:45:00 crc kubenswrapper[4786]: I1209 09:45:00.232242 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421225-zghqp"] Dec 09 09:45:00 crc kubenswrapper[4786]: I1209 09:45:00.233165 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 09 09:45:00 crc kubenswrapper[4786]: I1209 09:45:00.356040 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn2l8\" (UniqueName: \"kubernetes.io/projected/ddbd8b34-5d98-4bc2-8fe0-79f41393b234-kube-api-access-pn2l8\") pod \"collect-profiles-29421225-zghqp\" (UID: 
\"ddbd8b34-5d98-4bc2-8fe0-79f41393b234\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421225-zghqp" Dec 09 09:45:00 crc kubenswrapper[4786]: I1209 09:45:00.356323 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ddbd8b34-5d98-4bc2-8fe0-79f41393b234-secret-volume\") pod \"collect-profiles-29421225-zghqp\" (UID: \"ddbd8b34-5d98-4bc2-8fe0-79f41393b234\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421225-zghqp" Dec 09 09:45:00 crc kubenswrapper[4786]: I1209 09:45:00.356663 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ddbd8b34-5d98-4bc2-8fe0-79f41393b234-config-volume\") pod \"collect-profiles-29421225-zghqp\" (UID: \"ddbd8b34-5d98-4bc2-8fe0-79f41393b234\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421225-zghqp" Dec 09 09:45:00 crc kubenswrapper[4786]: I1209 09:45:00.458648 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pn2l8\" (UniqueName: \"kubernetes.io/projected/ddbd8b34-5d98-4bc2-8fe0-79f41393b234-kube-api-access-pn2l8\") pod \"collect-profiles-29421225-zghqp\" (UID: \"ddbd8b34-5d98-4bc2-8fe0-79f41393b234\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421225-zghqp" Dec 09 09:45:00 crc kubenswrapper[4786]: I1209 09:45:00.458765 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ddbd8b34-5d98-4bc2-8fe0-79f41393b234-secret-volume\") pod \"collect-profiles-29421225-zghqp\" (UID: \"ddbd8b34-5d98-4bc2-8fe0-79f41393b234\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421225-zghqp" Dec 09 09:45:00 crc kubenswrapper[4786]: I1209 09:45:00.458889 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/ddbd8b34-5d98-4bc2-8fe0-79f41393b234-config-volume\") pod \"collect-profiles-29421225-zghqp\" (UID: \"ddbd8b34-5d98-4bc2-8fe0-79f41393b234\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421225-zghqp" Dec 09 09:45:00 crc kubenswrapper[4786]: I1209 09:45:00.459847 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ddbd8b34-5d98-4bc2-8fe0-79f41393b234-config-volume\") pod \"collect-profiles-29421225-zghqp\" (UID: \"ddbd8b34-5d98-4bc2-8fe0-79f41393b234\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421225-zghqp" Dec 09 09:45:00 crc kubenswrapper[4786]: I1209 09:45:00.469233 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ddbd8b34-5d98-4bc2-8fe0-79f41393b234-secret-volume\") pod \"collect-profiles-29421225-zghqp\" (UID: \"ddbd8b34-5d98-4bc2-8fe0-79f41393b234\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421225-zghqp" Dec 09 09:45:00 crc kubenswrapper[4786]: I1209 09:45:00.481378 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn2l8\" (UniqueName: \"kubernetes.io/projected/ddbd8b34-5d98-4bc2-8fe0-79f41393b234-kube-api-access-pn2l8\") pod \"collect-profiles-29421225-zghqp\" (UID: \"ddbd8b34-5d98-4bc2-8fe0-79f41393b234\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421225-zghqp" Dec 09 09:45:00 crc kubenswrapper[4786]: I1209 09:45:00.541101 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421225-zghqp" Dec 09 09:45:01 crc kubenswrapper[4786]: I1209 09:45:01.087273 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421225-zghqp"] Dec 09 09:45:01 crc kubenswrapper[4786]: I1209 09:45:01.820955 4786 generic.go:334] "Generic (PLEG): container finished" podID="ddbd8b34-5d98-4bc2-8fe0-79f41393b234" containerID="742dc26ce1d658a64df890d0d3809112de15092e3c69b913afd5b8dee764e83f" exitCode=0 Dec 09 09:45:01 crc kubenswrapper[4786]: I1209 09:45:01.821064 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421225-zghqp" event={"ID":"ddbd8b34-5d98-4bc2-8fe0-79f41393b234","Type":"ContainerDied","Data":"742dc26ce1d658a64df890d0d3809112de15092e3c69b913afd5b8dee764e83f"} Dec 09 09:45:01 crc kubenswrapper[4786]: I1209 09:45:01.821245 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421225-zghqp" event={"ID":"ddbd8b34-5d98-4bc2-8fe0-79f41393b234","Type":"ContainerStarted","Data":"8e52a26e52ac2c13265af70466a09c4efcbcea858988f5b93e5701904b213efd"} Dec 09 09:45:03 crc kubenswrapper[4786]: I1209 09:45:03.168854 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421225-zghqp" Dec 09 09:45:03 crc kubenswrapper[4786]: I1209 09:45:03.326783 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ddbd8b34-5d98-4bc2-8fe0-79f41393b234-secret-volume\") pod \"ddbd8b34-5d98-4bc2-8fe0-79f41393b234\" (UID: \"ddbd8b34-5d98-4bc2-8fe0-79f41393b234\") " Dec 09 09:45:03 crc kubenswrapper[4786]: I1209 09:45:03.326957 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ddbd8b34-5d98-4bc2-8fe0-79f41393b234-config-volume\") pod \"ddbd8b34-5d98-4bc2-8fe0-79f41393b234\" (UID: \"ddbd8b34-5d98-4bc2-8fe0-79f41393b234\") " Dec 09 09:45:03 crc kubenswrapper[4786]: I1209 09:45:03.327083 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pn2l8\" (UniqueName: \"kubernetes.io/projected/ddbd8b34-5d98-4bc2-8fe0-79f41393b234-kube-api-access-pn2l8\") pod \"ddbd8b34-5d98-4bc2-8fe0-79f41393b234\" (UID: \"ddbd8b34-5d98-4bc2-8fe0-79f41393b234\") " Dec 09 09:45:03 crc kubenswrapper[4786]: I1209 09:45:03.327719 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ddbd8b34-5d98-4bc2-8fe0-79f41393b234-config-volume" (OuterVolumeSpecName: "config-volume") pod "ddbd8b34-5d98-4bc2-8fe0-79f41393b234" (UID: "ddbd8b34-5d98-4bc2-8fe0-79f41393b234"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 09:45:03 crc kubenswrapper[4786]: I1209 09:45:03.328837 4786 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ddbd8b34-5d98-4bc2-8fe0-79f41393b234-config-volume\") on node \"crc\" DevicePath \"\"" Dec 09 09:45:03 crc kubenswrapper[4786]: I1209 09:45:03.332577 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddbd8b34-5d98-4bc2-8fe0-79f41393b234-kube-api-access-pn2l8" (OuterVolumeSpecName: "kube-api-access-pn2l8") pod "ddbd8b34-5d98-4bc2-8fe0-79f41393b234" (UID: "ddbd8b34-5d98-4bc2-8fe0-79f41393b234"). InnerVolumeSpecName "kube-api-access-pn2l8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:45:03 crc kubenswrapper[4786]: I1209 09:45:03.338607 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ddbd8b34-5d98-4bc2-8fe0-79f41393b234-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ddbd8b34-5d98-4bc2-8fe0-79f41393b234" (UID: "ddbd8b34-5d98-4bc2-8fe0-79f41393b234"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 09:45:03 crc kubenswrapper[4786]: I1209 09:45:03.431354 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pn2l8\" (UniqueName: \"kubernetes.io/projected/ddbd8b34-5d98-4bc2-8fe0-79f41393b234-kube-api-access-pn2l8\") on node \"crc\" DevicePath \"\"" Dec 09 09:45:03 crc kubenswrapper[4786]: I1209 09:45:03.431677 4786 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ddbd8b34-5d98-4bc2-8fe0-79f41393b234-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 09 09:45:03 crc kubenswrapper[4786]: I1209 09:45:03.839617 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421225-zghqp" event={"ID":"ddbd8b34-5d98-4bc2-8fe0-79f41393b234","Type":"ContainerDied","Data":"8e52a26e52ac2c13265af70466a09c4efcbcea858988f5b93e5701904b213efd"} Dec 09 09:45:03 crc kubenswrapper[4786]: I1209 09:45:03.839656 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e52a26e52ac2c13265af70466a09c4efcbcea858988f5b93e5701904b213efd" Dec 09 09:45:03 crc kubenswrapper[4786]: I1209 09:45:03.839660 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421225-zghqp" Dec 09 09:45:04 crc kubenswrapper[4786]: I1209 09:45:04.254373 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421180-f7qgw"] Dec 09 09:45:04 crc kubenswrapper[4786]: I1209 09:45:04.263597 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421180-f7qgw"] Dec 09 09:45:05 crc kubenswrapper[4786]: I1209 09:45:05.204678 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c3a02ac-5823-4ccc-a9e2-47485a972d77" path="/var/lib/kubelet/pods/2c3a02ac-5823-4ccc-a9e2-47485a972d77/volumes" Dec 09 09:45:05 crc kubenswrapper[4786]: I1209 09:45:05.213624 4786 scope.go:117] "RemoveContainer" containerID="1392f76059e3d1028df92301bee2095fcb686153c145011c253c13882c529417" Dec 09 09:45:30 crc kubenswrapper[4786]: I1209 09:45:30.980296 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tckwp"] Dec 09 09:45:30 crc kubenswrapper[4786]: E1209 09:45:30.981322 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddbd8b34-5d98-4bc2-8fe0-79f41393b234" containerName="collect-profiles" Dec 09 09:45:30 crc kubenswrapper[4786]: I1209 09:45:30.981339 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddbd8b34-5d98-4bc2-8fe0-79f41393b234" containerName="collect-profiles" Dec 09 09:45:30 crc kubenswrapper[4786]: I1209 09:45:30.981653 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddbd8b34-5d98-4bc2-8fe0-79f41393b234" containerName="collect-profiles" Dec 09 09:45:30 crc kubenswrapper[4786]: I1209 09:45:30.983591 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tckwp" Dec 09 09:45:30 crc kubenswrapper[4786]: I1209 09:45:30.997011 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tckwp"] Dec 09 09:45:31 crc kubenswrapper[4786]: I1209 09:45:31.130986 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/199946be-04b2-48e3-a6e2-d02c620a1253-catalog-content\") pod \"certified-operators-tckwp\" (UID: \"199946be-04b2-48e3-a6e2-d02c620a1253\") " pod="openshift-marketplace/certified-operators-tckwp" Dec 09 09:45:31 crc kubenswrapper[4786]: I1209 09:45:31.131060 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/199946be-04b2-48e3-a6e2-d02c620a1253-utilities\") pod \"certified-operators-tckwp\" (UID: \"199946be-04b2-48e3-a6e2-d02c620a1253\") " pod="openshift-marketplace/certified-operators-tckwp" Dec 09 09:45:31 crc kubenswrapper[4786]: I1209 09:45:31.131136 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dcwc\" (UniqueName: \"kubernetes.io/projected/199946be-04b2-48e3-a6e2-d02c620a1253-kube-api-access-8dcwc\") pod \"certified-operators-tckwp\" (UID: \"199946be-04b2-48e3-a6e2-d02c620a1253\") " pod="openshift-marketplace/certified-operators-tckwp" Dec 09 09:45:31 crc kubenswrapper[4786]: I1209 09:45:31.233520 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/199946be-04b2-48e3-a6e2-d02c620a1253-catalog-content\") pod \"certified-operators-tckwp\" (UID: \"199946be-04b2-48e3-a6e2-d02c620a1253\") " pod="openshift-marketplace/certified-operators-tckwp" Dec 09 09:45:31 crc kubenswrapper[4786]: I1209 09:45:31.233624 4786 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/199946be-04b2-48e3-a6e2-d02c620a1253-utilities\") pod \"certified-operators-tckwp\" (UID: \"199946be-04b2-48e3-a6e2-d02c620a1253\") " pod="openshift-marketplace/certified-operators-tckwp"
Dec 09 09:45:31 crc kubenswrapper[4786]: I1209 09:45:31.233725 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dcwc\" (UniqueName: \"kubernetes.io/projected/199946be-04b2-48e3-a6e2-d02c620a1253-kube-api-access-8dcwc\") pod \"certified-operators-tckwp\" (UID: \"199946be-04b2-48e3-a6e2-d02c620a1253\") " pod="openshift-marketplace/certified-operators-tckwp"
Dec 09 09:45:31 crc kubenswrapper[4786]: I1209 09:45:31.234301 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/199946be-04b2-48e3-a6e2-d02c620a1253-utilities\") pod \"certified-operators-tckwp\" (UID: \"199946be-04b2-48e3-a6e2-d02c620a1253\") " pod="openshift-marketplace/certified-operators-tckwp"
Dec 09 09:45:31 crc kubenswrapper[4786]: I1209 09:45:31.234310 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/199946be-04b2-48e3-a6e2-d02c620a1253-catalog-content\") pod \"certified-operators-tckwp\" (UID: \"199946be-04b2-48e3-a6e2-d02c620a1253\") " pod="openshift-marketplace/certified-operators-tckwp"
Dec 09 09:45:31 crc kubenswrapper[4786]: I1209 09:45:31.261396 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dcwc\" (UniqueName: \"kubernetes.io/projected/199946be-04b2-48e3-a6e2-d02c620a1253-kube-api-access-8dcwc\") pod \"certified-operators-tckwp\" (UID: \"199946be-04b2-48e3-a6e2-d02c620a1253\") " pod="openshift-marketplace/certified-operators-tckwp"
Dec 09 09:45:31 crc kubenswrapper[4786]: I1209 09:45:31.321290 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tckwp"
Dec 09 09:45:31 crc kubenswrapper[4786]: I1209 09:45:31.913480 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tckwp"]
Dec 09 09:45:32 crc kubenswrapper[4786]: I1209 09:45:32.109659 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tckwp" event={"ID":"199946be-04b2-48e3-a6e2-d02c620a1253","Type":"ContainerStarted","Data":"8a4d4f4ffc33801ac46f4a99632aca8b59c70476636b46e52173a2641b5c9a06"}
Dec 09 09:45:33 crc kubenswrapper[4786]: I1209 09:45:33.122231 4786 generic.go:334] "Generic (PLEG): container finished" podID="199946be-04b2-48e3-a6e2-d02c620a1253" containerID="a0bb4dc76ff6c4849f18d11f7a37bce8c409c45cbcef260f4d01c75329fc72a5" exitCode=0
Dec 09 09:45:33 crc kubenswrapper[4786]: I1209 09:45:33.122377 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tckwp" event={"ID":"199946be-04b2-48e3-a6e2-d02c620a1253","Type":"ContainerDied","Data":"a0bb4dc76ff6c4849f18d11f7a37bce8c409c45cbcef260f4d01c75329fc72a5"}
Dec 09 09:45:34 crc kubenswrapper[4786]: I1209 09:45:34.150806 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tckwp" event={"ID":"199946be-04b2-48e3-a6e2-d02c620a1253","Type":"ContainerStarted","Data":"33dd63f151dddfde75f0da2fa73bc646337f3180de5fda8e7589a1afd17422c4"}
Dec 09 09:45:36 crc kubenswrapper[4786]: I1209 09:45:36.168928 4786 generic.go:334] "Generic (PLEG): container finished" podID="199946be-04b2-48e3-a6e2-d02c620a1253" containerID="33dd63f151dddfde75f0da2fa73bc646337f3180de5fda8e7589a1afd17422c4" exitCode=0
Dec 09 09:45:36 crc kubenswrapper[4786]: I1209 09:45:36.169023 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tckwp" event={"ID":"199946be-04b2-48e3-a6e2-d02c620a1253","Type":"ContainerDied","Data":"33dd63f151dddfde75f0da2fa73bc646337f3180de5fda8e7589a1afd17422c4"}
Dec 09 09:45:38 crc kubenswrapper[4786]: I1209 09:45:38.189781 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tckwp" event={"ID":"199946be-04b2-48e3-a6e2-d02c620a1253","Type":"ContainerStarted","Data":"b35b90ef16ec409a8e6eba17e2614419748634b8f4003b5c9af52bb4f4b413da"}
Dec 09 09:45:38 crc kubenswrapper[4786]: I1209 09:45:38.216142 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tckwp" podStartSLOduration=4.344238783 podStartE2EDuration="8.216122615s" podCreationTimestamp="2025-12-09 09:45:30 +0000 UTC" firstStartedPulling="2025-12-09 09:45:33.126397518 +0000 UTC m=+3699.010018734" lastFinishedPulling="2025-12-09 09:45:36.99828133 +0000 UTC m=+3702.881902566" observedRunningTime="2025-12-09 09:45:38.213000888 +0000 UTC m=+3704.096622124" watchObservedRunningTime="2025-12-09 09:45:38.216122615 +0000 UTC m=+3704.099743841"
Dec 09 09:45:41 crc kubenswrapper[4786]: I1209 09:45:41.321362 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tckwp"
Dec 09 09:45:41 crc kubenswrapper[4786]: I1209 09:45:41.321744 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tckwp"
Dec 09 09:45:41 crc kubenswrapper[4786]: I1209 09:45:41.399625 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tckwp"
Dec 09 09:45:42 crc kubenswrapper[4786]: I1209 09:45:42.301317 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tckwp"
Dec 09 09:45:42 crc kubenswrapper[4786]: I1209 09:45:42.370922 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tckwp"]
Dec 09 09:45:44 crc kubenswrapper[4786]: I1209 09:45:44.260378 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tckwp" podUID="199946be-04b2-48e3-a6e2-d02c620a1253" containerName="registry-server" containerID="cri-o://b35b90ef16ec409a8e6eba17e2614419748634b8f4003b5c9af52bb4f4b413da" gracePeriod=2
Dec 09 09:45:44 crc kubenswrapper[4786]: I1209 09:45:44.766460 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tckwp"
Dec 09 09:45:44 crc kubenswrapper[4786]: I1209 09:45:44.891907 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/199946be-04b2-48e3-a6e2-d02c620a1253-catalog-content\") pod \"199946be-04b2-48e3-a6e2-d02c620a1253\" (UID: \"199946be-04b2-48e3-a6e2-d02c620a1253\") "
Dec 09 09:45:44 crc kubenswrapper[4786]: I1209 09:45:44.892311 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dcwc\" (UniqueName: \"kubernetes.io/projected/199946be-04b2-48e3-a6e2-d02c620a1253-kube-api-access-8dcwc\") pod \"199946be-04b2-48e3-a6e2-d02c620a1253\" (UID: \"199946be-04b2-48e3-a6e2-d02c620a1253\") "
Dec 09 09:45:44 crc kubenswrapper[4786]: I1209 09:45:44.892592 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/199946be-04b2-48e3-a6e2-d02c620a1253-utilities\") pod \"199946be-04b2-48e3-a6e2-d02c620a1253\" (UID: \"199946be-04b2-48e3-a6e2-d02c620a1253\") "
Dec 09 09:45:44 crc kubenswrapper[4786]: I1209 09:45:44.893235 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/199946be-04b2-48e3-a6e2-d02c620a1253-utilities" (OuterVolumeSpecName: "utilities") pod "199946be-04b2-48e3-a6e2-d02c620a1253" (UID: "199946be-04b2-48e3-a6e2-d02c620a1253"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 09:45:44 crc kubenswrapper[4786]: I1209 09:45:44.898844 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/199946be-04b2-48e3-a6e2-d02c620a1253-kube-api-access-8dcwc" (OuterVolumeSpecName: "kube-api-access-8dcwc") pod "199946be-04b2-48e3-a6e2-d02c620a1253" (UID: "199946be-04b2-48e3-a6e2-d02c620a1253"). InnerVolumeSpecName "kube-api-access-8dcwc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 09:45:44 crc kubenswrapper[4786]: I1209 09:45:44.952791 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/199946be-04b2-48e3-a6e2-d02c620a1253-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "199946be-04b2-48e3-a6e2-d02c620a1253" (UID: "199946be-04b2-48e3-a6e2-d02c620a1253"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 09:45:44 crc kubenswrapper[4786]: I1209 09:45:44.997182 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/199946be-04b2-48e3-a6e2-d02c620a1253-utilities\") on node \"crc\" DevicePath \"\""
Dec 09 09:45:44 crc kubenswrapper[4786]: I1209 09:45:44.997211 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/199946be-04b2-48e3-a6e2-d02c620a1253-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 09 09:45:44 crc kubenswrapper[4786]: I1209 09:45:44.997226 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dcwc\" (UniqueName: \"kubernetes.io/projected/199946be-04b2-48e3-a6e2-d02c620a1253-kube-api-access-8dcwc\") on node \"crc\" DevicePath \"\""
Dec 09 09:45:45 crc kubenswrapper[4786]: I1209 09:45:45.273713 4786 generic.go:334] "Generic (PLEG): container finished" podID="199946be-04b2-48e3-a6e2-d02c620a1253" containerID="b35b90ef16ec409a8e6eba17e2614419748634b8f4003b5c9af52bb4f4b413da" exitCode=0
Dec 09 09:45:45 crc kubenswrapper[4786]: I1209 09:45:45.273776 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tckwp"
Dec 09 09:45:45 crc kubenswrapper[4786]: I1209 09:45:45.273781 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tckwp" event={"ID":"199946be-04b2-48e3-a6e2-d02c620a1253","Type":"ContainerDied","Data":"b35b90ef16ec409a8e6eba17e2614419748634b8f4003b5c9af52bb4f4b413da"}
Dec 09 09:45:45 crc kubenswrapper[4786]: I1209 09:45:45.273832 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tckwp" event={"ID":"199946be-04b2-48e3-a6e2-d02c620a1253","Type":"ContainerDied","Data":"8a4d4f4ffc33801ac46f4a99632aca8b59c70476636b46e52173a2641b5c9a06"}
Dec 09 09:45:45 crc kubenswrapper[4786]: I1209 09:45:45.273856 4786 scope.go:117] "RemoveContainer" containerID="b35b90ef16ec409a8e6eba17e2614419748634b8f4003b5c9af52bb4f4b413da"
Dec 09 09:45:45 crc kubenswrapper[4786]: I1209 09:45:45.306835 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tckwp"]
Dec 09 09:45:45 crc kubenswrapper[4786]: I1209 09:45:45.308953 4786 scope.go:117] "RemoveContainer" containerID="33dd63f151dddfde75f0da2fa73bc646337f3180de5fda8e7589a1afd17422c4"
Dec 09 09:45:45 crc kubenswrapper[4786]: I1209 09:45:45.316867 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tckwp"]
Dec 09 09:45:45 crc kubenswrapper[4786]: I1209 09:45:45.333963 4786 scope.go:117] "RemoveContainer" containerID="a0bb4dc76ff6c4849f18d11f7a37bce8c409c45cbcef260f4d01c75329fc72a5"
Dec 09 09:45:45 crc kubenswrapper[4786]: I1209 09:45:45.400299 4786 scope.go:117] "RemoveContainer" containerID="b35b90ef16ec409a8e6eba17e2614419748634b8f4003b5c9af52bb4f4b413da"
Dec 09 09:45:45 crc kubenswrapper[4786]: E1209 09:45:45.400694 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b35b90ef16ec409a8e6eba17e2614419748634b8f4003b5c9af52bb4f4b413da\": container with ID starting with b35b90ef16ec409a8e6eba17e2614419748634b8f4003b5c9af52bb4f4b413da not found: ID does not exist" containerID="b35b90ef16ec409a8e6eba17e2614419748634b8f4003b5c9af52bb4f4b413da"
Dec 09 09:45:45 crc kubenswrapper[4786]: I1209 09:45:45.400730 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b35b90ef16ec409a8e6eba17e2614419748634b8f4003b5c9af52bb4f4b413da"} err="failed to get container status \"b35b90ef16ec409a8e6eba17e2614419748634b8f4003b5c9af52bb4f4b413da\": rpc error: code = NotFound desc = could not find container \"b35b90ef16ec409a8e6eba17e2614419748634b8f4003b5c9af52bb4f4b413da\": container with ID starting with b35b90ef16ec409a8e6eba17e2614419748634b8f4003b5c9af52bb4f4b413da not found: ID does not exist"
Dec 09 09:45:45 crc kubenswrapper[4786]: I1209 09:45:45.400752 4786 scope.go:117] "RemoveContainer" containerID="33dd63f151dddfde75f0da2fa73bc646337f3180de5fda8e7589a1afd17422c4"
Dec 09 09:45:45 crc kubenswrapper[4786]: E1209 09:45:45.401215 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33dd63f151dddfde75f0da2fa73bc646337f3180de5fda8e7589a1afd17422c4\": container with ID starting with 33dd63f151dddfde75f0da2fa73bc646337f3180de5fda8e7589a1afd17422c4 not found: ID does not exist" containerID="33dd63f151dddfde75f0da2fa73bc646337f3180de5fda8e7589a1afd17422c4"
Dec 09 09:45:45 crc kubenswrapper[4786]: I1209 09:45:45.401261 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33dd63f151dddfde75f0da2fa73bc646337f3180de5fda8e7589a1afd17422c4"} err="failed to get container status \"33dd63f151dddfde75f0da2fa73bc646337f3180de5fda8e7589a1afd17422c4\": rpc error: code = NotFound desc = could not find container \"33dd63f151dddfde75f0da2fa73bc646337f3180de5fda8e7589a1afd17422c4\": container with ID starting with 33dd63f151dddfde75f0da2fa73bc646337f3180de5fda8e7589a1afd17422c4 not found: ID does not exist"
Dec 09 09:45:45 crc kubenswrapper[4786]: I1209 09:45:45.401297 4786 scope.go:117] "RemoveContainer" containerID="a0bb4dc76ff6c4849f18d11f7a37bce8c409c45cbcef260f4d01c75329fc72a5"
Dec 09 09:45:45 crc kubenswrapper[4786]: E1209 09:45:45.401641 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0bb4dc76ff6c4849f18d11f7a37bce8c409c45cbcef260f4d01c75329fc72a5\": container with ID starting with a0bb4dc76ff6c4849f18d11f7a37bce8c409c45cbcef260f4d01c75329fc72a5 not found: ID does not exist" containerID="a0bb4dc76ff6c4849f18d11f7a37bce8c409c45cbcef260f4d01c75329fc72a5"
Dec 09 09:45:45 crc kubenswrapper[4786]: I1209 09:45:45.401665 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0bb4dc76ff6c4849f18d11f7a37bce8c409c45cbcef260f4d01c75329fc72a5"} err="failed to get container status \"a0bb4dc76ff6c4849f18d11f7a37bce8c409c45cbcef260f4d01c75329fc72a5\": rpc error: code = NotFound desc = could not find container \"a0bb4dc76ff6c4849f18d11f7a37bce8c409c45cbcef260f4d01c75329fc72a5\": container with ID starting with a0bb4dc76ff6c4849f18d11f7a37bce8c409c45cbcef260f4d01c75329fc72a5 not found: ID does not exist"
Dec 09 09:45:47 crc kubenswrapper[4786]: I1209 09:45:47.198676 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="199946be-04b2-48e3-a6e2-d02c620a1253" path="/var/lib/kubelet/pods/199946be-04b2-48e3-a6e2-d02c620a1253/volumes"
Dec 09 09:46:54 crc kubenswrapper[4786]: I1209 09:46:54.989153 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 09:46:54 crc kubenswrapper[4786]: I1209 09:46:54.989589 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 09:47:24 crc kubenswrapper[4786]: I1209 09:47:24.989117 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 09:47:24 crc kubenswrapper[4786]: I1209 09:47:24.989700 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 09:47:54 crc kubenswrapper[4786]: I1209 09:47:54.988796 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 09 09:47:54 crc kubenswrapper[4786]: I1209 09:47:54.989245 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 09 09:47:54 crc kubenswrapper[4786]: I1209 09:47:54.989290 4786 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-86k5n"
Dec 09 09:47:54 crc kubenswrapper[4786]: I1209 09:47:54.990195 4786 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3ce240c7987c6f5a10c2a769153c5b9c34dfd722a2d08939b800b5adb90cfdca"} pod="openshift-machine-config-operator/machine-config-daemon-86k5n" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 09 09:47:54 crc kubenswrapper[4786]: I1209 09:47:54.990326 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" containerID="cri-o://3ce240c7987c6f5a10c2a769153c5b9c34dfd722a2d08939b800b5adb90cfdca" gracePeriod=600
Dec 09 09:47:55 crc kubenswrapper[4786]: E1209 09:47:55.117760 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1"
Dec 09 09:47:55 crc kubenswrapper[4786]: I1209 09:47:55.524798 4786 generic.go:334] "Generic (PLEG): container finished" podID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerID="3ce240c7987c6f5a10c2a769153c5b9c34dfd722a2d08939b800b5adb90cfdca" exitCode=0
Dec 09 09:47:55 crc kubenswrapper[4786]: I1209 09:47:55.524844 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" event={"ID":"60c502d4-7f9e-4d39-a197-fa70dc4a56d1","Type":"ContainerDied","Data":"3ce240c7987c6f5a10c2a769153c5b9c34dfd722a2d08939b800b5adb90cfdca"}
Dec 09 09:47:55 crc kubenswrapper[4786]: I1209 09:47:55.524902 4786 scope.go:117] "RemoveContainer" containerID="a2f7ae2df4c18e1c640c471c7d33939b3ee78e8c89c09d9ea3fdd81d174a86b0"
Dec 09 09:47:55 crc kubenswrapper[4786]: I1209 09:47:55.525902 4786 scope.go:117] "RemoveContainer" containerID="3ce240c7987c6f5a10c2a769153c5b9c34dfd722a2d08939b800b5adb90cfdca"
Dec 09 09:47:55 crc kubenswrapper[4786]: E1209 09:47:55.528120 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1"
Dec 09 09:48:08 crc kubenswrapper[4786]: I1209 09:48:08.188556 4786 scope.go:117] "RemoveContainer" containerID="3ce240c7987c6f5a10c2a769153c5b9c34dfd722a2d08939b800b5adb90cfdca"
Dec 09 09:48:08 crc kubenswrapper[4786]: E1209 09:48:08.190122 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1"
Dec 09 09:48:21 crc kubenswrapper[4786]: I1209 09:48:21.189674 4786 scope.go:117] "RemoveContainer" containerID="3ce240c7987c6f5a10c2a769153c5b9c34dfd722a2d08939b800b5adb90cfdca"
Dec 09 09:48:21 crc kubenswrapper[4786]: E1209 09:48:21.190570 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1"
Dec 09 09:48:32 crc kubenswrapper[4786]: I1209 09:48:32.265458 4786 scope.go:117] "RemoveContainer" containerID="3ce240c7987c6f5a10c2a769153c5b9c34dfd722a2d08939b800b5adb90cfdca"
Dec 09 09:48:32 crc kubenswrapper[4786]: E1209 09:48:32.266509 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1"
Dec 09 09:48:44 crc kubenswrapper[4786]: I1209 09:48:44.188529 4786 scope.go:117] "RemoveContainer" containerID="3ce240c7987c6f5a10c2a769153c5b9c34dfd722a2d08939b800b5adb90cfdca"
Dec 09 09:48:44 crc kubenswrapper[4786]: E1209 09:48:44.189349 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1"
Dec 09 09:48:57 crc kubenswrapper[4786]: I1209 09:48:57.189336 4786 scope.go:117] "RemoveContainer" containerID="3ce240c7987c6f5a10c2a769153c5b9c34dfd722a2d08939b800b5adb90cfdca"
Dec 09 09:48:57 crc kubenswrapper[4786]: E1209 09:48:57.190131 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1"
Dec 09 09:49:08 crc kubenswrapper[4786]: I1209 09:49:08.188898 4786 scope.go:117] "RemoveContainer" containerID="3ce240c7987c6f5a10c2a769153c5b9c34dfd722a2d08939b800b5adb90cfdca"
Dec 09 09:49:08 crc kubenswrapper[4786]: E1209 09:49:08.189713 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1"
Dec 09 09:49:22 crc kubenswrapper[4786]: I1209 09:49:22.188410 4786 scope.go:117] "RemoveContainer" containerID="3ce240c7987c6f5a10c2a769153c5b9c34dfd722a2d08939b800b5adb90cfdca"
Dec 09 09:49:22 crc kubenswrapper[4786]: E1209 09:49:22.189205 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1"
Dec 09 09:49:35 crc kubenswrapper[4786]: I1209 09:49:35.196400 4786 scope.go:117] "RemoveContainer" containerID="3ce240c7987c6f5a10c2a769153c5b9c34dfd722a2d08939b800b5adb90cfdca"
Dec 09 09:49:35 crc kubenswrapper[4786]: E1209 09:49:35.197276 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1"
Dec 09 09:49:41 crc kubenswrapper[4786]: I1209 09:49:41.977349 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-x62ds"]
Dec 09 09:49:41 crc kubenswrapper[4786]: E1209 09:49:41.978458 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="199946be-04b2-48e3-a6e2-d02c620a1253" containerName="extract-content"
Dec 09 09:49:41 crc kubenswrapper[4786]: I1209 09:49:41.978477 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="199946be-04b2-48e3-a6e2-d02c620a1253" containerName="extract-content"
Dec 09 09:49:41 crc kubenswrapper[4786]: E1209 09:49:41.978491 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="199946be-04b2-48e3-a6e2-d02c620a1253" containerName="registry-server"
Dec 09 09:49:41 crc kubenswrapper[4786]: I1209 09:49:41.978498 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="199946be-04b2-48e3-a6e2-d02c620a1253" containerName="registry-server"
Dec 09 09:49:41 crc kubenswrapper[4786]: E1209 09:49:41.978509 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="199946be-04b2-48e3-a6e2-d02c620a1253" containerName="extract-utilities"
Dec 09 09:49:41 crc kubenswrapper[4786]: I1209 09:49:41.978518 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="199946be-04b2-48e3-a6e2-d02c620a1253" containerName="extract-utilities"
Dec 09 09:49:41 crc kubenswrapper[4786]: I1209 09:49:41.978816 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="199946be-04b2-48e3-a6e2-d02c620a1253" containerName="registry-server"
Dec 09 09:49:41 crc kubenswrapper[4786]: I1209 09:49:41.980953 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x62ds"
Dec 09 09:49:42 crc kubenswrapper[4786]: I1209 09:49:42.005767 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x62ds"]
Dec 09 09:49:42 crc kubenswrapper[4786]: I1209 09:49:42.134321 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcefd4b8-5c36-47f8-942d-2d3976aebb93-catalog-content\") pod \"redhat-operators-x62ds\" (UID: \"fcefd4b8-5c36-47f8-942d-2d3976aebb93\") " pod="openshift-marketplace/redhat-operators-x62ds"
Dec 09 09:49:42 crc kubenswrapper[4786]: I1209 09:49:42.134380 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcefd4b8-5c36-47f8-942d-2d3976aebb93-utilities\") pod \"redhat-operators-x62ds\" (UID: \"fcefd4b8-5c36-47f8-942d-2d3976aebb93\") " pod="openshift-marketplace/redhat-operators-x62ds"
Dec 09 09:49:42 crc kubenswrapper[4786]: I1209 09:49:42.134583 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpr8b\" (UniqueName: \"kubernetes.io/projected/fcefd4b8-5c36-47f8-942d-2d3976aebb93-kube-api-access-cpr8b\") pod \"redhat-operators-x62ds\" (UID: \"fcefd4b8-5c36-47f8-942d-2d3976aebb93\") " pod="openshift-marketplace/redhat-operators-x62ds"
Dec 09 09:49:42 crc kubenswrapper[4786]: I1209 09:49:42.236703 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcefd4b8-5c36-47f8-942d-2d3976aebb93-catalog-content\") pod \"redhat-operators-x62ds\" (UID: \"fcefd4b8-5c36-47f8-942d-2d3976aebb93\") " pod="openshift-marketplace/redhat-operators-x62ds"
Dec 09 09:49:42 crc kubenswrapper[4786]: I1209 09:49:42.236780 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcefd4b8-5c36-47f8-942d-2d3976aebb93-utilities\") pod \"redhat-operators-x62ds\" (UID: \"fcefd4b8-5c36-47f8-942d-2d3976aebb93\") " pod="openshift-marketplace/redhat-operators-x62ds"
Dec 09 09:49:42 crc kubenswrapper[4786]: I1209 09:49:42.236884 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpr8b\" (UniqueName: \"kubernetes.io/projected/fcefd4b8-5c36-47f8-942d-2d3976aebb93-kube-api-access-cpr8b\") pod \"redhat-operators-x62ds\" (UID: \"fcefd4b8-5c36-47f8-942d-2d3976aebb93\") " pod="openshift-marketplace/redhat-operators-x62ds"
Dec 09 09:49:42 crc kubenswrapper[4786]: I1209 09:49:42.237700 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcefd4b8-5c36-47f8-942d-2d3976aebb93-catalog-content\") pod \"redhat-operators-x62ds\" (UID: \"fcefd4b8-5c36-47f8-942d-2d3976aebb93\") " pod="openshift-marketplace/redhat-operators-x62ds"
Dec 09 09:49:42 crc kubenswrapper[4786]: I1209 09:49:42.237932 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcefd4b8-5c36-47f8-942d-2d3976aebb93-utilities\") pod \"redhat-operators-x62ds\" (UID: \"fcefd4b8-5c36-47f8-942d-2d3976aebb93\") " pod="openshift-marketplace/redhat-operators-x62ds"
Dec 09 09:49:42 crc kubenswrapper[4786]: I1209 09:49:42.256925 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpr8b\" (UniqueName: \"kubernetes.io/projected/fcefd4b8-5c36-47f8-942d-2d3976aebb93-kube-api-access-cpr8b\") pod \"redhat-operators-x62ds\" (UID: \"fcefd4b8-5c36-47f8-942d-2d3976aebb93\") " pod="openshift-marketplace/redhat-operators-x62ds"
Dec 09 09:49:42 crc kubenswrapper[4786]: I1209 09:49:42.308167 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x62ds"
Dec 09 09:49:42 crc kubenswrapper[4786]: I1209 09:49:42.795965 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x62ds"]
Dec 09 09:49:43 crc kubenswrapper[4786]: I1209 09:49:43.755901 4786 generic.go:334] "Generic (PLEG): container finished" podID="fcefd4b8-5c36-47f8-942d-2d3976aebb93" containerID="3142fcc8868e60c91374b71de10926030f212ac4a16b67608a9e73a035e8b29c" exitCode=0
Dec 09 09:49:43 crc kubenswrapper[4786]: I1209 09:49:43.756317 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x62ds" event={"ID":"fcefd4b8-5c36-47f8-942d-2d3976aebb93","Type":"ContainerDied","Data":"3142fcc8868e60c91374b71de10926030f212ac4a16b67608a9e73a035e8b29c"}
Dec 09 09:49:43 crc kubenswrapper[4786]: I1209 09:49:43.756353 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x62ds" event={"ID":"fcefd4b8-5c36-47f8-942d-2d3976aebb93","Type":"ContainerStarted","Data":"c48b3dcd2ad066fe7533d91d20621fe563b4a0779a24586df9681fc68db1f36d"}
Dec 09 09:49:43 crc kubenswrapper[4786]: I1209 09:49:43.759047 4786 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 09 09:49:45 crc kubenswrapper[4786]: I1209 09:49:45.783749 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x62ds" event={"ID":"fcefd4b8-5c36-47f8-942d-2d3976aebb93","Type":"ContainerStarted","Data":"7b6b7d2b1c4edcdab33f4b793ed2635c6233f9a237d44c22c933310aa5009f42"}
Dec 09 09:49:47 crc kubenswrapper[4786]: I1209 09:49:47.813979 4786 generic.go:334] "Generic (PLEG): container finished" podID="fcefd4b8-5c36-47f8-942d-2d3976aebb93" containerID="7b6b7d2b1c4edcdab33f4b793ed2635c6233f9a237d44c22c933310aa5009f42" exitCode=0
Dec 09 09:49:47 crc kubenswrapper[4786]: I1209 09:49:47.814053 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x62ds" event={"ID":"fcefd4b8-5c36-47f8-942d-2d3976aebb93","Type":"ContainerDied","Data":"7b6b7d2b1c4edcdab33f4b793ed2635c6233f9a237d44c22c933310aa5009f42"}
Dec 09 09:49:48 crc kubenswrapper[4786]: I1209 09:49:48.834516 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x62ds" event={"ID":"fcefd4b8-5c36-47f8-942d-2d3976aebb93","Type":"ContainerStarted","Data":"881138ad831c938fc31a93a8492ba76d23a0aac18572d585c4b25c2b6b08d27f"}
Dec 09 09:49:48 crc kubenswrapper[4786]: I1209 09:49:48.863927 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-x62ds" podStartSLOduration=3.295972651 podStartE2EDuration="7.863901587s" podCreationTimestamp="2025-12-09 09:49:41 +0000 UTC" firstStartedPulling="2025-12-09 09:49:43.758801994 +0000 UTC m=+3949.642423220" lastFinishedPulling="2025-12-09 09:49:48.32673093 +0000 UTC m=+3954.210352156" observedRunningTime="2025-12-09 09:49:48.857346008 +0000 UTC m=+3954.740967234" watchObservedRunningTime="2025-12-09 09:49:48.863901587 +0000 UTC m=+3954.747522813"
Dec 09 09:49:50 crc kubenswrapper[4786]: I1209 09:49:50.187685 4786 scope.go:117] "RemoveContainer" containerID="3ce240c7987c6f5a10c2a769153c5b9c34dfd722a2d08939b800b5adb90cfdca"
Dec 09 09:49:50 crc kubenswrapper[4786]: E1209 09:49:50.188271 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1"
Dec 09 09:49:52 crc kubenswrapper[4786]: I1209 09:49:52.308635 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-x62ds"
Dec 09 09:49:52 crc kubenswrapper[4786]: I1209 09:49:52.309218 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-x62ds"
Dec 09 09:49:53 crc kubenswrapper[4786]: I1209 09:49:53.351244 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x62ds" podUID="fcefd4b8-5c36-47f8-942d-2d3976aebb93" containerName="registry-server" probeResult="failure" output=<
Dec 09 09:49:53 crc kubenswrapper[4786]: timeout: failed to connect service ":50051" within 1s
Dec 09 09:49:53 crc kubenswrapper[4786]: >
Dec 09 09:50:02 crc kubenswrapper[4786]: I1209 09:50:02.375856 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-x62ds"
Dec 09 09:50:02 crc kubenswrapper[4786]: I1209 09:50:02.426032 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-x62ds"
Dec 09 09:50:03 crc kubenswrapper[4786]: I1209 09:50:03.538265 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x62ds"]
Dec 09 09:50:03 crc kubenswrapper[4786]: I1209 09:50:03.982699 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-x62ds" podUID="fcefd4b8-5c36-47f8-942d-2d3976aebb93" containerName="registry-server" containerID="cri-o://881138ad831c938fc31a93a8492ba76d23a0aac18572d585c4b25c2b6b08d27f" gracePeriod=2
Dec 09 09:50:04 crc kubenswrapper[4786]: I1209 09:50:04.189769 4786 scope.go:117] "RemoveContainer" containerID="3ce240c7987c6f5a10c2a769153c5b9c34dfd722a2d08939b800b5adb90cfdca"
Dec 09 09:50:04 crc kubenswrapper[4786]: E1209 09:50:04.192855 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1"
Dec 09 09:50:04 crc kubenswrapper[4786]: I1209 09:50:04.489409 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x62ds"
Dec 09 09:50:04 crc kubenswrapper[4786]: I1209 09:50:04.583932 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcefd4b8-5c36-47f8-942d-2d3976aebb93-utilities\") pod \"fcefd4b8-5c36-47f8-942d-2d3976aebb93\" (UID: \"fcefd4b8-5c36-47f8-942d-2d3976aebb93\") "
Dec 09 09:50:04 crc kubenswrapper[4786]: I1209 09:50:04.584116 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpr8b\" (UniqueName: \"kubernetes.io/projected/fcefd4b8-5c36-47f8-942d-2d3976aebb93-kube-api-access-cpr8b\") pod \"fcefd4b8-5c36-47f8-942d-2d3976aebb93\" (UID: \"fcefd4b8-5c36-47f8-942d-2d3976aebb93\") "
Dec 09 09:50:04 crc kubenswrapper[4786]: I1209 09:50:04.584256 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcefd4b8-5c36-47f8-942d-2d3976aebb93-catalog-content\") pod \"fcefd4b8-5c36-47f8-942d-2d3976aebb93\" (UID: \"fcefd4b8-5c36-47f8-942d-2d3976aebb93\") "
Dec 09 09:50:04 crc kubenswrapper[4786]: I1209 09:50:04.585280 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcefd4b8-5c36-47f8-942d-2d3976aebb93-utilities" (OuterVolumeSpecName: "utilities") pod "fcefd4b8-5c36-47f8-942d-2d3976aebb93" (UID: "fcefd4b8-5c36-47f8-942d-2d3976aebb93"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 09:50:04 crc kubenswrapper[4786]: I1209 09:50:04.590598 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcefd4b8-5c36-47f8-942d-2d3976aebb93-kube-api-access-cpr8b" (OuterVolumeSpecName: "kube-api-access-cpr8b") pod "fcefd4b8-5c36-47f8-942d-2d3976aebb93" (UID: "fcefd4b8-5c36-47f8-942d-2d3976aebb93"). InnerVolumeSpecName "kube-api-access-cpr8b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 09:50:04 crc kubenswrapper[4786]: I1209 09:50:04.686707 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpr8b\" (UniqueName: \"kubernetes.io/projected/fcefd4b8-5c36-47f8-942d-2d3976aebb93-kube-api-access-cpr8b\") on node \"crc\" DevicePath \"\""
Dec 09 09:50:04 crc kubenswrapper[4786]: I1209 09:50:04.686981 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fcefd4b8-5c36-47f8-942d-2d3976aebb93-utilities\") on node \"crc\" DevicePath \"\""
Dec 09 09:50:04 crc kubenswrapper[4786]: I1209 09:50:04.716929 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcefd4b8-5c36-47f8-942d-2d3976aebb93-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fcefd4b8-5c36-47f8-942d-2d3976aebb93" (UID: "fcefd4b8-5c36-47f8-942d-2d3976aebb93"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:50:04 crc kubenswrapper[4786]: I1209 09:50:04.788997 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fcefd4b8-5c36-47f8-942d-2d3976aebb93-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 09:50:04 crc kubenswrapper[4786]: I1209 09:50:04.995953 4786 generic.go:334] "Generic (PLEG): container finished" podID="fcefd4b8-5c36-47f8-942d-2d3976aebb93" containerID="881138ad831c938fc31a93a8492ba76d23a0aac18572d585c4b25c2b6b08d27f" exitCode=0 Dec 09 09:50:04 crc kubenswrapper[4786]: I1209 09:50:04.996004 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x62ds" event={"ID":"fcefd4b8-5c36-47f8-942d-2d3976aebb93","Type":"ContainerDied","Data":"881138ad831c938fc31a93a8492ba76d23a0aac18572d585c4b25c2b6b08d27f"} Dec 09 09:50:04 crc kubenswrapper[4786]: I1209 09:50:04.996039 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x62ds" event={"ID":"fcefd4b8-5c36-47f8-942d-2d3976aebb93","Type":"ContainerDied","Data":"c48b3dcd2ad066fe7533d91d20621fe563b4a0779a24586df9681fc68db1f36d"} Dec 09 09:50:04 crc kubenswrapper[4786]: I1209 09:50:04.996020 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x62ds" Dec 09 09:50:04 crc kubenswrapper[4786]: I1209 09:50:04.996054 4786 scope.go:117] "RemoveContainer" containerID="881138ad831c938fc31a93a8492ba76d23a0aac18572d585c4b25c2b6b08d27f" Dec 09 09:50:05 crc kubenswrapper[4786]: I1209 09:50:05.027153 4786 scope.go:117] "RemoveContainer" containerID="7b6b7d2b1c4edcdab33f4b793ed2635c6233f9a237d44c22c933310aa5009f42" Dec 09 09:50:05 crc kubenswrapper[4786]: I1209 09:50:05.037853 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x62ds"] Dec 09 09:50:05 crc kubenswrapper[4786]: I1209 09:50:05.048350 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-x62ds"] Dec 09 09:50:05 crc kubenswrapper[4786]: I1209 09:50:05.066472 4786 scope.go:117] "RemoveContainer" containerID="3142fcc8868e60c91374b71de10926030f212ac4a16b67608a9e73a035e8b29c" Dec 09 09:50:05 crc kubenswrapper[4786]: I1209 09:50:05.102344 4786 scope.go:117] "RemoveContainer" containerID="881138ad831c938fc31a93a8492ba76d23a0aac18572d585c4b25c2b6b08d27f" Dec 09 09:50:05 crc kubenswrapper[4786]: E1209 09:50:05.102761 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"881138ad831c938fc31a93a8492ba76d23a0aac18572d585c4b25c2b6b08d27f\": container with ID starting with 881138ad831c938fc31a93a8492ba76d23a0aac18572d585c4b25c2b6b08d27f not found: ID does not exist" containerID="881138ad831c938fc31a93a8492ba76d23a0aac18572d585c4b25c2b6b08d27f" Dec 09 09:50:05 crc kubenswrapper[4786]: I1209 09:50:05.102817 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"881138ad831c938fc31a93a8492ba76d23a0aac18572d585c4b25c2b6b08d27f"} err="failed to get container status \"881138ad831c938fc31a93a8492ba76d23a0aac18572d585c4b25c2b6b08d27f\": rpc error: code = NotFound desc = could not find container 
\"881138ad831c938fc31a93a8492ba76d23a0aac18572d585c4b25c2b6b08d27f\": container with ID starting with 881138ad831c938fc31a93a8492ba76d23a0aac18572d585c4b25c2b6b08d27f not found: ID does not exist" Dec 09 09:50:05 crc kubenswrapper[4786]: I1209 09:50:05.102853 4786 scope.go:117] "RemoveContainer" containerID="7b6b7d2b1c4edcdab33f4b793ed2635c6233f9a237d44c22c933310aa5009f42" Dec 09 09:50:05 crc kubenswrapper[4786]: E1209 09:50:05.103135 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b6b7d2b1c4edcdab33f4b793ed2635c6233f9a237d44c22c933310aa5009f42\": container with ID starting with 7b6b7d2b1c4edcdab33f4b793ed2635c6233f9a237d44c22c933310aa5009f42 not found: ID does not exist" containerID="7b6b7d2b1c4edcdab33f4b793ed2635c6233f9a237d44c22c933310aa5009f42" Dec 09 09:50:05 crc kubenswrapper[4786]: I1209 09:50:05.103171 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b6b7d2b1c4edcdab33f4b793ed2635c6233f9a237d44c22c933310aa5009f42"} err="failed to get container status \"7b6b7d2b1c4edcdab33f4b793ed2635c6233f9a237d44c22c933310aa5009f42\": rpc error: code = NotFound desc = could not find container \"7b6b7d2b1c4edcdab33f4b793ed2635c6233f9a237d44c22c933310aa5009f42\": container with ID starting with 7b6b7d2b1c4edcdab33f4b793ed2635c6233f9a237d44c22c933310aa5009f42 not found: ID does not exist" Dec 09 09:50:05 crc kubenswrapper[4786]: I1209 09:50:05.103203 4786 scope.go:117] "RemoveContainer" containerID="3142fcc8868e60c91374b71de10926030f212ac4a16b67608a9e73a035e8b29c" Dec 09 09:50:05 crc kubenswrapper[4786]: E1209 09:50:05.103628 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3142fcc8868e60c91374b71de10926030f212ac4a16b67608a9e73a035e8b29c\": container with ID starting with 3142fcc8868e60c91374b71de10926030f212ac4a16b67608a9e73a035e8b29c not found: ID does not exist" 
containerID="3142fcc8868e60c91374b71de10926030f212ac4a16b67608a9e73a035e8b29c" Dec 09 09:50:05 crc kubenswrapper[4786]: I1209 09:50:05.103654 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3142fcc8868e60c91374b71de10926030f212ac4a16b67608a9e73a035e8b29c"} err="failed to get container status \"3142fcc8868e60c91374b71de10926030f212ac4a16b67608a9e73a035e8b29c\": rpc error: code = NotFound desc = could not find container \"3142fcc8868e60c91374b71de10926030f212ac4a16b67608a9e73a035e8b29c\": container with ID starting with 3142fcc8868e60c91374b71de10926030f212ac4a16b67608a9e73a035e8b29c not found: ID does not exist" Dec 09 09:50:05 crc kubenswrapper[4786]: I1209 09:50:05.200678 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcefd4b8-5c36-47f8-942d-2d3976aebb93" path="/var/lib/kubelet/pods/fcefd4b8-5c36-47f8-942d-2d3976aebb93/volumes" Dec 09 09:50:17 crc kubenswrapper[4786]: I1209 09:50:17.189743 4786 scope.go:117] "RemoveContainer" containerID="3ce240c7987c6f5a10c2a769153c5b9c34dfd722a2d08939b800b5adb90cfdca" Dec 09 09:50:17 crc kubenswrapper[4786]: E1209 09:50:17.195255 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:50:29 crc kubenswrapper[4786]: I1209 09:50:29.189193 4786 scope.go:117] "RemoveContainer" containerID="3ce240c7987c6f5a10c2a769153c5b9c34dfd722a2d08939b800b5adb90cfdca" Dec 09 09:50:29 crc kubenswrapper[4786]: E1209 09:50:29.189985 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:50:42 crc kubenswrapper[4786]: I1209 09:50:42.188548 4786 scope.go:117] "RemoveContainer" containerID="3ce240c7987c6f5a10c2a769153c5b9c34dfd722a2d08939b800b5adb90cfdca" Dec 09 09:50:42 crc kubenswrapper[4786]: E1209 09:50:42.189650 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:50:56 crc kubenswrapper[4786]: I1209 09:50:56.189775 4786 scope.go:117] "RemoveContainer" containerID="3ce240c7987c6f5a10c2a769153c5b9c34dfd722a2d08939b800b5adb90cfdca" Dec 09 09:50:56 crc kubenswrapper[4786]: E1209 09:50:56.191732 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:51:10 crc kubenswrapper[4786]: I1209 09:51:10.188219 4786 scope.go:117] "RemoveContainer" containerID="3ce240c7987c6f5a10c2a769153c5b9c34dfd722a2d08939b800b5adb90cfdca" Dec 09 09:51:10 crc kubenswrapper[4786]: E1209 09:51:10.189112 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:51:25 crc kubenswrapper[4786]: I1209 09:51:25.196202 4786 scope.go:117] "RemoveContainer" containerID="3ce240c7987c6f5a10c2a769153c5b9c34dfd722a2d08939b800b5adb90cfdca" Dec 09 09:51:25 crc kubenswrapper[4786]: E1209 09:51:25.197086 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:51:37 crc kubenswrapper[4786]: I1209 09:51:37.188535 4786 scope.go:117] "RemoveContainer" containerID="3ce240c7987c6f5a10c2a769153c5b9c34dfd722a2d08939b800b5adb90cfdca" Dec 09 09:51:37 crc kubenswrapper[4786]: E1209 09:51:37.189339 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:51:51 crc kubenswrapper[4786]: I1209 09:51:51.189469 4786 scope.go:117] "RemoveContainer" containerID="3ce240c7987c6f5a10c2a769153c5b9c34dfd722a2d08939b800b5adb90cfdca" Dec 09 09:51:51 crc kubenswrapper[4786]: E1209 09:51:51.190383 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:52:03 crc kubenswrapper[4786]: I1209 09:52:03.203447 4786 scope.go:117] "RemoveContainer" containerID="3ce240c7987c6f5a10c2a769153c5b9c34dfd722a2d08939b800b5adb90cfdca" Dec 09 09:52:03 crc kubenswrapper[4786]: E1209 09:52:03.204649 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:52:17 crc kubenswrapper[4786]: I1209 09:52:17.189028 4786 scope.go:117] "RemoveContainer" containerID="3ce240c7987c6f5a10c2a769153c5b9c34dfd722a2d08939b800b5adb90cfdca" Dec 09 09:52:17 crc kubenswrapper[4786]: E1209 09:52:17.189818 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:52:31 crc kubenswrapper[4786]: I1209 09:52:31.188500 4786 scope.go:117] "RemoveContainer" containerID="3ce240c7987c6f5a10c2a769153c5b9c34dfd722a2d08939b800b5adb90cfdca" Dec 09 09:52:31 crc kubenswrapper[4786]: E1209 09:52:31.189319 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:52:45 crc kubenswrapper[4786]: I1209 09:52:45.196137 4786 scope.go:117] "RemoveContainer" containerID="3ce240c7987c6f5a10c2a769153c5b9c34dfd722a2d08939b800b5adb90cfdca" Dec 09 09:52:45 crc kubenswrapper[4786]: E1209 09:52:45.209150 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:52:56 crc kubenswrapper[4786]: I1209 09:52:56.188502 4786 scope.go:117] "RemoveContainer" containerID="3ce240c7987c6f5a10c2a769153c5b9c34dfd722a2d08939b800b5adb90cfdca" Dec 09 09:52:56 crc kubenswrapper[4786]: I1209 09:52:56.845106 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" event={"ID":"60c502d4-7f9e-4d39-a197-fa70dc4a56d1","Type":"ContainerStarted","Data":"082596eaad7fb08cbce4ef109bfafc4c289a07b317bbbd756a929b7a60a2dd34"} Dec 09 09:55:24 crc kubenswrapper[4786]: I1209 09:55:24.988850 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 09:55:24 crc kubenswrapper[4786]: I1209 09:55:24.989491 4786 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 09:55:54 crc kubenswrapper[4786]: I1209 09:55:54.989178 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 09:55:54 crc kubenswrapper[4786]: I1209 09:55:54.989902 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 09:56:24 crc kubenswrapper[4786]: I1209 09:56:24.988872 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 09:56:24 crc kubenswrapper[4786]: I1209 09:56:24.989617 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 09:56:24 crc kubenswrapper[4786]: I1209 09:56:24.989710 4786 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" Dec 09 09:56:24 crc 
kubenswrapper[4786]: I1209 09:56:24.991175 4786 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"082596eaad7fb08cbce4ef109bfafc4c289a07b317bbbd756a929b7a60a2dd34"} pod="openshift-machine-config-operator/machine-config-daemon-86k5n" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 09:56:24 crc kubenswrapper[4786]: I1209 09:56:24.991335 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" containerID="cri-o://082596eaad7fb08cbce4ef109bfafc4c289a07b317bbbd756a929b7a60a2dd34" gracePeriod=600 Dec 09 09:56:26 crc kubenswrapper[4786]: I1209 09:56:26.169814 4786 generic.go:334] "Generic (PLEG): container finished" podID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerID="082596eaad7fb08cbce4ef109bfafc4c289a07b317bbbd756a929b7a60a2dd34" exitCode=0 Dec 09 09:56:26 crc kubenswrapper[4786]: I1209 09:56:26.169902 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" event={"ID":"60c502d4-7f9e-4d39-a197-fa70dc4a56d1","Type":"ContainerDied","Data":"082596eaad7fb08cbce4ef109bfafc4c289a07b317bbbd756a929b7a60a2dd34"} Dec 09 09:56:26 crc kubenswrapper[4786]: I1209 09:56:26.170185 4786 scope.go:117] "RemoveContainer" containerID="3ce240c7987c6f5a10c2a769153c5b9c34dfd722a2d08939b800b5adb90cfdca" Dec 09 09:56:27 crc kubenswrapper[4786]: I1209 09:56:27.182340 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" event={"ID":"60c502d4-7f9e-4d39-a197-fa70dc4a56d1","Type":"ContainerStarted","Data":"a6c23a9a79b1152227ae7dcd8b4db9db02c9da83f2fdde0c87c8f42f26a6132c"} Dec 09 09:56:33 crc kubenswrapper[4786]: I1209 09:56:33.118029 4786 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hbfkl"] Dec 09 09:56:33 crc kubenswrapper[4786]: E1209 09:56:33.119334 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcefd4b8-5c36-47f8-942d-2d3976aebb93" containerName="extract-content" Dec 09 09:56:33 crc kubenswrapper[4786]: I1209 09:56:33.119470 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcefd4b8-5c36-47f8-942d-2d3976aebb93" containerName="extract-content" Dec 09 09:56:33 crc kubenswrapper[4786]: E1209 09:56:33.119497 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcefd4b8-5c36-47f8-942d-2d3976aebb93" containerName="registry-server" Dec 09 09:56:33 crc kubenswrapper[4786]: I1209 09:56:33.119505 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcefd4b8-5c36-47f8-942d-2d3976aebb93" containerName="registry-server" Dec 09 09:56:33 crc kubenswrapper[4786]: E1209 09:56:33.119555 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcefd4b8-5c36-47f8-942d-2d3976aebb93" containerName="extract-utilities" Dec 09 09:56:33 crc kubenswrapper[4786]: I1209 09:56:33.119564 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcefd4b8-5c36-47f8-942d-2d3976aebb93" containerName="extract-utilities" Dec 09 09:56:33 crc kubenswrapper[4786]: I1209 09:56:33.119802 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcefd4b8-5c36-47f8-942d-2d3976aebb93" containerName="registry-server" Dec 09 09:56:33 crc kubenswrapper[4786]: I1209 09:56:33.121820 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hbfkl" Dec 09 09:56:33 crc kubenswrapper[4786]: I1209 09:56:33.134099 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hbfkl"] Dec 09 09:56:33 crc kubenswrapper[4786]: I1209 09:56:33.242931 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2351afc0-5e44-4794-93e6-613466be2524-catalog-content\") pod \"certified-operators-hbfkl\" (UID: \"2351afc0-5e44-4794-93e6-613466be2524\") " pod="openshift-marketplace/certified-operators-hbfkl" Dec 09 09:56:33 crc kubenswrapper[4786]: I1209 09:56:33.243025 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2351afc0-5e44-4794-93e6-613466be2524-utilities\") pod \"certified-operators-hbfkl\" (UID: \"2351afc0-5e44-4794-93e6-613466be2524\") " pod="openshift-marketplace/certified-operators-hbfkl" Dec 09 09:56:33 crc kubenswrapper[4786]: I1209 09:56:33.243139 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wm9b6\" (UniqueName: \"kubernetes.io/projected/2351afc0-5e44-4794-93e6-613466be2524-kube-api-access-wm9b6\") pod \"certified-operators-hbfkl\" (UID: \"2351afc0-5e44-4794-93e6-613466be2524\") " pod="openshift-marketplace/certified-operators-hbfkl" Dec 09 09:56:33 crc kubenswrapper[4786]: I1209 09:56:33.345236 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2351afc0-5e44-4794-93e6-613466be2524-catalog-content\") pod \"certified-operators-hbfkl\" (UID: \"2351afc0-5e44-4794-93e6-613466be2524\") " pod="openshift-marketplace/certified-operators-hbfkl" Dec 09 09:56:33 crc kubenswrapper[4786]: I1209 09:56:33.345384 4786 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2351afc0-5e44-4794-93e6-613466be2524-utilities\") pod \"certified-operators-hbfkl\" (UID: \"2351afc0-5e44-4794-93e6-613466be2524\") " pod="openshift-marketplace/certified-operators-hbfkl" Dec 09 09:56:33 crc kubenswrapper[4786]: I1209 09:56:33.345407 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wm9b6\" (UniqueName: \"kubernetes.io/projected/2351afc0-5e44-4794-93e6-613466be2524-kube-api-access-wm9b6\") pod \"certified-operators-hbfkl\" (UID: \"2351afc0-5e44-4794-93e6-613466be2524\") " pod="openshift-marketplace/certified-operators-hbfkl" Dec 09 09:56:33 crc kubenswrapper[4786]: I1209 09:56:33.346230 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2351afc0-5e44-4794-93e6-613466be2524-catalog-content\") pod \"certified-operators-hbfkl\" (UID: \"2351afc0-5e44-4794-93e6-613466be2524\") " pod="openshift-marketplace/certified-operators-hbfkl" Dec 09 09:56:33 crc kubenswrapper[4786]: I1209 09:56:33.346658 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2351afc0-5e44-4794-93e6-613466be2524-utilities\") pod \"certified-operators-hbfkl\" (UID: \"2351afc0-5e44-4794-93e6-613466be2524\") " pod="openshift-marketplace/certified-operators-hbfkl" Dec 09 09:56:33 crc kubenswrapper[4786]: I1209 09:56:33.380417 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wm9b6\" (UniqueName: \"kubernetes.io/projected/2351afc0-5e44-4794-93e6-613466be2524-kube-api-access-wm9b6\") pod \"certified-operators-hbfkl\" (UID: \"2351afc0-5e44-4794-93e6-613466be2524\") " pod="openshift-marketplace/certified-operators-hbfkl" Dec 09 09:56:33 crc kubenswrapper[4786]: I1209 09:56:33.443235 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hbfkl" Dec 09 09:56:34 crc kubenswrapper[4786]: I1209 09:56:34.006018 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hbfkl"] Dec 09 09:56:34 crc kubenswrapper[4786]: I1209 09:56:34.248483 4786 generic.go:334] "Generic (PLEG): container finished" podID="2351afc0-5e44-4794-93e6-613466be2524" containerID="e57dd6c42392c6d89e338b19c3f3a9caa7552fa1e04d5db59c85166f99ab9d00" exitCode=0 Dec 09 09:56:34 crc kubenswrapper[4786]: I1209 09:56:34.248566 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hbfkl" event={"ID":"2351afc0-5e44-4794-93e6-613466be2524","Type":"ContainerDied","Data":"e57dd6c42392c6d89e338b19c3f3a9caa7552fa1e04d5db59c85166f99ab9d00"} Dec 09 09:56:34 crc kubenswrapper[4786]: I1209 09:56:34.248839 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hbfkl" event={"ID":"2351afc0-5e44-4794-93e6-613466be2524","Type":"ContainerStarted","Data":"7b3d4ec397457136212b873d83d8cae21e6346d95108888bb8b117786a185260"} Dec 09 09:56:34 crc kubenswrapper[4786]: I1209 09:56:34.251227 4786 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 09:56:35 crc kubenswrapper[4786]: I1209 09:56:35.259284 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hbfkl" event={"ID":"2351afc0-5e44-4794-93e6-613466be2524","Type":"ContainerStarted","Data":"88c8b99f494a34820a1b35b07d812e4f08b9372b30289ebc58b67f1e9ecde8c8"} Dec 09 09:56:36 crc kubenswrapper[4786]: I1209 09:56:36.270971 4786 generic.go:334] "Generic (PLEG): container finished" podID="2351afc0-5e44-4794-93e6-613466be2524" containerID="88c8b99f494a34820a1b35b07d812e4f08b9372b30289ebc58b67f1e9ecde8c8" exitCode=0 Dec 09 09:56:36 crc kubenswrapper[4786]: I1209 09:56:36.271090 4786 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-hbfkl" event={"ID":"2351afc0-5e44-4794-93e6-613466be2524","Type":"ContainerDied","Data":"88c8b99f494a34820a1b35b07d812e4f08b9372b30289ebc58b67f1e9ecde8c8"} Dec 09 09:56:37 crc kubenswrapper[4786]: I1209 09:56:37.283747 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hbfkl" event={"ID":"2351afc0-5e44-4794-93e6-613466be2524","Type":"ContainerStarted","Data":"06112a5badb1b726f7c50fd1cd1510068e9873dc0b632361fa7744f8849b31e6"} Dec 09 09:56:37 crc kubenswrapper[4786]: I1209 09:56:37.312239 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hbfkl" podStartSLOduration=1.8130239179999998 podStartE2EDuration="4.312215583s" podCreationTimestamp="2025-12-09 09:56:33 +0000 UTC" firstStartedPulling="2025-12-09 09:56:34.250990916 +0000 UTC m=+4360.134612142" lastFinishedPulling="2025-12-09 09:56:36.750182581 +0000 UTC m=+4362.633803807" observedRunningTime="2025-12-09 09:56:37.301509347 +0000 UTC m=+4363.185130583" watchObservedRunningTime="2025-12-09 09:56:37.312215583 +0000 UTC m=+4363.195836809" Dec 09 09:56:37 crc kubenswrapper[4786]: I1209 09:56:37.666491 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wlmxh"] Dec 09 09:56:37 crc kubenswrapper[4786]: I1209 09:56:37.669127 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wlmxh" Dec 09 09:56:37 crc kubenswrapper[4786]: I1209 09:56:37.680722 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wlmxh"] Dec 09 09:56:37 crc kubenswrapper[4786]: I1209 09:56:37.752065 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12e04c14-cfdb-4cbf-9113-2afd3bfe9035-utilities\") pod \"redhat-marketplace-wlmxh\" (UID: \"12e04c14-cfdb-4cbf-9113-2afd3bfe9035\") " pod="openshift-marketplace/redhat-marketplace-wlmxh" Dec 09 09:56:37 crc kubenswrapper[4786]: I1209 09:56:37.752176 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12e04c14-cfdb-4cbf-9113-2afd3bfe9035-catalog-content\") pod \"redhat-marketplace-wlmxh\" (UID: \"12e04c14-cfdb-4cbf-9113-2afd3bfe9035\") " pod="openshift-marketplace/redhat-marketplace-wlmxh" Dec 09 09:56:37 crc kubenswrapper[4786]: I1209 09:56:37.752303 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q6dg\" (UniqueName: \"kubernetes.io/projected/12e04c14-cfdb-4cbf-9113-2afd3bfe9035-kube-api-access-7q6dg\") pod \"redhat-marketplace-wlmxh\" (UID: \"12e04c14-cfdb-4cbf-9113-2afd3bfe9035\") " pod="openshift-marketplace/redhat-marketplace-wlmxh" Dec 09 09:56:37 crc kubenswrapper[4786]: I1209 09:56:37.854994 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12e04c14-cfdb-4cbf-9113-2afd3bfe9035-catalog-content\") pod \"redhat-marketplace-wlmxh\" (UID: \"12e04c14-cfdb-4cbf-9113-2afd3bfe9035\") " pod="openshift-marketplace/redhat-marketplace-wlmxh" Dec 09 09:56:37 crc kubenswrapper[4786]: I1209 09:56:37.855120 4786 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-7q6dg\" (UniqueName: \"kubernetes.io/projected/12e04c14-cfdb-4cbf-9113-2afd3bfe9035-kube-api-access-7q6dg\") pod \"redhat-marketplace-wlmxh\" (UID: \"12e04c14-cfdb-4cbf-9113-2afd3bfe9035\") " pod="openshift-marketplace/redhat-marketplace-wlmxh" Dec 09 09:56:37 crc kubenswrapper[4786]: I1209 09:56:37.855316 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12e04c14-cfdb-4cbf-9113-2afd3bfe9035-utilities\") pod \"redhat-marketplace-wlmxh\" (UID: \"12e04c14-cfdb-4cbf-9113-2afd3bfe9035\") " pod="openshift-marketplace/redhat-marketplace-wlmxh" Dec 09 09:56:37 crc kubenswrapper[4786]: I1209 09:56:37.855667 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12e04c14-cfdb-4cbf-9113-2afd3bfe9035-catalog-content\") pod \"redhat-marketplace-wlmxh\" (UID: \"12e04c14-cfdb-4cbf-9113-2afd3bfe9035\") " pod="openshift-marketplace/redhat-marketplace-wlmxh" Dec 09 09:56:37 crc kubenswrapper[4786]: I1209 09:56:37.856140 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12e04c14-cfdb-4cbf-9113-2afd3bfe9035-utilities\") pod \"redhat-marketplace-wlmxh\" (UID: \"12e04c14-cfdb-4cbf-9113-2afd3bfe9035\") " pod="openshift-marketplace/redhat-marketplace-wlmxh" Dec 09 09:56:37 crc kubenswrapper[4786]: I1209 09:56:37.876237 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7q6dg\" (UniqueName: \"kubernetes.io/projected/12e04c14-cfdb-4cbf-9113-2afd3bfe9035-kube-api-access-7q6dg\") pod \"redhat-marketplace-wlmxh\" (UID: \"12e04c14-cfdb-4cbf-9113-2afd3bfe9035\") " pod="openshift-marketplace/redhat-marketplace-wlmxh" Dec 09 09:56:37 crc kubenswrapper[4786]: I1209 09:56:37.992524 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wlmxh" Dec 09 09:56:38 crc kubenswrapper[4786]: I1209 09:56:38.770766 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wlmxh"] Dec 09 09:56:39 crc kubenswrapper[4786]: I1209 09:56:39.337712 4786 generic.go:334] "Generic (PLEG): container finished" podID="12e04c14-cfdb-4cbf-9113-2afd3bfe9035" containerID="344b68c39a7b397128c6957b6f16e371f81336f6467d1d6f7dcbd8621fe4d88f" exitCode=0 Dec 09 09:56:39 crc kubenswrapper[4786]: I1209 09:56:39.337778 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wlmxh" event={"ID":"12e04c14-cfdb-4cbf-9113-2afd3bfe9035","Type":"ContainerDied","Data":"344b68c39a7b397128c6957b6f16e371f81336f6467d1d6f7dcbd8621fe4d88f"} Dec 09 09:56:39 crc kubenswrapper[4786]: I1209 09:56:39.338070 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wlmxh" event={"ID":"12e04c14-cfdb-4cbf-9113-2afd3bfe9035","Type":"ContainerStarted","Data":"f6042e889a52c01071b8a47164e0f609cd037065345583a03f58adfd7637a68d"} Dec 09 09:56:40 crc kubenswrapper[4786]: I1209 09:56:40.056629 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qgzvs"] Dec 09 09:56:40 crc kubenswrapper[4786]: I1209 09:56:40.061075 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qgzvs" Dec 09 09:56:40 crc kubenswrapper[4786]: I1209 09:56:40.074955 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qgzvs"] Dec 09 09:56:40 crc kubenswrapper[4786]: I1209 09:56:40.165935 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f076a36a-59c6-4f95-90c4-7a04f4e39840-catalog-content\") pod \"community-operators-qgzvs\" (UID: \"f076a36a-59c6-4f95-90c4-7a04f4e39840\") " pod="openshift-marketplace/community-operators-qgzvs" Dec 09 09:56:40 crc kubenswrapper[4786]: I1209 09:56:40.166005 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f076a36a-59c6-4f95-90c4-7a04f4e39840-utilities\") pod \"community-operators-qgzvs\" (UID: \"f076a36a-59c6-4f95-90c4-7a04f4e39840\") " pod="openshift-marketplace/community-operators-qgzvs" Dec 09 09:56:40 crc kubenswrapper[4786]: I1209 09:56:40.166750 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cngf9\" (UniqueName: \"kubernetes.io/projected/f076a36a-59c6-4f95-90c4-7a04f4e39840-kube-api-access-cngf9\") pod \"community-operators-qgzvs\" (UID: \"f076a36a-59c6-4f95-90c4-7a04f4e39840\") " pod="openshift-marketplace/community-operators-qgzvs" Dec 09 09:56:40 crc kubenswrapper[4786]: I1209 09:56:40.268322 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cngf9\" (UniqueName: \"kubernetes.io/projected/f076a36a-59c6-4f95-90c4-7a04f4e39840-kube-api-access-cngf9\") pod \"community-operators-qgzvs\" (UID: \"f076a36a-59c6-4f95-90c4-7a04f4e39840\") " pod="openshift-marketplace/community-operators-qgzvs" Dec 09 09:56:40 crc kubenswrapper[4786]: I1209 09:56:40.268454 4786 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f076a36a-59c6-4f95-90c4-7a04f4e39840-catalog-content\") pod \"community-operators-qgzvs\" (UID: \"f076a36a-59c6-4f95-90c4-7a04f4e39840\") " pod="openshift-marketplace/community-operators-qgzvs" Dec 09 09:56:40 crc kubenswrapper[4786]: I1209 09:56:40.268483 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f076a36a-59c6-4f95-90c4-7a04f4e39840-utilities\") pod \"community-operators-qgzvs\" (UID: \"f076a36a-59c6-4f95-90c4-7a04f4e39840\") " pod="openshift-marketplace/community-operators-qgzvs" Dec 09 09:56:40 crc kubenswrapper[4786]: I1209 09:56:40.269103 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f076a36a-59c6-4f95-90c4-7a04f4e39840-catalog-content\") pod \"community-operators-qgzvs\" (UID: \"f076a36a-59c6-4f95-90c4-7a04f4e39840\") " pod="openshift-marketplace/community-operators-qgzvs" Dec 09 09:56:40 crc kubenswrapper[4786]: I1209 09:56:40.269137 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f076a36a-59c6-4f95-90c4-7a04f4e39840-utilities\") pod \"community-operators-qgzvs\" (UID: \"f076a36a-59c6-4f95-90c4-7a04f4e39840\") " pod="openshift-marketplace/community-operators-qgzvs" Dec 09 09:56:40 crc kubenswrapper[4786]: I1209 09:56:40.305822 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cngf9\" (UniqueName: \"kubernetes.io/projected/f076a36a-59c6-4f95-90c4-7a04f4e39840-kube-api-access-cngf9\") pod \"community-operators-qgzvs\" (UID: \"f076a36a-59c6-4f95-90c4-7a04f4e39840\") " pod="openshift-marketplace/community-operators-qgzvs" Dec 09 09:56:40 crc kubenswrapper[4786]: I1209 09:56:40.353935 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-wlmxh" event={"ID":"12e04c14-cfdb-4cbf-9113-2afd3bfe9035","Type":"ContainerStarted","Data":"ffabbfd9be214e1cfadb7b7d06370b6634903dc470d2fc9c0312a48c1922ca49"} Dec 09 09:56:40 crc kubenswrapper[4786]: I1209 09:56:40.399836 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qgzvs" Dec 09 09:56:40 crc kubenswrapper[4786]: I1209 09:56:40.923807 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qgzvs"] Dec 09 09:56:41 crc kubenswrapper[4786]: I1209 09:56:41.364954 4786 generic.go:334] "Generic (PLEG): container finished" podID="f076a36a-59c6-4f95-90c4-7a04f4e39840" containerID="25a68e2c934966c0b8e868d30b97d1db79457a459645575ba5362aae2c8156ad" exitCode=0 Dec 09 09:56:41 crc kubenswrapper[4786]: I1209 09:56:41.365032 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgzvs" event={"ID":"f076a36a-59c6-4f95-90c4-7a04f4e39840","Type":"ContainerDied","Data":"25a68e2c934966c0b8e868d30b97d1db79457a459645575ba5362aae2c8156ad"} Dec 09 09:56:41 crc kubenswrapper[4786]: I1209 09:56:41.365064 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgzvs" event={"ID":"f076a36a-59c6-4f95-90c4-7a04f4e39840","Type":"ContainerStarted","Data":"46066de4dcc7eb0f1877cd2ff59f8050c8fd3048c436f131ce53d3a8c13d2e14"} Dec 09 09:56:41 crc kubenswrapper[4786]: I1209 09:56:41.370733 4786 generic.go:334] "Generic (PLEG): container finished" podID="12e04c14-cfdb-4cbf-9113-2afd3bfe9035" containerID="ffabbfd9be214e1cfadb7b7d06370b6634903dc470d2fc9c0312a48c1922ca49" exitCode=0 Dec 09 09:56:41 crc kubenswrapper[4786]: I1209 09:56:41.370802 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wlmxh" 
event={"ID":"12e04c14-cfdb-4cbf-9113-2afd3bfe9035","Type":"ContainerDied","Data":"ffabbfd9be214e1cfadb7b7d06370b6634903dc470d2fc9c0312a48c1922ca49"} Dec 09 09:56:42 crc kubenswrapper[4786]: I1209 09:56:42.383928 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgzvs" event={"ID":"f076a36a-59c6-4f95-90c4-7a04f4e39840","Type":"ContainerStarted","Data":"ddad9ab41bfc614740127a821d7a1e27c4a39945afcf31c82cdd15c1d66e9654"} Dec 09 09:56:42 crc kubenswrapper[4786]: I1209 09:56:42.386697 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wlmxh" event={"ID":"12e04c14-cfdb-4cbf-9113-2afd3bfe9035","Type":"ContainerStarted","Data":"ef6249a388b8085bc4c52fa1dbd88a4f4564e1c5e3b112694cfcf3e7e235a2f6"} Dec 09 09:56:42 crc kubenswrapper[4786]: I1209 09:56:42.432253 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wlmxh" podStartSLOduration=2.9697847040000003 podStartE2EDuration="5.432235563s" podCreationTimestamp="2025-12-09 09:56:37 +0000 UTC" firstStartedPulling="2025-12-09 09:56:39.339624958 +0000 UTC m=+4365.223246184" lastFinishedPulling="2025-12-09 09:56:41.802075817 +0000 UTC m=+4367.685697043" observedRunningTime="2025-12-09 09:56:42.427164142 +0000 UTC m=+4368.310785388" watchObservedRunningTime="2025-12-09 09:56:42.432235563 +0000 UTC m=+4368.315856789" Dec 09 09:56:43 crc kubenswrapper[4786]: I1209 09:56:43.444596 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hbfkl" Dec 09 09:56:43 crc kubenswrapper[4786]: I1209 09:56:43.446140 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hbfkl" Dec 09 09:56:43 crc kubenswrapper[4786]: I1209 09:56:43.540599 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-hbfkl" Dec 09 09:56:44 crc kubenswrapper[4786]: I1209 09:56:44.516929 4786 generic.go:334] "Generic (PLEG): container finished" podID="f076a36a-59c6-4f95-90c4-7a04f4e39840" containerID="ddad9ab41bfc614740127a821d7a1e27c4a39945afcf31c82cdd15c1d66e9654" exitCode=0 Dec 09 09:56:44 crc kubenswrapper[4786]: I1209 09:56:44.517045 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgzvs" event={"ID":"f076a36a-59c6-4f95-90c4-7a04f4e39840","Type":"ContainerDied","Data":"ddad9ab41bfc614740127a821d7a1e27c4a39945afcf31c82cdd15c1d66e9654"} Dec 09 09:56:44 crc kubenswrapper[4786]: I1209 09:56:44.574860 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hbfkl" Dec 09 09:56:46 crc kubenswrapper[4786]: I1209 09:56:46.543618 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgzvs" event={"ID":"f076a36a-59c6-4f95-90c4-7a04f4e39840","Type":"ContainerStarted","Data":"8e28f6b87025b8e1f5ade597d0e9ee6e87f2b5d07242193301b15681f05ccad6"} Dec 09 09:56:46 crc kubenswrapper[4786]: I1209 09:56:46.574902 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qgzvs" podStartSLOduration=2.993091467 podStartE2EDuration="6.574885607s" podCreationTimestamp="2025-12-09 09:56:40 +0000 UTC" firstStartedPulling="2025-12-09 09:56:41.367286728 +0000 UTC m=+4367.250907954" lastFinishedPulling="2025-12-09 09:56:44.949080838 +0000 UTC m=+4370.832702094" observedRunningTime="2025-12-09 09:56:46.571989148 +0000 UTC m=+4372.455610374" watchObservedRunningTime="2025-12-09 09:56:46.574885607 +0000 UTC m=+4372.458506833" Dec 09 09:56:47 crc kubenswrapper[4786]: I1209 09:56:47.049528 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hbfkl"] Dec 09 09:56:47 crc kubenswrapper[4786]: 
I1209 09:56:47.553200 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hbfkl" podUID="2351afc0-5e44-4794-93e6-613466be2524" containerName="registry-server" containerID="cri-o://06112a5badb1b726f7c50fd1cd1510068e9873dc0b632361fa7744f8849b31e6" gracePeriod=2 Dec 09 09:56:47 crc kubenswrapper[4786]: I1209 09:56:47.992730 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wlmxh" Dec 09 09:56:47 crc kubenswrapper[4786]: I1209 09:56:47.993116 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wlmxh" Dec 09 09:56:48 crc kubenswrapper[4786]: I1209 09:56:48.041265 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wlmxh" Dec 09 09:56:48 crc kubenswrapper[4786]: I1209 09:56:48.080219 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hbfkl" Dec 09 09:56:48 crc kubenswrapper[4786]: I1209 09:56:48.244666 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wm9b6\" (UniqueName: \"kubernetes.io/projected/2351afc0-5e44-4794-93e6-613466be2524-kube-api-access-wm9b6\") pod \"2351afc0-5e44-4794-93e6-613466be2524\" (UID: \"2351afc0-5e44-4794-93e6-613466be2524\") " Dec 09 09:56:48 crc kubenswrapper[4786]: I1209 09:56:48.244754 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2351afc0-5e44-4794-93e6-613466be2524-catalog-content\") pod \"2351afc0-5e44-4794-93e6-613466be2524\" (UID: \"2351afc0-5e44-4794-93e6-613466be2524\") " Dec 09 09:56:48 crc kubenswrapper[4786]: I1209 09:56:48.244841 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2351afc0-5e44-4794-93e6-613466be2524-utilities\") pod \"2351afc0-5e44-4794-93e6-613466be2524\" (UID: \"2351afc0-5e44-4794-93e6-613466be2524\") " Dec 09 09:56:48 crc kubenswrapper[4786]: I1209 09:56:48.245986 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2351afc0-5e44-4794-93e6-613466be2524-utilities" (OuterVolumeSpecName: "utilities") pod "2351afc0-5e44-4794-93e6-613466be2524" (UID: "2351afc0-5e44-4794-93e6-613466be2524"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:56:48 crc kubenswrapper[4786]: I1209 09:56:48.250977 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2351afc0-5e44-4794-93e6-613466be2524-kube-api-access-wm9b6" (OuterVolumeSpecName: "kube-api-access-wm9b6") pod "2351afc0-5e44-4794-93e6-613466be2524" (UID: "2351afc0-5e44-4794-93e6-613466be2524"). InnerVolumeSpecName "kube-api-access-wm9b6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:56:48 crc kubenswrapper[4786]: I1209 09:56:48.302263 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2351afc0-5e44-4794-93e6-613466be2524-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2351afc0-5e44-4794-93e6-613466be2524" (UID: "2351afc0-5e44-4794-93e6-613466be2524"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:56:48 crc kubenswrapper[4786]: I1209 09:56:48.348023 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wm9b6\" (UniqueName: \"kubernetes.io/projected/2351afc0-5e44-4794-93e6-613466be2524-kube-api-access-wm9b6\") on node \"crc\" DevicePath \"\"" Dec 09 09:56:48 crc kubenswrapper[4786]: I1209 09:56:48.348083 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2351afc0-5e44-4794-93e6-613466be2524-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 09:56:48 crc kubenswrapper[4786]: I1209 09:56:48.348095 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2351afc0-5e44-4794-93e6-613466be2524-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 09:56:48 crc kubenswrapper[4786]: I1209 09:56:48.568261 4786 generic.go:334] "Generic (PLEG): container finished" podID="2351afc0-5e44-4794-93e6-613466be2524" containerID="06112a5badb1b726f7c50fd1cd1510068e9873dc0b632361fa7744f8849b31e6" exitCode=0 Dec 09 09:56:48 crc kubenswrapper[4786]: I1209 09:56:48.568358 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hbfkl" Dec 09 09:56:48 crc kubenswrapper[4786]: I1209 09:56:48.568372 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hbfkl" event={"ID":"2351afc0-5e44-4794-93e6-613466be2524","Type":"ContainerDied","Data":"06112a5badb1b726f7c50fd1cd1510068e9873dc0b632361fa7744f8849b31e6"} Dec 09 09:56:48 crc kubenswrapper[4786]: I1209 09:56:48.568468 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hbfkl" event={"ID":"2351afc0-5e44-4794-93e6-613466be2524","Type":"ContainerDied","Data":"7b3d4ec397457136212b873d83d8cae21e6346d95108888bb8b117786a185260"} Dec 09 09:56:48 crc kubenswrapper[4786]: I1209 09:56:48.568495 4786 scope.go:117] "RemoveContainer" containerID="06112a5badb1b726f7c50fd1cd1510068e9873dc0b632361fa7744f8849b31e6" Dec 09 09:56:48 crc kubenswrapper[4786]: I1209 09:56:48.609602 4786 scope.go:117] "RemoveContainer" containerID="88c8b99f494a34820a1b35b07d812e4f08b9372b30289ebc58b67f1e9ecde8c8" Dec 09 09:56:48 crc kubenswrapper[4786]: I1209 09:56:48.613281 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hbfkl"] Dec 09 09:56:48 crc kubenswrapper[4786]: I1209 09:56:48.624748 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hbfkl"] Dec 09 09:56:48 crc kubenswrapper[4786]: I1209 09:56:48.626708 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wlmxh" Dec 09 09:56:48 crc kubenswrapper[4786]: I1209 09:56:48.643656 4786 scope.go:117] "RemoveContainer" containerID="e57dd6c42392c6d89e338b19c3f3a9caa7552fa1e04d5db59c85166f99ab9d00" Dec 09 09:56:48 crc kubenswrapper[4786]: I1209 09:56:48.700096 4786 scope.go:117] "RemoveContainer" containerID="06112a5badb1b726f7c50fd1cd1510068e9873dc0b632361fa7744f8849b31e6" Dec 09 09:56:48 crc 
kubenswrapper[4786]: E1209 09:56:48.701102 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06112a5badb1b726f7c50fd1cd1510068e9873dc0b632361fa7744f8849b31e6\": container with ID starting with 06112a5badb1b726f7c50fd1cd1510068e9873dc0b632361fa7744f8849b31e6 not found: ID does not exist" containerID="06112a5badb1b726f7c50fd1cd1510068e9873dc0b632361fa7744f8849b31e6" Dec 09 09:56:48 crc kubenswrapper[4786]: I1209 09:56:48.701139 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06112a5badb1b726f7c50fd1cd1510068e9873dc0b632361fa7744f8849b31e6"} err="failed to get container status \"06112a5badb1b726f7c50fd1cd1510068e9873dc0b632361fa7744f8849b31e6\": rpc error: code = NotFound desc = could not find container \"06112a5badb1b726f7c50fd1cd1510068e9873dc0b632361fa7744f8849b31e6\": container with ID starting with 06112a5badb1b726f7c50fd1cd1510068e9873dc0b632361fa7744f8849b31e6 not found: ID does not exist" Dec 09 09:56:48 crc kubenswrapper[4786]: I1209 09:56:48.701166 4786 scope.go:117] "RemoveContainer" containerID="88c8b99f494a34820a1b35b07d812e4f08b9372b30289ebc58b67f1e9ecde8c8" Dec 09 09:56:48 crc kubenswrapper[4786]: E1209 09:56:48.702259 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88c8b99f494a34820a1b35b07d812e4f08b9372b30289ebc58b67f1e9ecde8c8\": container with ID starting with 88c8b99f494a34820a1b35b07d812e4f08b9372b30289ebc58b67f1e9ecde8c8 not found: ID does not exist" containerID="88c8b99f494a34820a1b35b07d812e4f08b9372b30289ebc58b67f1e9ecde8c8" Dec 09 09:56:48 crc kubenswrapper[4786]: I1209 09:56:48.702288 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88c8b99f494a34820a1b35b07d812e4f08b9372b30289ebc58b67f1e9ecde8c8"} err="failed to get container status 
\"88c8b99f494a34820a1b35b07d812e4f08b9372b30289ebc58b67f1e9ecde8c8\": rpc error: code = NotFound desc = could not find container \"88c8b99f494a34820a1b35b07d812e4f08b9372b30289ebc58b67f1e9ecde8c8\": container with ID starting with 88c8b99f494a34820a1b35b07d812e4f08b9372b30289ebc58b67f1e9ecde8c8 not found: ID does not exist" Dec 09 09:56:48 crc kubenswrapper[4786]: I1209 09:56:48.702305 4786 scope.go:117] "RemoveContainer" containerID="e57dd6c42392c6d89e338b19c3f3a9caa7552fa1e04d5db59c85166f99ab9d00" Dec 09 09:56:48 crc kubenswrapper[4786]: E1209 09:56:48.706668 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e57dd6c42392c6d89e338b19c3f3a9caa7552fa1e04d5db59c85166f99ab9d00\": container with ID starting with e57dd6c42392c6d89e338b19c3f3a9caa7552fa1e04d5db59c85166f99ab9d00 not found: ID does not exist" containerID="e57dd6c42392c6d89e338b19c3f3a9caa7552fa1e04d5db59c85166f99ab9d00" Dec 09 09:56:48 crc kubenswrapper[4786]: I1209 09:56:48.706723 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e57dd6c42392c6d89e338b19c3f3a9caa7552fa1e04d5db59c85166f99ab9d00"} err="failed to get container status \"e57dd6c42392c6d89e338b19c3f3a9caa7552fa1e04d5db59c85166f99ab9d00\": rpc error: code = NotFound desc = could not find container \"e57dd6c42392c6d89e338b19c3f3a9caa7552fa1e04d5db59c85166f99ab9d00\": container with ID starting with e57dd6c42392c6d89e338b19c3f3a9caa7552fa1e04d5db59c85166f99ab9d00 not found: ID does not exist" Dec 09 09:56:49 crc kubenswrapper[4786]: I1209 09:56:49.207460 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2351afc0-5e44-4794-93e6-613466be2524" path="/var/lib/kubelet/pods/2351afc0-5e44-4794-93e6-613466be2524/volumes" Dec 09 09:56:50 crc kubenswrapper[4786]: I1209 09:56:50.520099 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qgzvs" 
Dec 09 09:56:50 crc kubenswrapper[4786]: I1209 09:56:50.520474 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qgzvs" Dec 09 09:56:50 crc kubenswrapper[4786]: I1209 09:56:50.578046 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qgzvs" Dec 09 09:56:50 crc kubenswrapper[4786]: I1209 09:56:50.643935 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qgzvs" Dec 09 09:56:51 crc kubenswrapper[4786]: I1209 09:56:51.465983 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wlmxh"] Dec 09 09:56:51 crc kubenswrapper[4786]: I1209 09:56:51.466489 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wlmxh" podUID="12e04c14-cfdb-4cbf-9113-2afd3bfe9035" containerName="registry-server" containerID="cri-o://ef6249a388b8085bc4c52fa1dbd88a4f4564e1c5e3b112694cfcf3e7e235a2f6" gracePeriod=2 Dec 09 09:56:51 crc kubenswrapper[4786]: I1209 09:56:51.612579 4786 generic.go:334] "Generic (PLEG): container finished" podID="12e04c14-cfdb-4cbf-9113-2afd3bfe9035" containerID="ef6249a388b8085bc4c52fa1dbd88a4f4564e1c5e3b112694cfcf3e7e235a2f6" exitCode=0 Dec 09 09:56:51 crc kubenswrapper[4786]: I1209 09:56:51.613694 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wlmxh" event={"ID":"12e04c14-cfdb-4cbf-9113-2afd3bfe9035","Type":"ContainerDied","Data":"ef6249a388b8085bc4c52fa1dbd88a4f4564e1c5e3b112694cfcf3e7e235a2f6"} Dec 09 09:56:51 crc kubenswrapper[4786]: I1209 09:56:51.937367 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wlmxh" Dec 09 09:56:51 crc kubenswrapper[4786]: I1209 09:56:51.959126 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7q6dg\" (UniqueName: \"kubernetes.io/projected/12e04c14-cfdb-4cbf-9113-2afd3bfe9035-kube-api-access-7q6dg\") pod \"12e04c14-cfdb-4cbf-9113-2afd3bfe9035\" (UID: \"12e04c14-cfdb-4cbf-9113-2afd3bfe9035\") " Dec 09 09:56:51 crc kubenswrapper[4786]: I1209 09:56:51.959197 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12e04c14-cfdb-4cbf-9113-2afd3bfe9035-catalog-content\") pod \"12e04c14-cfdb-4cbf-9113-2afd3bfe9035\" (UID: \"12e04c14-cfdb-4cbf-9113-2afd3bfe9035\") " Dec 09 09:56:51 crc kubenswrapper[4786]: I1209 09:56:51.959240 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12e04c14-cfdb-4cbf-9113-2afd3bfe9035-utilities\") pod \"12e04c14-cfdb-4cbf-9113-2afd3bfe9035\" (UID: \"12e04c14-cfdb-4cbf-9113-2afd3bfe9035\") " Dec 09 09:56:51 crc kubenswrapper[4786]: I1209 09:56:51.960570 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12e04c14-cfdb-4cbf-9113-2afd3bfe9035-utilities" (OuterVolumeSpecName: "utilities") pod "12e04c14-cfdb-4cbf-9113-2afd3bfe9035" (UID: "12e04c14-cfdb-4cbf-9113-2afd3bfe9035"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:56:51 crc kubenswrapper[4786]: I1209 09:56:51.968943 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12e04c14-cfdb-4cbf-9113-2afd3bfe9035-kube-api-access-7q6dg" (OuterVolumeSpecName: "kube-api-access-7q6dg") pod "12e04c14-cfdb-4cbf-9113-2afd3bfe9035" (UID: "12e04c14-cfdb-4cbf-9113-2afd3bfe9035"). InnerVolumeSpecName "kube-api-access-7q6dg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:56:51 crc kubenswrapper[4786]: I1209 09:56:51.985291 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12e04c14-cfdb-4cbf-9113-2afd3bfe9035-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "12e04c14-cfdb-4cbf-9113-2afd3bfe9035" (UID: "12e04c14-cfdb-4cbf-9113-2afd3bfe9035"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:56:52 crc kubenswrapper[4786]: I1209 09:56:52.061515 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7q6dg\" (UniqueName: \"kubernetes.io/projected/12e04c14-cfdb-4cbf-9113-2afd3bfe9035-kube-api-access-7q6dg\") on node \"crc\" DevicePath \"\"" Dec 09 09:56:52 crc kubenswrapper[4786]: I1209 09:56:52.061560 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12e04c14-cfdb-4cbf-9113-2afd3bfe9035-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 09:56:52 crc kubenswrapper[4786]: I1209 09:56:52.061575 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12e04c14-cfdb-4cbf-9113-2afd3bfe9035-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 09:56:52 crc kubenswrapper[4786]: I1209 09:56:52.634392 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wlmxh" event={"ID":"12e04c14-cfdb-4cbf-9113-2afd3bfe9035","Type":"ContainerDied","Data":"f6042e889a52c01071b8a47164e0f609cd037065345583a03f58adfd7637a68d"} Dec 09 09:56:52 crc kubenswrapper[4786]: I1209 09:56:52.634545 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wlmxh" Dec 09 09:56:52 crc kubenswrapper[4786]: I1209 09:56:52.634550 4786 scope.go:117] "RemoveContainer" containerID="ef6249a388b8085bc4c52fa1dbd88a4f4564e1c5e3b112694cfcf3e7e235a2f6" Dec 09 09:56:52 crc kubenswrapper[4786]: I1209 09:56:52.660694 4786 scope.go:117] "RemoveContainer" containerID="ffabbfd9be214e1cfadb7b7d06370b6634903dc470d2fc9c0312a48c1922ca49" Dec 09 09:56:52 crc kubenswrapper[4786]: I1209 09:56:52.682809 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wlmxh"] Dec 09 09:56:52 crc kubenswrapper[4786]: I1209 09:56:52.691722 4786 scope.go:117] "RemoveContainer" containerID="344b68c39a7b397128c6957b6f16e371f81336f6467d1d6f7dcbd8621fe4d88f" Dec 09 09:56:52 crc kubenswrapper[4786]: I1209 09:56:52.698824 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wlmxh"] Dec 09 09:56:53 crc kubenswrapper[4786]: I1209 09:56:53.200212 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12e04c14-cfdb-4cbf-9113-2afd3bfe9035" path="/var/lib/kubelet/pods/12e04c14-cfdb-4cbf-9113-2afd3bfe9035/volumes" Dec 09 09:56:53 crc kubenswrapper[4786]: I1209 09:56:53.844257 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qgzvs"] Dec 09 09:56:53 crc kubenswrapper[4786]: I1209 09:56:53.844497 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qgzvs" podUID="f076a36a-59c6-4f95-90c4-7a04f4e39840" containerName="registry-server" containerID="cri-o://8e28f6b87025b8e1f5ade597d0e9ee6e87f2b5d07242193301b15681f05ccad6" gracePeriod=2 Dec 09 09:56:54 crc kubenswrapper[4786]: I1209 09:56:54.663338 4786 generic.go:334] "Generic (PLEG): container finished" podID="f076a36a-59c6-4f95-90c4-7a04f4e39840" containerID="8e28f6b87025b8e1f5ade597d0e9ee6e87f2b5d07242193301b15681f05ccad6" 
exitCode=0 Dec 09 09:56:54 crc kubenswrapper[4786]: I1209 09:56:54.663700 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgzvs" event={"ID":"f076a36a-59c6-4f95-90c4-7a04f4e39840","Type":"ContainerDied","Data":"8e28f6b87025b8e1f5ade597d0e9ee6e87f2b5d07242193301b15681f05ccad6"} Dec 09 09:56:54 crc kubenswrapper[4786]: I1209 09:56:54.890489 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qgzvs" Dec 09 09:56:54 crc kubenswrapper[4786]: I1209 09:56:54.926484 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f076a36a-59c6-4f95-90c4-7a04f4e39840-catalog-content\") pod \"f076a36a-59c6-4f95-90c4-7a04f4e39840\" (UID: \"f076a36a-59c6-4f95-90c4-7a04f4e39840\") " Dec 09 09:56:54 crc kubenswrapper[4786]: I1209 09:56:54.926621 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f076a36a-59c6-4f95-90c4-7a04f4e39840-utilities\") pod \"f076a36a-59c6-4f95-90c4-7a04f4e39840\" (UID: \"f076a36a-59c6-4f95-90c4-7a04f4e39840\") " Dec 09 09:56:54 crc kubenswrapper[4786]: I1209 09:56:54.926717 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cngf9\" (UniqueName: \"kubernetes.io/projected/f076a36a-59c6-4f95-90c4-7a04f4e39840-kube-api-access-cngf9\") pod \"f076a36a-59c6-4f95-90c4-7a04f4e39840\" (UID: \"f076a36a-59c6-4f95-90c4-7a04f4e39840\") " Dec 09 09:56:54 crc kubenswrapper[4786]: I1209 09:56:54.927939 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f076a36a-59c6-4f95-90c4-7a04f4e39840-utilities" (OuterVolumeSpecName: "utilities") pod "f076a36a-59c6-4f95-90c4-7a04f4e39840" (UID: "f076a36a-59c6-4f95-90c4-7a04f4e39840"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:56:54 crc kubenswrapper[4786]: I1209 09:56:54.935718 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f076a36a-59c6-4f95-90c4-7a04f4e39840-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 09:56:54 crc kubenswrapper[4786]: I1209 09:56:54.968901 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f076a36a-59c6-4f95-90c4-7a04f4e39840-kube-api-access-cngf9" (OuterVolumeSpecName: "kube-api-access-cngf9") pod "f076a36a-59c6-4f95-90c4-7a04f4e39840" (UID: "f076a36a-59c6-4f95-90c4-7a04f4e39840"). InnerVolumeSpecName "kube-api-access-cngf9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 09:56:55 crc kubenswrapper[4786]: I1209 09:56:55.022651 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f076a36a-59c6-4f95-90c4-7a04f4e39840-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f076a36a-59c6-4f95-90c4-7a04f4e39840" (UID: "f076a36a-59c6-4f95-90c4-7a04f4e39840"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 09:56:55 crc kubenswrapper[4786]: I1209 09:56:55.073971 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cngf9\" (UniqueName: \"kubernetes.io/projected/f076a36a-59c6-4f95-90c4-7a04f4e39840-kube-api-access-cngf9\") on node \"crc\" DevicePath \"\"" Dec 09 09:56:55 crc kubenswrapper[4786]: I1209 09:56:55.074012 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f076a36a-59c6-4f95-90c4-7a04f4e39840-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 09:56:55 crc kubenswrapper[4786]: I1209 09:56:55.676519 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qgzvs" event={"ID":"f076a36a-59c6-4f95-90c4-7a04f4e39840","Type":"ContainerDied","Data":"46066de4dcc7eb0f1877cd2ff59f8050c8fd3048c436f131ce53d3a8c13d2e14"} Dec 09 09:56:55 crc kubenswrapper[4786]: I1209 09:56:55.676608 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qgzvs" Dec 09 09:56:55 crc kubenswrapper[4786]: I1209 09:56:55.676853 4786 scope.go:117] "RemoveContainer" containerID="8e28f6b87025b8e1f5ade597d0e9ee6e87f2b5d07242193301b15681f05ccad6" Dec 09 09:56:55 crc kubenswrapper[4786]: I1209 09:56:55.701585 4786 scope.go:117] "RemoveContainer" containerID="ddad9ab41bfc614740127a821d7a1e27c4a39945afcf31c82cdd15c1d66e9654" Dec 09 09:56:55 crc kubenswrapper[4786]: I1209 09:56:55.705403 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qgzvs"] Dec 09 09:56:55 crc kubenswrapper[4786]: I1209 09:56:55.716461 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qgzvs"] Dec 09 09:56:55 crc kubenswrapper[4786]: I1209 09:56:55.734314 4786 scope.go:117] "RemoveContainer" containerID="25a68e2c934966c0b8e868d30b97d1db79457a459645575ba5362aae2c8156ad" Dec 09 09:56:57 crc kubenswrapper[4786]: I1209 09:56:57.201191 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f076a36a-59c6-4f95-90c4-7a04f4e39840" path="/var/lib/kubelet/pods/f076a36a-59c6-4f95-90c4-7a04f4e39840/volumes" Dec 09 09:57:05 crc kubenswrapper[4786]: E1209 09:57:04.999866 4786 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf076a36a_59c6_4f95_90c4_7a04f4e39840.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf076a36a_59c6_4f95_90c4_7a04f4e39840.slice/crio-46066de4dcc7eb0f1877cd2ff59f8050c8fd3048c436f131ce53d3a8c13d2e14\": RecentStats: unable to find data in memory cache]" Dec 09 09:57:15 crc kubenswrapper[4786]: E1209 09:57:15.274513 4786 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf076a36a_59c6_4f95_90c4_7a04f4e39840.slice/crio-46066de4dcc7eb0f1877cd2ff59f8050c8fd3048c436f131ce53d3a8c13d2e14\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf076a36a_59c6_4f95_90c4_7a04f4e39840.slice\": RecentStats: unable to find data in memory cache]" Dec 09 09:57:25 crc kubenswrapper[4786]: E1209 09:57:25.543923 4786 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf076a36a_59c6_4f95_90c4_7a04f4e39840.slice/crio-46066de4dcc7eb0f1877cd2ff59f8050c8fd3048c436f131ce53d3a8c13d2e14\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf076a36a_59c6_4f95_90c4_7a04f4e39840.slice\": RecentStats: unable to find data in memory cache]" Dec 09 09:57:35 crc kubenswrapper[4786]: E1209 09:57:35.810274 4786 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf076a36a_59c6_4f95_90c4_7a04f4e39840.slice/crio-46066de4dcc7eb0f1877cd2ff59f8050c8fd3048c436f131ce53d3a8c13d2e14\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf076a36a_59c6_4f95_90c4_7a04f4e39840.slice\": RecentStats: unable to find data in memory cache]" Dec 09 09:57:46 crc kubenswrapper[4786]: E1209 09:57:46.105068 4786 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf076a36a_59c6_4f95_90c4_7a04f4e39840.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf076a36a_59c6_4f95_90c4_7a04f4e39840.slice/crio-46066de4dcc7eb0f1877cd2ff59f8050c8fd3048c436f131ce53d3a8c13d2e14\": RecentStats: unable to find data in memory cache]" Dec 09 09:58:54 crc kubenswrapper[4786]: I1209 09:58:54.988975 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 09:58:54 crc kubenswrapper[4786]: I1209 09:58:54.989698 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 09:59:24 crc kubenswrapper[4786]: I1209 09:59:24.989664 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 09:59:24 crc kubenswrapper[4786]: I1209 09:59:24.990199 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 09:59:54 crc kubenswrapper[4786]: I1209 09:59:54.989566 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 09:59:54 crc kubenswrapper[4786]: I1209 09:59:54.990785 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 09:59:54 crc kubenswrapper[4786]: I1209 09:59:54.990860 4786 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" Dec 09 09:59:54 crc kubenswrapper[4786]: I1209 09:59:54.991725 4786 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a6c23a9a79b1152227ae7dcd8b4db9db02c9da83f2fdde0c87c8f42f26a6132c"} pod="openshift-machine-config-operator/machine-config-daemon-86k5n" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 09:59:54 crc kubenswrapper[4786]: I1209 09:59:54.991785 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" containerID="cri-o://a6c23a9a79b1152227ae7dcd8b4db9db02c9da83f2fdde0c87c8f42f26a6132c" gracePeriod=600 Dec 09 09:59:55 crc kubenswrapper[4786]: I1209 09:59:55.617611 4786 generic.go:334] "Generic (PLEG): container finished" podID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerID="a6c23a9a79b1152227ae7dcd8b4db9db02c9da83f2fdde0c87c8f42f26a6132c" exitCode=0 Dec 09 09:59:55 crc kubenswrapper[4786]: I1209 09:59:55.617668 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" 
event={"ID":"60c502d4-7f9e-4d39-a197-fa70dc4a56d1","Type":"ContainerDied","Data":"a6c23a9a79b1152227ae7dcd8b4db9db02c9da83f2fdde0c87c8f42f26a6132c"} Dec 09 09:59:55 crc kubenswrapper[4786]: I1209 09:59:55.617717 4786 scope.go:117] "RemoveContainer" containerID="082596eaad7fb08cbce4ef109bfafc4c289a07b317bbbd756a929b7a60a2dd34" Dec 09 09:59:55 crc kubenswrapper[4786]: E1209 09:59:55.709367 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:59:56 crc kubenswrapper[4786]: I1209 09:59:56.629583 4786 scope.go:117] "RemoveContainer" containerID="a6c23a9a79b1152227ae7dcd8b4db9db02c9da83f2fdde0c87c8f42f26a6132c" Dec 09 09:59:56 crc kubenswrapper[4786]: E1209 09:59:56.630209 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 09:59:58 crc kubenswrapper[4786]: I1209 09:59:58.985225 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9qhnx"] Dec 09 09:59:58 crc kubenswrapper[4786]: E1209 09:59:58.986414 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2351afc0-5e44-4794-93e6-613466be2524" containerName="registry-server" Dec 09 09:59:58 crc kubenswrapper[4786]: I1209 09:59:58.986453 4786 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="2351afc0-5e44-4794-93e6-613466be2524" containerName="registry-server" Dec 09 09:59:58 crc kubenswrapper[4786]: E1209 09:59:58.986474 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2351afc0-5e44-4794-93e6-613466be2524" containerName="extract-content" Dec 09 09:59:58 crc kubenswrapper[4786]: I1209 09:59:58.986482 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="2351afc0-5e44-4794-93e6-613466be2524" containerName="extract-content" Dec 09 09:59:58 crc kubenswrapper[4786]: E1209 09:59:58.986503 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f076a36a-59c6-4f95-90c4-7a04f4e39840" containerName="extract-utilities" Dec 09 09:59:58 crc kubenswrapper[4786]: I1209 09:59:58.986511 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="f076a36a-59c6-4f95-90c4-7a04f4e39840" containerName="extract-utilities" Dec 09 09:59:58 crc kubenswrapper[4786]: E1209 09:59:58.986524 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12e04c14-cfdb-4cbf-9113-2afd3bfe9035" containerName="extract-utilities" Dec 09 09:59:58 crc kubenswrapper[4786]: I1209 09:59:58.986532 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="12e04c14-cfdb-4cbf-9113-2afd3bfe9035" containerName="extract-utilities" Dec 09 09:59:58 crc kubenswrapper[4786]: E1209 09:59:58.986543 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2351afc0-5e44-4794-93e6-613466be2524" containerName="extract-utilities" Dec 09 09:59:58 crc kubenswrapper[4786]: I1209 09:59:58.986550 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="2351afc0-5e44-4794-93e6-613466be2524" containerName="extract-utilities" Dec 09 09:59:58 crc kubenswrapper[4786]: E1209 09:59:58.986564 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f076a36a-59c6-4f95-90c4-7a04f4e39840" containerName="extract-content" Dec 09 09:59:58 crc kubenswrapper[4786]: I1209 09:59:58.986572 4786 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f076a36a-59c6-4f95-90c4-7a04f4e39840" containerName="extract-content" Dec 09 09:59:58 crc kubenswrapper[4786]: E1209 09:59:58.986598 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f076a36a-59c6-4f95-90c4-7a04f4e39840" containerName="registry-server" Dec 09 09:59:58 crc kubenswrapper[4786]: I1209 09:59:58.986605 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="f076a36a-59c6-4f95-90c4-7a04f4e39840" containerName="registry-server" Dec 09 09:59:58 crc kubenswrapper[4786]: E1209 09:59:58.986622 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12e04c14-cfdb-4cbf-9113-2afd3bfe9035" containerName="extract-content" Dec 09 09:59:58 crc kubenswrapper[4786]: I1209 09:59:58.986629 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="12e04c14-cfdb-4cbf-9113-2afd3bfe9035" containerName="extract-content" Dec 09 09:59:58 crc kubenswrapper[4786]: E1209 09:59:58.986644 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12e04c14-cfdb-4cbf-9113-2afd3bfe9035" containerName="registry-server" Dec 09 09:59:58 crc kubenswrapper[4786]: I1209 09:59:58.986650 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="12e04c14-cfdb-4cbf-9113-2afd3bfe9035" containerName="registry-server" Dec 09 09:59:58 crc kubenswrapper[4786]: I1209 09:59:58.986836 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="f076a36a-59c6-4f95-90c4-7a04f4e39840" containerName="registry-server" Dec 09 09:59:58 crc kubenswrapper[4786]: I1209 09:59:58.986851 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="2351afc0-5e44-4794-93e6-613466be2524" containerName="registry-server" Dec 09 09:59:58 crc kubenswrapper[4786]: I1209 09:59:58.986874 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="12e04c14-cfdb-4cbf-9113-2afd3bfe9035" containerName="registry-server" Dec 09 09:59:58 crc kubenswrapper[4786]: I1209 09:59:58.988580 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9qhnx" Dec 09 09:59:59 crc kubenswrapper[4786]: I1209 09:59:59.000799 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9qhnx"] Dec 09 09:59:59 crc kubenswrapper[4786]: I1209 09:59:59.119646 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5g29\" (UniqueName: \"kubernetes.io/projected/95a056cc-d897-42e7-9f6c-4c55d6ef641e-kube-api-access-n5g29\") pod \"redhat-operators-9qhnx\" (UID: \"95a056cc-d897-42e7-9f6c-4c55d6ef641e\") " pod="openshift-marketplace/redhat-operators-9qhnx" Dec 09 09:59:59 crc kubenswrapper[4786]: I1209 09:59:59.119857 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95a056cc-d897-42e7-9f6c-4c55d6ef641e-catalog-content\") pod \"redhat-operators-9qhnx\" (UID: \"95a056cc-d897-42e7-9f6c-4c55d6ef641e\") " pod="openshift-marketplace/redhat-operators-9qhnx" Dec 09 09:59:59 crc kubenswrapper[4786]: I1209 09:59:59.119900 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95a056cc-d897-42e7-9f6c-4c55d6ef641e-utilities\") pod \"redhat-operators-9qhnx\" (UID: \"95a056cc-d897-42e7-9f6c-4c55d6ef641e\") " pod="openshift-marketplace/redhat-operators-9qhnx" Dec 09 09:59:59 crc kubenswrapper[4786]: I1209 09:59:59.224243 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5g29\" (UniqueName: \"kubernetes.io/projected/95a056cc-d897-42e7-9f6c-4c55d6ef641e-kube-api-access-n5g29\") pod \"redhat-operators-9qhnx\" (UID: \"95a056cc-d897-42e7-9f6c-4c55d6ef641e\") " pod="openshift-marketplace/redhat-operators-9qhnx" Dec 09 09:59:59 crc kubenswrapper[4786]: I1209 09:59:59.224545 4786 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95a056cc-d897-42e7-9f6c-4c55d6ef641e-catalog-content\") pod \"redhat-operators-9qhnx\" (UID: \"95a056cc-d897-42e7-9f6c-4c55d6ef641e\") " pod="openshift-marketplace/redhat-operators-9qhnx" Dec 09 09:59:59 crc kubenswrapper[4786]: I1209 09:59:59.224594 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95a056cc-d897-42e7-9f6c-4c55d6ef641e-utilities\") pod \"redhat-operators-9qhnx\" (UID: \"95a056cc-d897-42e7-9f6c-4c55d6ef641e\") " pod="openshift-marketplace/redhat-operators-9qhnx" Dec 09 09:59:59 crc kubenswrapper[4786]: I1209 09:59:59.225208 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95a056cc-d897-42e7-9f6c-4c55d6ef641e-utilities\") pod \"redhat-operators-9qhnx\" (UID: \"95a056cc-d897-42e7-9f6c-4c55d6ef641e\") " pod="openshift-marketplace/redhat-operators-9qhnx" Dec 09 09:59:59 crc kubenswrapper[4786]: I1209 09:59:59.225507 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95a056cc-d897-42e7-9f6c-4c55d6ef641e-catalog-content\") pod \"redhat-operators-9qhnx\" (UID: \"95a056cc-d897-42e7-9f6c-4c55d6ef641e\") " pod="openshift-marketplace/redhat-operators-9qhnx" Dec 09 09:59:59 crc kubenswrapper[4786]: I1209 09:59:59.251363 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5g29\" (UniqueName: \"kubernetes.io/projected/95a056cc-d897-42e7-9f6c-4c55d6ef641e-kube-api-access-n5g29\") pod \"redhat-operators-9qhnx\" (UID: \"95a056cc-d897-42e7-9f6c-4c55d6ef641e\") " pod="openshift-marketplace/redhat-operators-9qhnx" Dec 09 09:59:59 crc kubenswrapper[4786]: I1209 09:59:59.323954 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9qhnx" Dec 09 09:59:59 crc kubenswrapper[4786]: I1209 09:59:59.841652 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9qhnx"] Dec 09 10:00:00 crc kubenswrapper[4786]: I1209 10:00:00.190245 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421240-tzlqf"] Dec 09 10:00:00 crc kubenswrapper[4786]: I1209 10:00:00.192446 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421240-tzlqf" Dec 09 10:00:00 crc kubenswrapper[4786]: I1209 10:00:00.195732 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 09 10:00:00 crc kubenswrapper[4786]: I1209 10:00:00.196060 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 09 10:00:00 crc kubenswrapper[4786]: I1209 10:00:00.203042 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421240-tzlqf"] Dec 09 10:00:00 crc kubenswrapper[4786]: I1209 10:00:00.354008 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95fdj\" (UniqueName: \"kubernetes.io/projected/3ee5825e-2347-443a-b680-2108a09c663a-kube-api-access-95fdj\") pod \"collect-profiles-29421240-tzlqf\" (UID: \"3ee5825e-2347-443a-b680-2108a09c663a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421240-tzlqf" Dec 09 10:00:00 crc kubenswrapper[4786]: I1209 10:00:00.354526 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3ee5825e-2347-443a-b680-2108a09c663a-secret-volume\") pod 
\"collect-profiles-29421240-tzlqf\" (UID: \"3ee5825e-2347-443a-b680-2108a09c663a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421240-tzlqf" Dec 09 10:00:00 crc kubenswrapper[4786]: I1209 10:00:00.354558 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3ee5825e-2347-443a-b680-2108a09c663a-config-volume\") pod \"collect-profiles-29421240-tzlqf\" (UID: \"3ee5825e-2347-443a-b680-2108a09c663a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421240-tzlqf" Dec 09 10:00:00 crc kubenswrapper[4786]: I1209 10:00:00.456003 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3ee5825e-2347-443a-b680-2108a09c663a-secret-volume\") pod \"collect-profiles-29421240-tzlqf\" (UID: \"3ee5825e-2347-443a-b680-2108a09c663a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421240-tzlqf" Dec 09 10:00:00 crc kubenswrapper[4786]: I1209 10:00:00.456052 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3ee5825e-2347-443a-b680-2108a09c663a-config-volume\") pod \"collect-profiles-29421240-tzlqf\" (UID: \"3ee5825e-2347-443a-b680-2108a09c663a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421240-tzlqf" Dec 09 10:00:00 crc kubenswrapper[4786]: I1209 10:00:00.456186 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95fdj\" (UniqueName: \"kubernetes.io/projected/3ee5825e-2347-443a-b680-2108a09c663a-kube-api-access-95fdj\") pod \"collect-profiles-29421240-tzlqf\" (UID: \"3ee5825e-2347-443a-b680-2108a09c663a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421240-tzlqf" Dec 09 10:00:00 crc kubenswrapper[4786]: I1209 10:00:00.457173 4786 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3ee5825e-2347-443a-b680-2108a09c663a-config-volume\") pod \"collect-profiles-29421240-tzlqf\" (UID: \"3ee5825e-2347-443a-b680-2108a09c663a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421240-tzlqf" Dec 09 10:00:00 crc kubenswrapper[4786]: I1209 10:00:00.468844 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3ee5825e-2347-443a-b680-2108a09c663a-secret-volume\") pod \"collect-profiles-29421240-tzlqf\" (UID: \"3ee5825e-2347-443a-b680-2108a09c663a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421240-tzlqf" Dec 09 10:00:00 crc kubenswrapper[4786]: I1209 10:00:00.470755 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95fdj\" (UniqueName: \"kubernetes.io/projected/3ee5825e-2347-443a-b680-2108a09c663a-kube-api-access-95fdj\") pod \"collect-profiles-29421240-tzlqf\" (UID: \"3ee5825e-2347-443a-b680-2108a09c663a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421240-tzlqf" Dec 09 10:00:00 crc kubenswrapper[4786]: I1209 10:00:00.515991 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421240-tzlqf" Dec 09 10:00:00 crc kubenswrapper[4786]: I1209 10:00:00.704320 4786 generic.go:334] "Generic (PLEG): container finished" podID="95a056cc-d897-42e7-9f6c-4c55d6ef641e" containerID="9dc9bcecfb0e90a4b200ad8e8aeaebfb18d48f61ea58f9bfc88e91c49da6d77e" exitCode=0 Dec 09 10:00:00 crc kubenswrapper[4786]: I1209 10:00:00.704549 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9qhnx" event={"ID":"95a056cc-d897-42e7-9f6c-4c55d6ef641e","Type":"ContainerDied","Data":"9dc9bcecfb0e90a4b200ad8e8aeaebfb18d48f61ea58f9bfc88e91c49da6d77e"} Dec 09 10:00:00 crc kubenswrapper[4786]: I1209 10:00:00.704713 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9qhnx" event={"ID":"95a056cc-d897-42e7-9f6c-4c55d6ef641e","Type":"ContainerStarted","Data":"3f37583e36258559eba9b88bbff9f137f0a9bff84f355b83284af61657f71a4b"} Dec 09 10:00:00 crc kubenswrapper[4786]: I1209 10:00:00.953136 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421240-tzlqf"] Dec 09 10:00:00 crc kubenswrapper[4786]: W1209 10:00:00.955382 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ee5825e_2347_443a_b680_2108a09c663a.slice/crio-c4f64c1a72c84593334aa133dc969ba8f2db34310238c0675b1f27e5e4846beb WatchSource:0}: Error finding container c4f64c1a72c84593334aa133dc969ba8f2db34310238c0675b1f27e5e4846beb: Status 404 returned error can't find the container with id c4f64c1a72c84593334aa133dc969ba8f2db34310238c0675b1f27e5e4846beb Dec 09 10:00:01 crc kubenswrapper[4786]: I1209 10:00:01.715556 4786 generic.go:334] "Generic (PLEG): container finished" podID="3ee5825e-2347-443a-b680-2108a09c663a" containerID="0b4b8c8eae5c4ccf82680a098e86b7076acfe4342647282b6d3f6479f01aa2df" exitCode=0 Dec 09 10:00:01 
crc kubenswrapper[4786]: I1209 10:00:01.715651 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421240-tzlqf" event={"ID":"3ee5825e-2347-443a-b680-2108a09c663a","Type":"ContainerDied","Data":"0b4b8c8eae5c4ccf82680a098e86b7076acfe4342647282b6d3f6479f01aa2df"} Dec 09 10:00:01 crc kubenswrapper[4786]: I1209 10:00:01.716053 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421240-tzlqf" event={"ID":"3ee5825e-2347-443a-b680-2108a09c663a","Type":"ContainerStarted","Data":"c4f64c1a72c84593334aa133dc969ba8f2db34310238c0675b1f27e5e4846beb"} Dec 09 10:00:02 crc kubenswrapper[4786]: I1209 10:00:02.726708 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9qhnx" event={"ID":"95a056cc-d897-42e7-9f6c-4c55d6ef641e","Type":"ContainerStarted","Data":"bc8478e08d7ba90967820514c6642d05d5c1aab6acfb5e5aa65af306db86c607"} Dec 09 10:00:03 crc kubenswrapper[4786]: I1209 10:00:03.111321 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421240-tzlqf" Dec 09 10:00:03 crc kubenswrapper[4786]: I1209 10:00:03.221198 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3ee5825e-2347-443a-b680-2108a09c663a-config-volume\") pod \"3ee5825e-2347-443a-b680-2108a09c663a\" (UID: \"3ee5825e-2347-443a-b680-2108a09c663a\") " Dec 09 10:00:03 crc kubenswrapper[4786]: I1209 10:00:03.221343 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95fdj\" (UniqueName: \"kubernetes.io/projected/3ee5825e-2347-443a-b680-2108a09c663a-kube-api-access-95fdj\") pod \"3ee5825e-2347-443a-b680-2108a09c663a\" (UID: \"3ee5825e-2347-443a-b680-2108a09c663a\") " Dec 09 10:00:03 crc kubenswrapper[4786]: I1209 10:00:03.221390 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3ee5825e-2347-443a-b680-2108a09c663a-secret-volume\") pod \"3ee5825e-2347-443a-b680-2108a09c663a\" (UID: \"3ee5825e-2347-443a-b680-2108a09c663a\") " Dec 09 10:00:03 crc kubenswrapper[4786]: I1209 10:00:03.222143 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ee5825e-2347-443a-b680-2108a09c663a-config-volume" (OuterVolumeSpecName: "config-volume") pod "3ee5825e-2347-443a-b680-2108a09c663a" (UID: "3ee5825e-2347-443a-b680-2108a09c663a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 10:00:03 crc kubenswrapper[4786]: I1209 10:00:03.324136 4786 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3ee5825e-2347-443a-b680-2108a09c663a-config-volume\") on node \"crc\" DevicePath \"\"" Dec 09 10:00:03 crc kubenswrapper[4786]: I1209 10:00:03.739009 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421240-tzlqf" Dec 09 10:00:03 crc kubenswrapper[4786]: I1209 10:00:03.739016 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421240-tzlqf" event={"ID":"3ee5825e-2347-443a-b680-2108a09c663a","Type":"ContainerDied","Data":"c4f64c1a72c84593334aa133dc969ba8f2db34310238c0675b1f27e5e4846beb"} Dec 09 10:00:03 crc kubenswrapper[4786]: I1209 10:00:03.739083 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4f64c1a72c84593334aa133dc969ba8f2db34310238c0675b1f27e5e4846beb" Dec 09 10:00:03 crc kubenswrapper[4786]: I1209 10:00:03.887724 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ee5825e-2347-443a-b680-2108a09c663a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3ee5825e-2347-443a-b680-2108a09c663a" (UID: "3ee5825e-2347-443a-b680-2108a09c663a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 10:00:03 crc kubenswrapper[4786]: I1209 10:00:03.889187 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ee5825e-2347-443a-b680-2108a09c663a-kube-api-access-95fdj" (OuterVolumeSpecName: "kube-api-access-95fdj") pod "3ee5825e-2347-443a-b680-2108a09c663a" (UID: "3ee5825e-2347-443a-b680-2108a09c663a"). InnerVolumeSpecName "kube-api-access-95fdj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 10:00:03 crc kubenswrapper[4786]: I1209 10:00:03.941027 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95fdj\" (UniqueName: \"kubernetes.io/projected/3ee5825e-2347-443a-b680-2108a09c663a-kube-api-access-95fdj\") on node \"crc\" DevicePath \"\"" Dec 09 10:00:03 crc kubenswrapper[4786]: I1209 10:00:03.941083 4786 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3ee5825e-2347-443a-b680-2108a09c663a-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 09 10:00:04 crc kubenswrapper[4786]: I1209 10:00:04.191447 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421195-pjqqm"] Dec 09 10:00:04 crc kubenswrapper[4786]: I1209 10:00:04.202386 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421195-pjqqm"] Dec 09 10:00:05 crc kubenswrapper[4786]: I1209 10:00:05.201736 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af2591ab-76a1-4771-b492-51ecf18d10c2" path="/var/lib/kubelet/pods/af2591ab-76a1-4771-b492-51ecf18d10c2/volumes" Dec 09 10:00:05 crc kubenswrapper[4786]: I1209 10:00:05.758878 4786 generic.go:334] "Generic (PLEG): container finished" podID="95a056cc-d897-42e7-9f6c-4c55d6ef641e" containerID="bc8478e08d7ba90967820514c6642d05d5c1aab6acfb5e5aa65af306db86c607" exitCode=0 Dec 09 10:00:05 crc kubenswrapper[4786]: I1209 10:00:05.758947 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9qhnx" event={"ID":"95a056cc-d897-42e7-9f6c-4c55d6ef641e","Type":"ContainerDied","Data":"bc8478e08d7ba90967820514c6642d05d5c1aab6acfb5e5aa65af306db86c607"} Dec 09 10:00:05 crc kubenswrapper[4786]: I1209 10:00:05.852265 4786 scope.go:117] "RemoveContainer" containerID="906d507b4881a7780cae22e6a19ee0376383c551845c372f5b7b073852803214" Dec 09 
10:00:06 crc kubenswrapper[4786]: I1209 10:00:06.773518 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9qhnx" event={"ID":"95a056cc-d897-42e7-9f6c-4c55d6ef641e","Type":"ContainerStarted","Data":"b84306d36d6a298e9bfe99e2c10ed473d35d8c4fffb1118f2679b1bee28fe96a"} Dec 09 10:00:06 crc kubenswrapper[4786]: I1209 10:00:06.790623 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9qhnx" podStartSLOduration=3.285433654 podStartE2EDuration="8.790604479s" podCreationTimestamp="2025-12-09 09:59:58 +0000 UTC" firstStartedPulling="2025-12-09 10:00:00.707046233 +0000 UTC m=+4566.590667459" lastFinishedPulling="2025-12-09 10:00:06.212217058 +0000 UTC m=+4572.095838284" observedRunningTime="2025-12-09 10:00:06.789681068 +0000 UTC m=+4572.673302294" watchObservedRunningTime="2025-12-09 10:00:06.790604479 +0000 UTC m=+4572.674225705" Dec 09 10:00:08 crc kubenswrapper[4786]: I1209 10:00:08.188578 4786 scope.go:117] "RemoveContainer" containerID="a6c23a9a79b1152227ae7dcd8b4db9db02c9da83f2fdde0c87c8f42f26a6132c" Dec 09 10:00:08 crc kubenswrapper[4786]: E1209 10:00:08.190243 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:00:09 crc kubenswrapper[4786]: I1209 10:00:09.324135 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9qhnx" Dec 09 10:00:09 crc kubenswrapper[4786]: I1209 10:00:09.324197 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9qhnx" Dec 09 
10:00:10 crc kubenswrapper[4786]: I1209 10:00:10.369188 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9qhnx" podUID="95a056cc-d897-42e7-9f6c-4c55d6ef641e" containerName="registry-server" probeResult="failure" output=< Dec 09 10:00:10 crc kubenswrapper[4786]: timeout: failed to connect service ":50051" within 1s Dec 09 10:00:10 crc kubenswrapper[4786]: > Dec 09 10:00:19 crc kubenswrapper[4786]: I1209 10:00:19.385558 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9qhnx" Dec 09 10:00:19 crc kubenswrapper[4786]: I1209 10:00:19.449321 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9qhnx" Dec 09 10:00:19 crc kubenswrapper[4786]: I1209 10:00:19.631228 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9qhnx"] Dec 09 10:00:20 crc kubenswrapper[4786]: I1209 10:00:20.969232 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9qhnx" podUID="95a056cc-d897-42e7-9f6c-4c55d6ef641e" containerName="registry-server" containerID="cri-o://b84306d36d6a298e9bfe99e2c10ed473d35d8c4fffb1118f2679b1bee28fe96a" gracePeriod=2 Dec 09 10:00:21 crc kubenswrapper[4786]: I1209 10:00:21.511341 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9qhnx" Dec 09 10:00:21 crc kubenswrapper[4786]: I1209 10:00:21.581217 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95a056cc-d897-42e7-9f6c-4c55d6ef641e-catalog-content\") pod \"95a056cc-d897-42e7-9f6c-4c55d6ef641e\" (UID: \"95a056cc-d897-42e7-9f6c-4c55d6ef641e\") " Dec 09 10:00:21 crc kubenswrapper[4786]: I1209 10:00:21.581328 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95a056cc-d897-42e7-9f6c-4c55d6ef641e-utilities\") pod \"95a056cc-d897-42e7-9f6c-4c55d6ef641e\" (UID: \"95a056cc-d897-42e7-9f6c-4c55d6ef641e\") " Dec 09 10:00:21 crc kubenswrapper[4786]: I1209 10:00:21.581436 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5g29\" (UniqueName: \"kubernetes.io/projected/95a056cc-d897-42e7-9f6c-4c55d6ef641e-kube-api-access-n5g29\") pod \"95a056cc-d897-42e7-9f6c-4c55d6ef641e\" (UID: \"95a056cc-d897-42e7-9f6c-4c55d6ef641e\") " Dec 09 10:00:21 crc kubenswrapper[4786]: I1209 10:00:21.582016 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95a056cc-d897-42e7-9f6c-4c55d6ef641e-utilities" (OuterVolumeSpecName: "utilities") pod "95a056cc-d897-42e7-9f6c-4c55d6ef641e" (UID: "95a056cc-d897-42e7-9f6c-4c55d6ef641e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 10:00:21 crc kubenswrapper[4786]: I1209 10:00:21.596448 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95a056cc-d897-42e7-9f6c-4c55d6ef641e-kube-api-access-n5g29" (OuterVolumeSpecName: "kube-api-access-n5g29") pod "95a056cc-d897-42e7-9f6c-4c55d6ef641e" (UID: "95a056cc-d897-42e7-9f6c-4c55d6ef641e"). InnerVolumeSpecName "kube-api-access-n5g29". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 10:00:21 crc kubenswrapper[4786]: I1209 10:00:21.684111 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95a056cc-d897-42e7-9f6c-4c55d6ef641e-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 10:00:21 crc kubenswrapper[4786]: I1209 10:00:21.684149 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5g29\" (UniqueName: \"kubernetes.io/projected/95a056cc-d897-42e7-9f6c-4c55d6ef641e-kube-api-access-n5g29\") on node \"crc\" DevicePath \"\"" Dec 09 10:00:21 crc kubenswrapper[4786]: I1209 10:00:21.685887 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95a056cc-d897-42e7-9f6c-4c55d6ef641e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "95a056cc-d897-42e7-9f6c-4c55d6ef641e" (UID: "95a056cc-d897-42e7-9f6c-4c55d6ef641e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 10:00:21 crc kubenswrapper[4786]: I1209 10:00:21.785590 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95a056cc-d897-42e7-9f6c-4c55d6ef641e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 10:00:21 crc kubenswrapper[4786]: I1209 10:00:21.981313 4786 generic.go:334] "Generic (PLEG): container finished" podID="95a056cc-d897-42e7-9f6c-4c55d6ef641e" containerID="b84306d36d6a298e9bfe99e2c10ed473d35d8c4fffb1118f2679b1bee28fe96a" exitCode=0 Dec 09 10:00:21 crc kubenswrapper[4786]: I1209 10:00:21.981399 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9qhnx" Dec 09 10:00:21 crc kubenswrapper[4786]: I1209 10:00:21.981478 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9qhnx" event={"ID":"95a056cc-d897-42e7-9f6c-4c55d6ef641e","Type":"ContainerDied","Data":"b84306d36d6a298e9bfe99e2c10ed473d35d8c4fffb1118f2679b1bee28fe96a"} Dec 09 10:00:21 crc kubenswrapper[4786]: I1209 10:00:21.982500 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9qhnx" event={"ID":"95a056cc-d897-42e7-9f6c-4c55d6ef641e","Type":"ContainerDied","Data":"3f37583e36258559eba9b88bbff9f137f0a9bff84f355b83284af61657f71a4b"} Dec 09 10:00:21 crc kubenswrapper[4786]: I1209 10:00:21.982536 4786 scope.go:117] "RemoveContainer" containerID="b84306d36d6a298e9bfe99e2c10ed473d35d8c4fffb1118f2679b1bee28fe96a" Dec 09 10:00:22 crc kubenswrapper[4786]: I1209 10:00:22.010889 4786 scope.go:117] "RemoveContainer" containerID="bc8478e08d7ba90967820514c6642d05d5c1aab6acfb5e5aa65af306db86c607" Dec 09 10:00:22 crc kubenswrapper[4786]: I1209 10:00:22.026615 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9qhnx"] Dec 09 10:00:22 crc kubenswrapper[4786]: I1209 10:00:22.034287 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9qhnx"] Dec 09 10:00:22 crc kubenswrapper[4786]: I1209 10:00:22.041345 4786 scope.go:117] "RemoveContainer" containerID="9dc9bcecfb0e90a4b200ad8e8aeaebfb18d48f61ea58f9bfc88e91c49da6d77e" Dec 09 10:00:22 crc kubenswrapper[4786]: I1209 10:00:22.086535 4786 scope.go:117] "RemoveContainer" containerID="b84306d36d6a298e9bfe99e2c10ed473d35d8c4fffb1118f2679b1bee28fe96a" Dec 09 10:00:22 crc kubenswrapper[4786]: E1209 10:00:22.086899 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b84306d36d6a298e9bfe99e2c10ed473d35d8c4fffb1118f2679b1bee28fe96a\": container with ID starting with b84306d36d6a298e9bfe99e2c10ed473d35d8c4fffb1118f2679b1bee28fe96a not found: ID does not exist" containerID="b84306d36d6a298e9bfe99e2c10ed473d35d8c4fffb1118f2679b1bee28fe96a" Dec 09 10:00:22 crc kubenswrapper[4786]: I1209 10:00:22.086931 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b84306d36d6a298e9bfe99e2c10ed473d35d8c4fffb1118f2679b1bee28fe96a"} err="failed to get container status \"b84306d36d6a298e9bfe99e2c10ed473d35d8c4fffb1118f2679b1bee28fe96a\": rpc error: code = NotFound desc = could not find container \"b84306d36d6a298e9bfe99e2c10ed473d35d8c4fffb1118f2679b1bee28fe96a\": container with ID starting with b84306d36d6a298e9bfe99e2c10ed473d35d8c4fffb1118f2679b1bee28fe96a not found: ID does not exist" Dec 09 10:00:22 crc kubenswrapper[4786]: I1209 10:00:22.086950 4786 scope.go:117] "RemoveContainer" containerID="bc8478e08d7ba90967820514c6642d05d5c1aab6acfb5e5aa65af306db86c607" Dec 09 10:00:22 crc kubenswrapper[4786]: E1209 10:00:22.087164 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc8478e08d7ba90967820514c6642d05d5c1aab6acfb5e5aa65af306db86c607\": container with ID starting with bc8478e08d7ba90967820514c6642d05d5c1aab6acfb5e5aa65af306db86c607 not found: ID does not exist" containerID="bc8478e08d7ba90967820514c6642d05d5c1aab6acfb5e5aa65af306db86c607" Dec 09 10:00:22 crc kubenswrapper[4786]: I1209 10:00:22.087181 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc8478e08d7ba90967820514c6642d05d5c1aab6acfb5e5aa65af306db86c607"} err="failed to get container status \"bc8478e08d7ba90967820514c6642d05d5c1aab6acfb5e5aa65af306db86c607\": rpc error: code = NotFound desc = could not find container \"bc8478e08d7ba90967820514c6642d05d5c1aab6acfb5e5aa65af306db86c607\": container with ID 
starting with bc8478e08d7ba90967820514c6642d05d5c1aab6acfb5e5aa65af306db86c607 not found: ID does not exist" Dec 09 10:00:22 crc kubenswrapper[4786]: I1209 10:00:22.087193 4786 scope.go:117] "RemoveContainer" containerID="9dc9bcecfb0e90a4b200ad8e8aeaebfb18d48f61ea58f9bfc88e91c49da6d77e" Dec 09 10:00:22 crc kubenswrapper[4786]: E1209 10:00:22.087534 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9dc9bcecfb0e90a4b200ad8e8aeaebfb18d48f61ea58f9bfc88e91c49da6d77e\": container with ID starting with 9dc9bcecfb0e90a4b200ad8e8aeaebfb18d48f61ea58f9bfc88e91c49da6d77e not found: ID does not exist" containerID="9dc9bcecfb0e90a4b200ad8e8aeaebfb18d48f61ea58f9bfc88e91c49da6d77e" Dec 09 10:00:22 crc kubenswrapper[4786]: I1209 10:00:22.087553 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9dc9bcecfb0e90a4b200ad8e8aeaebfb18d48f61ea58f9bfc88e91c49da6d77e"} err="failed to get container status \"9dc9bcecfb0e90a4b200ad8e8aeaebfb18d48f61ea58f9bfc88e91c49da6d77e\": rpc error: code = NotFound desc = could not find container \"9dc9bcecfb0e90a4b200ad8e8aeaebfb18d48f61ea58f9bfc88e91c49da6d77e\": container with ID starting with 9dc9bcecfb0e90a4b200ad8e8aeaebfb18d48f61ea58f9bfc88e91c49da6d77e not found: ID does not exist" Dec 09 10:00:23 crc kubenswrapper[4786]: I1209 10:00:23.188938 4786 scope.go:117] "RemoveContainer" containerID="a6c23a9a79b1152227ae7dcd8b4db9db02c9da83f2fdde0c87c8f42f26a6132c" Dec 09 10:00:23 crc kubenswrapper[4786]: E1209 10:00:23.189849 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" 
podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:00:23 crc kubenswrapper[4786]: I1209 10:00:23.201000 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95a056cc-d897-42e7-9f6c-4c55d6ef641e" path="/var/lib/kubelet/pods/95a056cc-d897-42e7-9f6c-4c55d6ef641e/volumes" Dec 09 10:00:38 crc kubenswrapper[4786]: I1209 10:00:38.189096 4786 scope.go:117] "RemoveContainer" containerID="a6c23a9a79b1152227ae7dcd8b4db9db02c9da83f2fdde0c87c8f42f26a6132c" Dec 09 10:00:38 crc kubenswrapper[4786]: E1209 10:00:38.189949 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:00:51 crc kubenswrapper[4786]: I1209 10:00:51.188725 4786 scope.go:117] "RemoveContainer" containerID="a6c23a9a79b1152227ae7dcd8b4db9db02c9da83f2fdde0c87c8f42f26a6132c" Dec 09 10:00:51 crc kubenswrapper[4786]: E1209 10:00:51.189849 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:01:00 crc kubenswrapper[4786]: I1209 10:01:00.152072 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29421241-g22g6"] Dec 09 10:01:00 crc kubenswrapper[4786]: E1209 10:01:00.153647 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95a056cc-d897-42e7-9f6c-4c55d6ef641e" 
containerName="extract-content" Dec 09 10:01:00 crc kubenswrapper[4786]: I1209 10:01:00.153673 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="95a056cc-d897-42e7-9f6c-4c55d6ef641e" containerName="extract-content" Dec 09 10:01:00 crc kubenswrapper[4786]: E1209 10:01:00.153702 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95a056cc-d897-42e7-9f6c-4c55d6ef641e" containerName="extract-utilities" Dec 09 10:01:00 crc kubenswrapper[4786]: I1209 10:01:00.153710 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="95a056cc-d897-42e7-9f6c-4c55d6ef641e" containerName="extract-utilities" Dec 09 10:01:00 crc kubenswrapper[4786]: E1209 10:01:00.153736 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95a056cc-d897-42e7-9f6c-4c55d6ef641e" containerName="registry-server" Dec 09 10:01:00 crc kubenswrapper[4786]: I1209 10:01:00.153745 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="95a056cc-d897-42e7-9f6c-4c55d6ef641e" containerName="registry-server" Dec 09 10:01:00 crc kubenswrapper[4786]: E1209 10:01:00.153774 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ee5825e-2347-443a-b680-2108a09c663a" containerName="collect-profiles" Dec 09 10:01:00 crc kubenswrapper[4786]: I1209 10:01:00.153781 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ee5825e-2347-443a-b680-2108a09c663a" containerName="collect-profiles" Dec 09 10:01:00 crc kubenswrapper[4786]: I1209 10:01:00.154028 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="95a056cc-d897-42e7-9f6c-4c55d6ef641e" containerName="registry-server" Dec 09 10:01:00 crc kubenswrapper[4786]: I1209 10:01:00.154228 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ee5825e-2347-443a-b680-2108a09c663a" containerName="collect-profiles" Dec 09 10:01:00 crc kubenswrapper[4786]: I1209 10:01:00.155529 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29421241-g22g6" Dec 09 10:01:00 crc kubenswrapper[4786]: I1209 10:01:00.171065 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29421241-g22g6"] Dec 09 10:01:00 crc kubenswrapper[4786]: I1209 10:01:00.189103 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16354109-b784-40f5-b196-1f1972f99264-combined-ca-bundle\") pod \"keystone-cron-29421241-g22g6\" (UID: \"16354109-b784-40f5-b196-1f1972f99264\") " pod="openstack/keystone-cron-29421241-g22g6" Dec 09 10:01:00 crc kubenswrapper[4786]: I1209 10:01:00.189172 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16354109-b784-40f5-b196-1f1972f99264-config-data\") pod \"keystone-cron-29421241-g22g6\" (UID: \"16354109-b784-40f5-b196-1f1972f99264\") " pod="openstack/keystone-cron-29421241-g22g6" Dec 09 10:01:00 crc kubenswrapper[4786]: I1209 10:01:00.189331 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvgnw\" (UniqueName: \"kubernetes.io/projected/16354109-b784-40f5-b196-1f1972f99264-kube-api-access-cvgnw\") pod \"keystone-cron-29421241-g22g6\" (UID: \"16354109-b784-40f5-b196-1f1972f99264\") " pod="openstack/keystone-cron-29421241-g22g6" Dec 09 10:01:00 crc kubenswrapper[4786]: I1209 10:01:00.189409 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/16354109-b784-40f5-b196-1f1972f99264-fernet-keys\") pod \"keystone-cron-29421241-g22g6\" (UID: \"16354109-b784-40f5-b196-1f1972f99264\") " pod="openstack/keystone-cron-29421241-g22g6" Dec 09 10:01:00 crc kubenswrapper[4786]: I1209 10:01:00.290803 4786 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16354109-b784-40f5-b196-1f1972f99264-combined-ca-bundle\") pod \"keystone-cron-29421241-g22g6\" (UID: \"16354109-b784-40f5-b196-1f1972f99264\") " pod="openstack/keystone-cron-29421241-g22g6" Dec 09 10:01:00 crc kubenswrapper[4786]: I1209 10:01:00.291056 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16354109-b784-40f5-b196-1f1972f99264-config-data\") pod \"keystone-cron-29421241-g22g6\" (UID: \"16354109-b784-40f5-b196-1f1972f99264\") " pod="openstack/keystone-cron-29421241-g22g6" Dec 09 10:01:00 crc kubenswrapper[4786]: I1209 10:01:00.291820 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvgnw\" (UniqueName: \"kubernetes.io/projected/16354109-b784-40f5-b196-1f1972f99264-kube-api-access-cvgnw\") pod \"keystone-cron-29421241-g22g6\" (UID: \"16354109-b784-40f5-b196-1f1972f99264\") " pod="openstack/keystone-cron-29421241-g22g6" Dec 09 10:01:00 crc kubenswrapper[4786]: I1209 10:01:00.292103 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/16354109-b784-40f5-b196-1f1972f99264-fernet-keys\") pod \"keystone-cron-29421241-g22g6\" (UID: \"16354109-b784-40f5-b196-1f1972f99264\") " pod="openstack/keystone-cron-29421241-g22g6" Dec 09 10:01:00 crc kubenswrapper[4786]: I1209 10:01:00.303725 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16354109-b784-40f5-b196-1f1972f99264-combined-ca-bundle\") pod \"keystone-cron-29421241-g22g6\" (UID: \"16354109-b784-40f5-b196-1f1972f99264\") " pod="openstack/keystone-cron-29421241-g22g6" Dec 09 10:01:00 crc kubenswrapper[4786]: I1209 10:01:00.303909 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/16354109-b784-40f5-b196-1f1972f99264-fernet-keys\") pod \"keystone-cron-29421241-g22g6\" (UID: \"16354109-b784-40f5-b196-1f1972f99264\") " pod="openstack/keystone-cron-29421241-g22g6" Dec 09 10:01:00 crc kubenswrapper[4786]: I1209 10:01:00.305578 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16354109-b784-40f5-b196-1f1972f99264-config-data\") pod \"keystone-cron-29421241-g22g6\" (UID: \"16354109-b784-40f5-b196-1f1972f99264\") " pod="openstack/keystone-cron-29421241-g22g6" Dec 09 10:01:00 crc kubenswrapper[4786]: I1209 10:01:00.314669 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvgnw\" (UniqueName: \"kubernetes.io/projected/16354109-b784-40f5-b196-1f1972f99264-kube-api-access-cvgnw\") pod \"keystone-cron-29421241-g22g6\" (UID: \"16354109-b784-40f5-b196-1f1972f99264\") " pod="openstack/keystone-cron-29421241-g22g6" Dec 09 10:01:00 crc kubenswrapper[4786]: I1209 10:01:00.477255 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29421241-g22g6" Dec 09 10:01:00 crc kubenswrapper[4786]: I1209 10:01:00.962939 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29421241-g22g6"] Dec 09 10:01:02 crc kubenswrapper[4786]: I1209 10:01:02.421221 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29421241-g22g6" event={"ID":"16354109-b784-40f5-b196-1f1972f99264","Type":"ContainerStarted","Data":"59a8842bd0231443dc0da97918ae2e84634de5699d6938eacf697adbd9eec5d1"} Dec 09 10:01:02 crc kubenswrapper[4786]: I1209 10:01:02.421864 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29421241-g22g6" event={"ID":"16354109-b784-40f5-b196-1f1972f99264","Type":"ContainerStarted","Data":"b324208d8141109078f059e3d8896bd08d8f7943a5c15f5cdb94514a8157fcb3"} Dec 09 10:01:02 crc kubenswrapper[4786]: I1209 10:01:02.442412 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29421241-g22g6" podStartSLOduration=2.442390033 podStartE2EDuration="2.442390033s" podCreationTimestamp="2025-12-09 10:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 10:01:02.439717869 +0000 UTC m=+4628.323339105" watchObservedRunningTime="2025-12-09 10:01:02.442390033 +0000 UTC m=+4628.326011259" Dec 09 10:01:04 crc kubenswrapper[4786]: I1209 10:01:04.189270 4786 scope.go:117] "RemoveContainer" containerID="a6c23a9a79b1152227ae7dcd8b4db9db02c9da83f2fdde0c87c8f42f26a6132c" Dec 09 10:01:04 crc kubenswrapper[4786]: E1209 10:01:04.190598 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:01:05 crc kubenswrapper[4786]: I1209 10:01:05.450611 4786 generic.go:334] "Generic (PLEG): container finished" podID="16354109-b784-40f5-b196-1f1972f99264" containerID="59a8842bd0231443dc0da97918ae2e84634de5699d6938eacf697adbd9eec5d1" exitCode=0 Dec 09 10:01:05 crc kubenswrapper[4786]: I1209 10:01:05.450665 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29421241-g22g6" event={"ID":"16354109-b784-40f5-b196-1f1972f99264","Type":"ContainerDied","Data":"59a8842bd0231443dc0da97918ae2e84634de5699d6938eacf697adbd9eec5d1"} Dec 09 10:01:06 crc kubenswrapper[4786]: I1209 10:01:06.847523 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29421241-g22g6" Dec 09 10:01:06 crc kubenswrapper[4786]: I1209 10:01:06.952754 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/16354109-b784-40f5-b196-1f1972f99264-fernet-keys\") pod \"16354109-b784-40f5-b196-1f1972f99264\" (UID: \"16354109-b784-40f5-b196-1f1972f99264\") " Dec 09 10:01:06 crc kubenswrapper[4786]: I1209 10:01:06.952910 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16354109-b784-40f5-b196-1f1972f99264-combined-ca-bundle\") pod \"16354109-b784-40f5-b196-1f1972f99264\" (UID: \"16354109-b784-40f5-b196-1f1972f99264\") " Dec 09 10:01:06 crc kubenswrapper[4786]: I1209 10:01:06.953009 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16354109-b784-40f5-b196-1f1972f99264-config-data\") pod \"16354109-b784-40f5-b196-1f1972f99264\" (UID: \"16354109-b784-40f5-b196-1f1972f99264\") " Dec 09 10:01:06 crc kubenswrapper[4786]: I1209 10:01:06.953128 4786 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvgnw\" (UniqueName: \"kubernetes.io/projected/16354109-b784-40f5-b196-1f1972f99264-kube-api-access-cvgnw\") pod \"16354109-b784-40f5-b196-1f1972f99264\" (UID: \"16354109-b784-40f5-b196-1f1972f99264\") " Dec 09 10:01:06 crc kubenswrapper[4786]: I1209 10:01:06.961885 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16354109-b784-40f5-b196-1f1972f99264-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "16354109-b784-40f5-b196-1f1972f99264" (UID: "16354109-b784-40f5-b196-1f1972f99264"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 10:01:06 crc kubenswrapper[4786]: I1209 10:01:06.963608 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16354109-b784-40f5-b196-1f1972f99264-kube-api-access-cvgnw" (OuterVolumeSpecName: "kube-api-access-cvgnw") pod "16354109-b784-40f5-b196-1f1972f99264" (UID: "16354109-b784-40f5-b196-1f1972f99264"). InnerVolumeSpecName "kube-api-access-cvgnw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 10:01:06 crc kubenswrapper[4786]: I1209 10:01:06.987084 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16354109-b784-40f5-b196-1f1972f99264-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "16354109-b784-40f5-b196-1f1972f99264" (UID: "16354109-b784-40f5-b196-1f1972f99264"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 10:01:07 crc kubenswrapper[4786]: I1209 10:01:07.030304 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16354109-b784-40f5-b196-1f1972f99264-config-data" (OuterVolumeSpecName: "config-data") pod "16354109-b784-40f5-b196-1f1972f99264" (UID: "16354109-b784-40f5-b196-1f1972f99264"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 10:01:07 crc kubenswrapper[4786]: I1209 10:01:07.056252 4786 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/16354109-b784-40f5-b196-1f1972f99264-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 09 10:01:07 crc kubenswrapper[4786]: I1209 10:01:07.056284 4786 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16354109-b784-40f5-b196-1f1972f99264-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 09 10:01:07 crc kubenswrapper[4786]: I1209 10:01:07.056295 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/16354109-b784-40f5-b196-1f1972f99264-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 10:01:07 crc kubenswrapper[4786]: I1209 10:01:07.056303 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvgnw\" (UniqueName: \"kubernetes.io/projected/16354109-b784-40f5-b196-1f1972f99264-kube-api-access-cvgnw\") on node \"crc\" DevicePath \"\"" Dec 09 10:01:07 crc kubenswrapper[4786]: I1209 10:01:07.475392 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29421241-g22g6" event={"ID":"16354109-b784-40f5-b196-1f1972f99264","Type":"ContainerDied","Data":"b324208d8141109078f059e3d8896bd08d8f7943a5c15f5cdb94514a8157fcb3"} Dec 09 10:01:07 crc kubenswrapper[4786]: I1209 10:01:07.475474 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b324208d8141109078f059e3d8896bd08d8f7943a5c15f5cdb94514a8157fcb3" Dec 09 10:01:07 crc kubenswrapper[4786]: I1209 10:01:07.475875 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29421241-g22g6" Dec 09 10:01:17 crc kubenswrapper[4786]: I1209 10:01:17.189070 4786 scope.go:117] "RemoveContainer" containerID="a6c23a9a79b1152227ae7dcd8b4db9db02c9da83f2fdde0c87c8f42f26a6132c" Dec 09 10:01:17 crc kubenswrapper[4786]: E1209 10:01:17.189867 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:01:31 crc kubenswrapper[4786]: I1209 10:01:31.188232 4786 scope.go:117] "RemoveContainer" containerID="a6c23a9a79b1152227ae7dcd8b4db9db02c9da83f2fdde0c87c8f42f26a6132c" Dec 09 10:01:31 crc kubenswrapper[4786]: E1209 10:01:31.189017 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:01:43 crc kubenswrapper[4786]: I1209 10:01:43.554984 4786 scope.go:117] "RemoveContainer" containerID="a6c23a9a79b1152227ae7dcd8b4db9db02c9da83f2fdde0c87c8f42f26a6132c" Dec 09 10:01:43 crc kubenswrapper[4786]: E1209 10:01:43.555938 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:01:55 crc kubenswrapper[4786]: I1209 10:01:55.194889 4786 scope.go:117] "RemoveContainer" containerID="a6c23a9a79b1152227ae7dcd8b4db9db02c9da83f2fdde0c87c8f42f26a6132c" Dec 09 10:01:55 crc kubenswrapper[4786]: E1209 10:01:55.195758 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:02:09 crc kubenswrapper[4786]: I1209 10:02:09.188075 4786 scope.go:117] "RemoveContainer" containerID="a6c23a9a79b1152227ae7dcd8b4db9db02c9da83f2fdde0c87c8f42f26a6132c" Dec 09 10:02:09 crc kubenswrapper[4786]: E1209 10:02:09.188858 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:02:23 crc kubenswrapper[4786]: I1209 10:02:23.189696 4786 scope.go:117] "RemoveContainer" containerID="a6c23a9a79b1152227ae7dcd8b4db9db02c9da83f2fdde0c87c8f42f26a6132c" Dec 09 10:02:23 crc kubenswrapper[4786]: E1209 10:02:23.191087 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:02:34 crc kubenswrapper[4786]: I1209 10:02:34.188005 4786 scope.go:117] "RemoveContainer" containerID="a6c23a9a79b1152227ae7dcd8b4db9db02c9da83f2fdde0c87c8f42f26a6132c" Dec 09 10:02:34 crc kubenswrapper[4786]: E1209 10:02:34.188874 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:02:49 crc kubenswrapper[4786]: I1209 10:02:49.188938 4786 scope.go:117] "RemoveContainer" containerID="a6c23a9a79b1152227ae7dcd8b4db9db02c9da83f2fdde0c87c8f42f26a6132c" Dec 09 10:02:49 crc kubenswrapper[4786]: E1209 10:02:49.189865 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:03:00 crc kubenswrapper[4786]: I1209 10:03:00.188223 4786 scope.go:117] "RemoveContainer" containerID="a6c23a9a79b1152227ae7dcd8b4db9db02c9da83f2fdde0c87c8f42f26a6132c" Dec 09 10:03:00 crc kubenswrapper[4786]: E1209 10:03:00.189145 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:03:15 crc kubenswrapper[4786]: I1209 10:03:15.196136 4786 scope.go:117] "RemoveContainer" containerID="a6c23a9a79b1152227ae7dcd8b4db9db02c9da83f2fdde0c87c8f42f26a6132c" Dec 09 10:03:15 crc kubenswrapper[4786]: E1209 10:03:15.197292 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:03:29 crc kubenswrapper[4786]: I1209 10:03:29.188579 4786 scope.go:117] "RemoveContainer" containerID="a6c23a9a79b1152227ae7dcd8b4db9db02c9da83f2fdde0c87c8f42f26a6132c" Dec 09 10:03:29 crc kubenswrapper[4786]: E1209 10:03:29.189708 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:03:40 crc kubenswrapper[4786]: I1209 10:03:40.188341 4786 scope.go:117] "RemoveContainer" containerID="a6c23a9a79b1152227ae7dcd8b4db9db02c9da83f2fdde0c87c8f42f26a6132c" Dec 09 10:03:40 crc kubenswrapper[4786]: E1209 10:03:40.189393 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:03:51 crc kubenswrapper[4786]: I1209 10:03:51.188407 4786 scope.go:117] "RemoveContainer" containerID="a6c23a9a79b1152227ae7dcd8b4db9db02c9da83f2fdde0c87c8f42f26a6132c" Dec 09 10:03:51 crc kubenswrapper[4786]: E1209 10:03:51.189485 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:04:02 crc kubenswrapper[4786]: I1209 10:04:02.189291 4786 scope.go:117] "RemoveContainer" containerID="a6c23a9a79b1152227ae7dcd8b4db9db02c9da83f2fdde0c87c8f42f26a6132c" Dec 09 10:04:02 crc kubenswrapper[4786]: E1209 10:04:02.190587 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:04:16 crc kubenswrapper[4786]: I1209 10:04:16.188666 4786 scope.go:117] "RemoveContainer" containerID="a6c23a9a79b1152227ae7dcd8b4db9db02c9da83f2fdde0c87c8f42f26a6132c" Dec 09 10:04:16 crc kubenswrapper[4786]: E1209 10:04:16.189682 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:04:27 crc kubenswrapper[4786]: I1209 10:04:27.189151 4786 scope.go:117] "RemoveContainer" containerID="a6c23a9a79b1152227ae7dcd8b4db9db02c9da83f2fdde0c87c8f42f26a6132c" Dec 09 10:04:27 crc kubenswrapper[4786]: E1209 10:04:27.189929 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:04:39 crc kubenswrapper[4786]: I1209 10:04:39.188740 4786 scope.go:117] "RemoveContainer" containerID="a6c23a9a79b1152227ae7dcd8b4db9db02c9da83f2fdde0c87c8f42f26a6132c" Dec 09 10:04:39 crc kubenswrapper[4786]: E1209 10:04:39.189747 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:04:52 crc kubenswrapper[4786]: I1209 10:04:52.188144 4786 scope.go:117] "RemoveContainer" containerID="a6c23a9a79b1152227ae7dcd8b4db9db02c9da83f2fdde0c87c8f42f26a6132c" Dec 09 10:04:52 crc kubenswrapper[4786]: E1209 10:04:52.190687 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:05:07 crc kubenswrapper[4786]: I1209 10:05:07.188845 4786 scope.go:117] "RemoveContainer" containerID="a6c23a9a79b1152227ae7dcd8b4db9db02c9da83f2fdde0c87c8f42f26a6132c" Dec 09 10:05:08 crc kubenswrapper[4786]: I1209 10:05:08.393719 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" event={"ID":"60c502d4-7f9e-4d39-a197-fa70dc4a56d1","Type":"ContainerStarted","Data":"870f681940df81ca036d7205c4724f03002e04f58bad8754206c5f32463b67b9"} Dec 09 10:07:04 crc kubenswrapper[4786]: I1209 10:07:04.028038 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6dhwz"] Dec 09 10:07:04 crc kubenswrapper[4786]: E1209 10:07:04.029284 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16354109-b784-40f5-b196-1f1972f99264" containerName="keystone-cron" Dec 09 10:07:04 crc kubenswrapper[4786]: I1209 10:07:04.029302 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="16354109-b784-40f5-b196-1f1972f99264" containerName="keystone-cron" Dec 09 10:07:04 crc kubenswrapper[4786]: I1209 10:07:04.029607 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="16354109-b784-40f5-b196-1f1972f99264" containerName="keystone-cron" Dec 09 10:07:04 crc kubenswrapper[4786]: I1209 10:07:04.031378 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6dhwz" Dec 09 10:07:04 crc kubenswrapper[4786]: I1209 10:07:04.046706 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6dhwz"] Dec 09 10:07:04 crc kubenswrapper[4786]: I1209 10:07:04.145169 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67f7886f-a7b6-4f8d-830b-91194c88490b-utilities\") pod \"redhat-marketplace-6dhwz\" (UID: \"67f7886f-a7b6-4f8d-830b-91194c88490b\") " pod="openshift-marketplace/redhat-marketplace-6dhwz" Dec 09 10:07:04 crc kubenswrapper[4786]: I1209 10:07:04.145469 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zqtz\" (UniqueName: \"kubernetes.io/projected/67f7886f-a7b6-4f8d-830b-91194c88490b-kube-api-access-5zqtz\") pod \"redhat-marketplace-6dhwz\" (UID: \"67f7886f-a7b6-4f8d-830b-91194c88490b\") " pod="openshift-marketplace/redhat-marketplace-6dhwz" Dec 09 10:07:04 crc kubenswrapper[4786]: I1209 10:07:04.145801 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67f7886f-a7b6-4f8d-830b-91194c88490b-catalog-content\") pod \"redhat-marketplace-6dhwz\" (UID: \"67f7886f-a7b6-4f8d-830b-91194c88490b\") " pod="openshift-marketplace/redhat-marketplace-6dhwz" Dec 09 10:07:04 crc kubenswrapper[4786]: I1209 10:07:04.247475 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67f7886f-a7b6-4f8d-830b-91194c88490b-catalog-content\") pod \"redhat-marketplace-6dhwz\" (UID: \"67f7886f-a7b6-4f8d-830b-91194c88490b\") " pod="openshift-marketplace/redhat-marketplace-6dhwz" Dec 09 10:07:04 crc kubenswrapper[4786]: I1209 10:07:04.247879 4786 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67f7886f-a7b6-4f8d-830b-91194c88490b-utilities\") pod \"redhat-marketplace-6dhwz\" (UID: \"67f7886f-a7b6-4f8d-830b-91194c88490b\") " pod="openshift-marketplace/redhat-marketplace-6dhwz" Dec 09 10:07:04 crc kubenswrapper[4786]: I1209 10:07:04.248058 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zqtz\" (UniqueName: \"kubernetes.io/projected/67f7886f-a7b6-4f8d-830b-91194c88490b-kube-api-access-5zqtz\") pod \"redhat-marketplace-6dhwz\" (UID: \"67f7886f-a7b6-4f8d-830b-91194c88490b\") " pod="openshift-marketplace/redhat-marketplace-6dhwz" Dec 09 10:07:04 crc kubenswrapper[4786]: I1209 10:07:04.248090 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67f7886f-a7b6-4f8d-830b-91194c88490b-catalog-content\") pod \"redhat-marketplace-6dhwz\" (UID: \"67f7886f-a7b6-4f8d-830b-91194c88490b\") " pod="openshift-marketplace/redhat-marketplace-6dhwz" Dec 09 10:07:04 crc kubenswrapper[4786]: I1209 10:07:04.248495 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67f7886f-a7b6-4f8d-830b-91194c88490b-utilities\") pod \"redhat-marketplace-6dhwz\" (UID: \"67f7886f-a7b6-4f8d-830b-91194c88490b\") " pod="openshift-marketplace/redhat-marketplace-6dhwz" Dec 09 10:07:04 crc kubenswrapper[4786]: I1209 10:07:04.270458 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zqtz\" (UniqueName: \"kubernetes.io/projected/67f7886f-a7b6-4f8d-830b-91194c88490b-kube-api-access-5zqtz\") pod \"redhat-marketplace-6dhwz\" (UID: \"67f7886f-a7b6-4f8d-830b-91194c88490b\") " pod="openshift-marketplace/redhat-marketplace-6dhwz" Dec 09 10:07:04 crc kubenswrapper[4786]: I1209 10:07:04.354978 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6dhwz" Dec 09 10:07:04 crc kubenswrapper[4786]: I1209 10:07:04.883114 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6dhwz"] Dec 09 10:07:05 crc kubenswrapper[4786]: I1209 10:07:05.605503 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dhwz" event={"ID":"67f7886f-a7b6-4f8d-830b-91194c88490b","Type":"ContainerStarted","Data":"079ddde7fb8bf76352735f2c1b043208ac1c58976e037c417cc66c2019b1499a"} Dec 09 10:07:05 crc kubenswrapper[4786]: I1209 10:07:05.605812 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dhwz" event={"ID":"67f7886f-a7b6-4f8d-830b-91194c88490b","Type":"ContainerStarted","Data":"6c597916448fe2c28d537275ca4114d7d2848489898faea1188477336072b89c"} Dec 09 10:07:06 crc kubenswrapper[4786]: I1209 10:07:06.618062 4786 generic.go:334] "Generic (PLEG): container finished" podID="67f7886f-a7b6-4f8d-830b-91194c88490b" containerID="079ddde7fb8bf76352735f2c1b043208ac1c58976e037c417cc66c2019b1499a" exitCode=0 Dec 09 10:07:06 crc kubenswrapper[4786]: I1209 10:07:06.618117 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dhwz" event={"ID":"67f7886f-a7b6-4f8d-830b-91194c88490b","Type":"ContainerDied","Data":"079ddde7fb8bf76352735f2c1b043208ac1c58976e037c417cc66c2019b1499a"} Dec 09 10:07:06 crc kubenswrapper[4786]: I1209 10:07:06.620546 4786 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 10:07:07 crc kubenswrapper[4786]: I1209 10:07:07.634563 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dhwz" event={"ID":"67f7886f-a7b6-4f8d-830b-91194c88490b","Type":"ContainerStarted","Data":"22e64cf7131152d0f2d80363be13b6b5b1c2ecb3ee16168cc8988c8d2824ab19"} Dec 09 10:07:08 crc kubenswrapper[4786]: 
I1209 10:07:08.646731 4786 generic.go:334] "Generic (PLEG): container finished" podID="67f7886f-a7b6-4f8d-830b-91194c88490b" containerID="22e64cf7131152d0f2d80363be13b6b5b1c2ecb3ee16168cc8988c8d2824ab19" exitCode=0 Dec 09 10:07:08 crc kubenswrapper[4786]: I1209 10:07:08.646858 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dhwz" event={"ID":"67f7886f-a7b6-4f8d-830b-91194c88490b","Type":"ContainerDied","Data":"22e64cf7131152d0f2d80363be13b6b5b1c2ecb3ee16168cc8988c8d2824ab19"} Dec 09 10:07:09 crc kubenswrapper[4786]: I1209 10:07:09.660482 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dhwz" event={"ID":"67f7886f-a7b6-4f8d-830b-91194c88490b","Type":"ContainerStarted","Data":"88a5b193888aced55d1c72a2f6a7f51674a8f971ae284e008dd9433a643a805a"} Dec 09 10:07:09 crc kubenswrapper[4786]: I1209 10:07:09.693200 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6dhwz" podStartSLOduration=3.234613519 podStartE2EDuration="5.693167423s" podCreationTimestamp="2025-12-09 10:07:04 +0000 UTC" firstStartedPulling="2025-12-09 10:07:06.620273194 +0000 UTC m=+4992.503894420" lastFinishedPulling="2025-12-09 10:07:09.078827098 +0000 UTC m=+4994.962448324" observedRunningTime="2025-12-09 10:07:09.679798847 +0000 UTC m=+4995.563420073" watchObservedRunningTime="2025-12-09 10:07:09.693167423 +0000 UTC m=+4995.576788649" Dec 09 10:07:11 crc kubenswrapper[4786]: I1209 10:07:11.011889 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-b5rz9"] Dec 09 10:07:11 crc kubenswrapper[4786]: I1209 10:07:11.037501 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b5rz9" Dec 09 10:07:11 crc kubenswrapper[4786]: I1209 10:07:11.044211 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d29156a-544c-41d1-b0b6-d56db23c64a9-catalog-content\") pod \"certified-operators-b5rz9\" (UID: \"3d29156a-544c-41d1-b0b6-d56db23c64a9\") " pod="openshift-marketplace/certified-operators-b5rz9" Dec 09 10:07:11 crc kubenswrapper[4786]: I1209 10:07:11.069780 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d29156a-544c-41d1-b0b6-d56db23c64a9-utilities\") pod \"certified-operators-b5rz9\" (UID: \"3d29156a-544c-41d1-b0b6-d56db23c64a9\") " pod="openshift-marketplace/certified-operators-b5rz9" Dec 09 10:07:11 crc kubenswrapper[4786]: I1209 10:07:11.070405 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gslfr\" (UniqueName: \"kubernetes.io/projected/3d29156a-544c-41d1-b0b6-d56db23c64a9-kube-api-access-gslfr\") pod \"certified-operators-b5rz9\" (UID: \"3d29156a-544c-41d1-b0b6-d56db23c64a9\") " pod="openshift-marketplace/certified-operators-b5rz9" Dec 09 10:07:11 crc kubenswrapper[4786]: I1209 10:07:11.093995 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b5rz9"] Dec 09 10:07:11 crc kubenswrapper[4786]: I1209 10:07:11.178825 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gslfr\" (UniqueName: \"kubernetes.io/projected/3d29156a-544c-41d1-b0b6-d56db23c64a9-kube-api-access-gslfr\") pod \"certified-operators-b5rz9\" (UID: \"3d29156a-544c-41d1-b0b6-d56db23c64a9\") " pod="openshift-marketplace/certified-operators-b5rz9" Dec 09 10:07:11 crc kubenswrapper[4786]: I1209 10:07:11.179131 4786 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d29156a-544c-41d1-b0b6-d56db23c64a9-catalog-content\") pod \"certified-operators-b5rz9\" (UID: \"3d29156a-544c-41d1-b0b6-d56db23c64a9\") " pod="openshift-marketplace/certified-operators-b5rz9" Dec 09 10:07:11 crc kubenswrapper[4786]: I1209 10:07:11.179166 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d29156a-544c-41d1-b0b6-d56db23c64a9-utilities\") pod \"certified-operators-b5rz9\" (UID: \"3d29156a-544c-41d1-b0b6-d56db23c64a9\") " pod="openshift-marketplace/certified-operators-b5rz9" Dec 09 10:07:11 crc kubenswrapper[4786]: I1209 10:07:11.179864 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d29156a-544c-41d1-b0b6-d56db23c64a9-utilities\") pod \"certified-operators-b5rz9\" (UID: \"3d29156a-544c-41d1-b0b6-d56db23c64a9\") " pod="openshift-marketplace/certified-operators-b5rz9" Dec 09 10:07:11 crc kubenswrapper[4786]: I1209 10:07:11.180065 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d29156a-544c-41d1-b0b6-d56db23c64a9-catalog-content\") pod \"certified-operators-b5rz9\" (UID: \"3d29156a-544c-41d1-b0b6-d56db23c64a9\") " pod="openshift-marketplace/certified-operators-b5rz9" Dec 09 10:07:11 crc kubenswrapper[4786]: I1209 10:07:11.224440 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gslfr\" (UniqueName: \"kubernetes.io/projected/3d29156a-544c-41d1-b0b6-d56db23c64a9-kube-api-access-gslfr\") pod \"certified-operators-b5rz9\" (UID: \"3d29156a-544c-41d1-b0b6-d56db23c64a9\") " pod="openshift-marketplace/certified-operators-b5rz9" Dec 09 10:07:11 crc kubenswrapper[4786]: I1209 10:07:11.395131 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-b5rz9" Dec 09 10:07:12 crc kubenswrapper[4786]: I1209 10:07:12.053616 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-b5rz9"] Dec 09 10:07:12 crc kubenswrapper[4786]: W1209 10:07:12.063982 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d29156a_544c_41d1_b0b6_d56db23c64a9.slice/crio-004907acbdf476578821911a00653ac50efbfad5ca9b9d1c2101cd90cb15e802 WatchSource:0}: Error finding container 004907acbdf476578821911a00653ac50efbfad5ca9b9d1c2101cd90cb15e802: Status 404 returned error can't find the container with id 004907acbdf476578821911a00653ac50efbfad5ca9b9d1c2101cd90cb15e802 Dec 09 10:07:12 crc kubenswrapper[4786]: I1209 10:07:12.715061 4786 generic.go:334] "Generic (PLEG): container finished" podID="3d29156a-544c-41d1-b0b6-d56db23c64a9" containerID="581828049e171c11dcbdaf8871aacebe50c66d78b7c76b033ac3b901caf22a75" exitCode=0 Dec 09 10:07:12 crc kubenswrapper[4786]: I1209 10:07:12.715248 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b5rz9" event={"ID":"3d29156a-544c-41d1-b0b6-d56db23c64a9","Type":"ContainerDied","Data":"581828049e171c11dcbdaf8871aacebe50c66d78b7c76b033ac3b901caf22a75"} Dec 09 10:07:12 crc kubenswrapper[4786]: I1209 10:07:12.715571 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b5rz9" event={"ID":"3d29156a-544c-41d1-b0b6-d56db23c64a9","Type":"ContainerStarted","Data":"004907acbdf476578821911a00653ac50efbfad5ca9b9d1c2101cd90cb15e802"} Dec 09 10:07:13 crc kubenswrapper[4786]: I1209 10:07:13.736517 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b5rz9" 
event={"ID":"3d29156a-544c-41d1-b0b6-d56db23c64a9","Type":"ContainerStarted","Data":"903668543c19837464ce3e450c44455ed2f3bf0b3d10853f9de24b83a11a2431"} Dec 09 10:07:14 crc kubenswrapper[4786]: I1209 10:07:14.355193 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6dhwz" Dec 09 10:07:14 crc kubenswrapper[4786]: I1209 10:07:14.355705 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6dhwz" Dec 09 10:07:14 crc kubenswrapper[4786]: I1209 10:07:14.433010 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6dhwz" Dec 09 10:07:14 crc kubenswrapper[4786]: I1209 10:07:14.831295 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6dhwz" Dec 09 10:07:15 crc kubenswrapper[4786]: I1209 10:07:15.766867 4786 generic.go:334] "Generic (PLEG): container finished" podID="3d29156a-544c-41d1-b0b6-d56db23c64a9" containerID="903668543c19837464ce3e450c44455ed2f3bf0b3d10853f9de24b83a11a2431" exitCode=0 Dec 09 10:07:15 crc kubenswrapper[4786]: I1209 10:07:15.766939 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b5rz9" event={"ID":"3d29156a-544c-41d1-b0b6-d56db23c64a9","Type":"ContainerDied","Data":"903668543c19837464ce3e450c44455ed2f3bf0b3d10853f9de24b83a11a2431"} Dec 09 10:07:16 crc kubenswrapper[4786]: I1209 10:07:16.781954 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b5rz9" event={"ID":"3d29156a-544c-41d1-b0b6-d56db23c64a9","Type":"ContainerStarted","Data":"b235b8ac0034eec0e63779cb67120bcd5547fe80ad88253108af24d71fbf9aa1"} Dec 09 10:07:16 crc kubenswrapper[4786]: I1209 10:07:16.803933 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6dhwz"] Dec 09 10:07:16 
crc kubenswrapper[4786]: I1209 10:07:16.804247 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6dhwz" podUID="67f7886f-a7b6-4f8d-830b-91194c88490b" containerName="registry-server" containerID="cri-o://88a5b193888aced55d1c72a2f6a7f51674a8f971ae284e008dd9433a643a805a" gracePeriod=2 Dec 09 10:07:16 crc kubenswrapper[4786]: I1209 10:07:16.817260 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-b5rz9" podStartSLOduration=3.102519412 podStartE2EDuration="6.817226931s" podCreationTimestamp="2025-12-09 10:07:10 +0000 UTC" firstStartedPulling="2025-12-09 10:07:12.718386917 +0000 UTC m=+4998.602008143" lastFinishedPulling="2025-12-09 10:07:16.433094436 +0000 UTC m=+5002.316715662" observedRunningTime="2025-12-09 10:07:16.805444073 +0000 UTC m=+5002.689065299" watchObservedRunningTime="2025-12-09 10:07:16.817226931 +0000 UTC m=+5002.700848167" Dec 09 10:07:17 crc kubenswrapper[4786]: I1209 10:07:17.357542 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6dhwz" Dec 09 10:07:17 crc kubenswrapper[4786]: I1209 10:07:17.445563 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67f7886f-a7b6-4f8d-830b-91194c88490b-utilities\") pod \"67f7886f-a7b6-4f8d-830b-91194c88490b\" (UID: \"67f7886f-a7b6-4f8d-830b-91194c88490b\") " Dec 09 10:07:17 crc kubenswrapper[4786]: I1209 10:07:17.445755 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67f7886f-a7b6-4f8d-830b-91194c88490b-catalog-content\") pod \"67f7886f-a7b6-4f8d-830b-91194c88490b\" (UID: \"67f7886f-a7b6-4f8d-830b-91194c88490b\") " Dec 09 10:07:17 crc kubenswrapper[4786]: I1209 10:07:17.446257 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zqtz\" (UniqueName: \"kubernetes.io/projected/67f7886f-a7b6-4f8d-830b-91194c88490b-kube-api-access-5zqtz\") pod \"67f7886f-a7b6-4f8d-830b-91194c88490b\" (UID: \"67f7886f-a7b6-4f8d-830b-91194c88490b\") " Dec 09 10:07:17 crc kubenswrapper[4786]: I1209 10:07:17.446304 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67f7886f-a7b6-4f8d-830b-91194c88490b-utilities" (OuterVolumeSpecName: "utilities") pod "67f7886f-a7b6-4f8d-830b-91194c88490b" (UID: "67f7886f-a7b6-4f8d-830b-91194c88490b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 10:07:17 crc kubenswrapper[4786]: I1209 10:07:17.447839 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/67f7886f-a7b6-4f8d-830b-91194c88490b-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 10:07:17 crc kubenswrapper[4786]: I1209 10:07:17.462370 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67f7886f-a7b6-4f8d-830b-91194c88490b-kube-api-access-5zqtz" (OuterVolumeSpecName: "kube-api-access-5zqtz") pod "67f7886f-a7b6-4f8d-830b-91194c88490b" (UID: "67f7886f-a7b6-4f8d-830b-91194c88490b"). InnerVolumeSpecName "kube-api-access-5zqtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 10:07:17 crc kubenswrapper[4786]: I1209 10:07:17.471068 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67f7886f-a7b6-4f8d-830b-91194c88490b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "67f7886f-a7b6-4f8d-830b-91194c88490b" (UID: "67f7886f-a7b6-4f8d-830b-91194c88490b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 10:07:17 crc kubenswrapper[4786]: I1209 10:07:17.550725 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zqtz\" (UniqueName: \"kubernetes.io/projected/67f7886f-a7b6-4f8d-830b-91194c88490b-kube-api-access-5zqtz\") on node \"crc\" DevicePath \"\"" Dec 09 10:07:17 crc kubenswrapper[4786]: I1209 10:07:17.550794 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/67f7886f-a7b6-4f8d-830b-91194c88490b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 10:07:17 crc kubenswrapper[4786]: I1209 10:07:17.798559 4786 generic.go:334] "Generic (PLEG): container finished" podID="67f7886f-a7b6-4f8d-830b-91194c88490b" containerID="88a5b193888aced55d1c72a2f6a7f51674a8f971ae284e008dd9433a643a805a" exitCode=0 Dec 09 10:07:17 crc kubenswrapper[4786]: I1209 10:07:17.798794 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dhwz" event={"ID":"67f7886f-a7b6-4f8d-830b-91194c88490b","Type":"ContainerDied","Data":"88a5b193888aced55d1c72a2f6a7f51674a8f971ae284e008dd9433a643a805a"} Dec 09 10:07:17 crc kubenswrapper[4786]: I1209 10:07:17.799803 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6dhwz" event={"ID":"67f7886f-a7b6-4f8d-830b-91194c88490b","Type":"ContainerDied","Data":"6c597916448fe2c28d537275ca4114d7d2848489898faea1188477336072b89c"} Dec 09 10:07:17 crc kubenswrapper[4786]: I1209 10:07:17.799850 4786 scope.go:117] "RemoveContainer" containerID="88a5b193888aced55d1c72a2f6a7f51674a8f971ae284e008dd9433a643a805a" Dec 09 10:07:17 crc kubenswrapper[4786]: I1209 10:07:17.798911 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6dhwz" Dec 09 10:07:17 crc kubenswrapper[4786]: I1209 10:07:17.842954 4786 scope.go:117] "RemoveContainer" containerID="22e64cf7131152d0f2d80363be13b6b5b1c2ecb3ee16168cc8988c8d2824ab19" Dec 09 10:07:17 crc kubenswrapper[4786]: I1209 10:07:17.860192 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6dhwz"] Dec 09 10:07:17 crc kubenswrapper[4786]: I1209 10:07:17.878456 4786 scope.go:117] "RemoveContainer" containerID="079ddde7fb8bf76352735f2c1b043208ac1c58976e037c417cc66c2019b1499a" Dec 09 10:07:17 crc kubenswrapper[4786]: I1209 10:07:17.883962 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6dhwz"] Dec 09 10:07:17 crc kubenswrapper[4786]: I1209 10:07:17.954185 4786 scope.go:117] "RemoveContainer" containerID="88a5b193888aced55d1c72a2f6a7f51674a8f971ae284e008dd9433a643a805a" Dec 09 10:07:17 crc kubenswrapper[4786]: E1209 10:07:17.955123 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88a5b193888aced55d1c72a2f6a7f51674a8f971ae284e008dd9433a643a805a\": container with ID starting with 88a5b193888aced55d1c72a2f6a7f51674a8f971ae284e008dd9433a643a805a not found: ID does not exist" containerID="88a5b193888aced55d1c72a2f6a7f51674a8f971ae284e008dd9433a643a805a" Dec 09 10:07:17 crc kubenswrapper[4786]: I1209 10:07:17.955202 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88a5b193888aced55d1c72a2f6a7f51674a8f971ae284e008dd9433a643a805a"} err="failed to get container status \"88a5b193888aced55d1c72a2f6a7f51674a8f971ae284e008dd9433a643a805a\": rpc error: code = NotFound desc = could not find container \"88a5b193888aced55d1c72a2f6a7f51674a8f971ae284e008dd9433a643a805a\": container with ID starting with 88a5b193888aced55d1c72a2f6a7f51674a8f971ae284e008dd9433a643a805a not found: 
ID does not exist" Dec 09 10:07:17 crc kubenswrapper[4786]: I1209 10:07:17.955254 4786 scope.go:117] "RemoveContainer" containerID="22e64cf7131152d0f2d80363be13b6b5b1c2ecb3ee16168cc8988c8d2824ab19" Dec 09 10:07:17 crc kubenswrapper[4786]: E1209 10:07:17.955797 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22e64cf7131152d0f2d80363be13b6b5b1c2ecb3ee16168cc8988c8d2824ab19\": container with ID starting with 22e64cf7131152d0f2d80363be13b6b5b1c2ecb3ee16168cc8988c8d2824ab19 not found: ID does not exist" containerID="22e64cf7131152d0f2d80363be13b6b5b1c2ecb3ee16168cc8988c8d2824ab19" Dec 09 10:07:17 crc kubenswrapper[4786]: I1209 10:07:17.955871 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22e64cf7131152d0f2d80363be13b6b5b1c2ecb3ee16168cc8988c8d2824ab19"} err="failed to get container status \"22e64cf7131152d0f2d80363be13b6b5b1c2ecb3ee16168cc8988c8d2824ab19\": rpc error: code = NotFound desc = could not find container \"22e64cf7131152d0f2d80363be13b6b5b1c2ecb3ee16168cc8988c8d2824ab19\": container with ID starting with 22e64cf7131152d0f2d80363be13b6b5b1c2ecb3ee16168cc8988c8d2824ab19 not found: ID does not exist" Dec 09 10:07:17 crc kubenswrapper[4786]: I1209 10:07:17.955924 4786 scope.go:117] "RemoveContainer" containerID="079ddde7fb8bf76352735f2c1b043208ac1c58976e037c417cc66c2019b1499a" Dec 09 10:07:17 crc kubenswrapper[4786]: E1209 10:07:17.956320 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"079ddde7fb8bf76352735f2c1b043208ac1c58976e037c417cc66c2019b1499a\": container with ID starting with 079ddde7fb8bf76352735f2c1b043208ac1c58976e037c417cc66c2019b1499a not found: ID does not exist" containerID="079ddde7fb8bf76352735f2c1b043208ac1c58976e037c417cc66c2019b1499a" Dec 09 10:07:17 crc kubenswrapper[4786]: I1209 10:07:17.956365 4786 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"079ddde7fb8bf76352735f2c1b043208ac1c58976e037c417cc66c2019b1499a"} err="failed to get container status \"079ddde7fb8bf76352735f2c1b043208ac1c58976e037c417cc66c2019b1499a\": rpc error: code = NotFound desc = could not find container \"079ddde7fb8bf76352735f2c1b043208ac1c58976e037c417cc66c2019b1499a\": container with ID starting with 079ddde7fb8bf76352735f2c1b043208ac1c58976e037c417cc66c2019b1499a not found: ID does not exist" Dec 09 10:07:19 crc kubenswrapper[4786]: I1209 10:07:19.200973 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67f7886f-a7b6-4f8d-830b-91194c88490b" path="/var/lib/kubelet/pods/67f7886f-a7b6-4f8d-830b-91194c88490b/volumes" Dec 09 10:07:21 crc kubenswrapper[4786]: I1209 10:07:21.395583 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-b5rz9" Dec 09 10:07:21 crc kubenswrapper[4786]: I1209 10:07:21.397331 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-b5rz9" Dec 09 10:07:21 crc kubenswrapper[4786]: I1209 10:07:21.452386 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-b5rz9" Dec 09 10:07:21 crc kubenswrapper[4786]: I1209 10:07:21.904219 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-b5rz9" Dec 09 10:07:22 crc kubenswrapper[4786]: I1209 10:07:22.797741 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b5rz9"] Dec 09 10:07:23 crc kubenswrapper[4786]: I1209 10:07:23.861989 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-b5rz9" podUID="3d29156a-544c-41d1-b0b6-d56db23c64a9" containerName="registry-server" 
containerID="cri-o://b235b8ac0034eec0e63779cb67120bcd5547fe80ad88253108af24d71fbf9aa1" gracePeriod=2 Dec 09 10:07:24 crc kubenswrapper[4786]: I1209 10:07:24.436978 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b5rz9" Dec 09 10:07:24 crc kubenswrapper[4786]: I1209 10:07:24.537868 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d29156a-544c-41d1-b0b6-d56db23c64a9-utilities\") pod \"3d29156a-544c-41d1-b0b6-d56db23c64a9\" (UID: \"3d29156a-544c-41d1-b0b6-d56db23c64a9\") " Dec 09 10:07:24 crc kubenswrapper[4786]: I1209 10:07:24.538011 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d29156a-544c-41d1-b0b6-d56db23c64a9-catalog-content\") pod \"3d29156a-544c-41d1-b0b6-d56db23c64a9\" (UID: \"3d29156a-544c-41d1-b0b6-d56db23c64a9\") " Dec 09 10:07:24 crc kubenswrapper[4786]: I1209 10:07:24.538177 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gslfr\" (UniqueName: \"kubernetes.io/projected/3d29156a-544c-41d1-b0b6-d56db23c64a9-kube-api-access-gslfr\") pod \"3d29156a-544c-41d1-b0b6-d56db23c64a9\" (UID: \"3d29156a-544c-41d1-b0b6-d56db23c64a9\") " Dec 09 10:07:24 crc kubenswrapper[4786]: I1209 10:07:24.539936 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d29156a-544c-41d1-b0b6-d56db23c64a9-utilities" (OuterVolumeSpecName: "utilities") pod "3d29156a-544c-41d1-b0b6-d56db23c64a9" (UID: "3d29156a-544c-41d1-b0b6-d56db23c64a9"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 10:07:24 crc kubenswrapper[4786]: I1209 10:07:24.545933 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d29156a-544c-41d1-b0b6-d56db23c64a9-kube-api-access-gslfr" (OuterVolumeSpecName: "kube-api-access-gslfr") pod "3d29156a-544c-41d1-b0b6-d56db23c64a9" (UID: "3d29156a-544c-41d1-b0b6-d56db23c64a9"). InnerVolumeSpecName "kube-api-access-gslfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 10:07:24 crc kubenswrapper[4786]: I1209 10:07:24.600592 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d29156a-544c-41d1-b0b6-d56db23c64a9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3d29156a-544c-41d1-b0b6-d56db23c64a9" (UID: "3d29156a-544c-41d1-b0b6-d56db23c64a9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 10:07:24 crc kubenswrapper[4786]: I1209 10:07:24.643666 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gslfr\" (UniqueName: \"kubernetes.io/projected/3d29156a-544c-41d1-b0b6-d56db23c64a9-kube-api-access-gslfr\") on node \"crc\" DevicePath \"\"" Dec 09 10:07:24 crc kubenswrapper[4786]: I1209 10:07:24.644378 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d29156a-544c-41d1-b0b6-d56db23c64a9-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 10:07:24 crc kubenswrapper[4786]: I1209 10:07:24.644519 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d29156a-544c-41d1-b0b6-d56db23c64a9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 10:07:24 crc kubenswrapper[4786]: I1209 10:07:24.876788 4786 generic.go:334] "Generic (PLEG): container finished" podID="3d29156a-544c-41d1-b0b6-d56db23c64a9" 
containerID="b235b8ac0034eec0e63779cb67120bcd5547fe80ad88253108af24d71fbf9aa1" exitCode=0 Dec 09 10:07:24 crc kubenswrapper[4786]: I1209 10:07:24.876843 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b5rz9" event={"ID":"3d29156a-544c-41d1-b0b6-d56db23c64a9","Type":"ContainerDied","Data":"b235b8ac0034eec0e63779cb67120bcd5547fe80ad88253108af24d71fbf9aa1"} Dec 09 10:07:24 crc kubenswrapper[4786]: I1209 10:07:24.876865 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-b5rz9" Dec 09 10:07:24 crc kubenswrapper[4786]: I1209 10:07:24.876892 4786 scope.go:117] "RemoveContainer" containerID="b235b8ac0034eec0e63779cb67120bcd5547fe80ad88253108af24d71fbf9aa1" Dec 09 10:07:24 crc kubenswrapper[4786]: I1209 10:07:24.876876 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-b5rz9" event={"ID":"3d29156a-544c-41d1-b0b6-d56db23c64a9","Type":"ContainerDied","Data":"004907acbdf476578821911a00653ac50efbfad5ca9b9d1c2101cd90cb15e802"} Dec 09 10:07:24 crc kubenswrapper[4786]: I1209 10:07:24.907918 4786 scope.go:117] "RemoveContainer" containerID="903668543c19837464ce3e450c44455ed2f3bf0b3d10853f9de24b83a11a2431" Dec 09 10:07:24 crc kubenswrapper[4786]: I1209 10:07:24.924629 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-b5rz9"] Dec 09 10:07:24 crc kubenswrapper[4786]: I1209 10:07:24.932915 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-b5rz9"] Dec 09 10:07:24 crc kubenswrapper[4786]: I1209 10:07:24.947471 4786 scope.go:117] "RemoveContainer" containerID="581828049e171c11dcbdaf8871aacebe50c66d78b7c76b033ac3b901caf22a75" Dec 09 10:07:24 crc kubenswrapper[4786]: I1209 10:07:24.989393 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 10:07:24 crc kubenswrapper[4786]: I1209 10:07:24.989610 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 10:07:25 crc kubenswrapper[4786]: I1209 10:07:25.009330 4786 scope.go:117] "RemoveContainer" containerID="b235b8ac0034eec0e63779cb67120bcd5547fe80ad88253108af24d71fbf9aa1" Dec 09 10:07:25 crc kubenswrapper[4786]: E1209 10:07:25.009999 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b235b8ac0034eec0e63779cb67120bcd5547fe80ad88253108af24d71fbf9aa1\": container with ID starting with b235b8ac0034eec0e63779cb67120bcd5547fe80ad88253108af24d71fbf9aa1 not found: ID does not exist" containerID="b235b8ac0034eec0e63779cb67120bcd5547fe80ad88253108af24d71fbf9aa1" Dec 09 10:07:25 crc kubenswrapper[4786]: I1209 10:07:25.010075 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b235b8ac0034eec0e63779cb67120bcd5547fe80ad88253108af24d71fbf9aa1"} err="failed to get container status \"b235b8ac0034eec0e63779cb67120bcd5547fe80ad88253108af24d71fbf9aa1\": rpc error: code = NotFound desc = could not find container \"b235b8ac0034eec0e63779cb67120bcd5547fe80ad88253108af24d71fbf9aa1\": container with ID starting with b235b8ac0034eec0e63779cb67120bcd5547fe80ad88253108af24d71fbf9aa1 not found: ID does not exist" Dec 09 10:07:25 crc kubenswrapper[4786]: I1209 10:07:25.010113 4786 scope.go:117] "RemoveContainer" containerID="903668543c19837464ce3e450c44455ed2f3bf0b3d10853f9de24b83a11a2431" 
Dec 09 10:07:25 crc kubenswrapper[4786]: E1209 10:07:25.012348 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"903668543c19837464ce3e450c44455ed2f3bf0b3d10853f9de24b83a11a2431\": container with ID starting with 903668543c19837464ce3e450c44455ed2f3bf0b3d10853f9de24b83a11a2431 not found: ID does not exist" containerID="903668543c19837464ce3e450c44455ed2f3bf0b3d10853f9de24b83a11a2431" Dec 09 10:07:25 crc kubenswrapper[4786]: I1209 10:07:25.012382 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"903668543c19837464ce3e450c44455ed2f3bf0b3d10853f9de24b83a11a2431"} err="failed to get container status \"903668543c19837464ce3e450c44455ed2f3bf0b3d10853f9de24b83a11a2431\": rpc error: code = NotFound desc = could not find container \"903668543c19837464ce3e450c44455ed2f3bf0b3d10853f9de24b83a11a2431\": container with ID starting with 903668543c19837464ce3e450c44455ed2f3bf0b3d10853f9de24b83a11a2431 not found: ID does not exist" Dec 09 10:07:25 crc kubenswrapper[4786]: I1209 10:07:25.012404 4786 scope.go:117] "RemoveContainer" containerID="581828049e171c11dcbdaf8871aacebe50c66d78b7c76b033ac3b901caf22a75" Dec 09 10:07:25 crc kubenswrapper[4786]: E1209 10:07:25.013122 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"581828049e171c11dcbdaf8871aacebe50c66d78b7c76b033ac3b901caf22a75\": container with ID starting with 581828049e171c11dcbdaf8871aacebe50c66d78b7c76b033ac3b901caf22a75 not found: ID does not exist" containerID="581828049e171c11dcbdaf8871aacebe50c66d78b7c76b033ac3b901caf22a75" Dec 09 10:07:25 crc kubenswrapper[4786]: I1209 10:07:25.013152 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"581828049e171c11dcbdaf8871aacebe50c66d78b7c76b033ac3b901caf22a75"} err="failed to get container status 
\"581828049e171c11dcbdaf8871aacebe50c66d78b7c76b033ac3b901caf22a75\": rpc error: code = NotFound desc = could not find container \"581828049e171c11dcbdaf8871aacebe50c66d78b7c76b033ac3b901caf22a75\": container with ID starting with 581828049e171c11dcbdaf8871aacebe50c66d78b7c76b033ac3b901caf22a75 not found: ID does not exist" Dec 09 10:07:25 crc kubenswrapper[4786]: I1209 10:07:25.201597 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d29156a-544c-41d1-b0b6-d56db23c64a9" path="/var/lib/kubelet/pods/3d29156a-544c-41d1-b0b6-d56db23c64a9/volumes" Dec 09 10:07:54 crc kubenswrapper[4786]: I1209 10:07:54.989073 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 10:07:54 crc kubenswrapper[4786]: I1209 10:07:54.989689 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 10:08:24 crc kubenswrapper[4786]: I1209 10:08:24.989037 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 10:08:24 crc kubenswrapper[4786]: I1209 10:08:24.989601 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 10:08:24 crc kubenswrapper[4786]: I1209 10:08:24.989645 4786 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" Dec 09 10:08:24 crc kubenswrapper[4786]: I1209 10:08:24.990526 4786 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"870f681940df81ca036d7205c4724f03002e04f58bad8754206c5f32463b67b9"} pod="openshift-machine-config-operator/machine-config-daemon-86k5n" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 10:08:24 crc kubenswrapper[4786]: I1209 10:08:24.990594 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" containerID="cri-o://870f681940df81ca036d7205c4724f03002e04f58bad8754206c5f32463b67b9" gracePeriod=600 Dec 09 10:08:25 crc kubenswrapper[4786]: I1209 10:08:25.521489 4786 generic.go:334] "Generic (PLEG): container finished" podID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerID="870f681940df81ca036d7205c4724f03002e04f58bad8754206c5f32463b67b9" exitCode=0 Dec 09 10:08:25 crc kubenswrapper[4786]: I1209 10:08:25.521572 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" event={"ID":"60c502d4-7f9e-4d39-a197-fa70dc4a56d1","Type":"ContainerDied","Data":"870f681940df81ca036d7205c4724f03002e04f58bad8754206c5f32463b67b9"} Dec 09 10:08:25 crc kubenswrapper[4786]: I1209 10:08:25.521828 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" 
event={"ID":"60c502d4-7f9e-4d39-a197-fa70dc4a56d1","Type":"ContainerStarted","Data":"420ef4c2df3db8917971bbcc33178a356bf08ed5726eddcadbba4252772309f3"} Dec 09 10:08:25 crc kubenswrapper[4786]: I1209 10:08:25.521851 4786 scope.go:117] "RemoveContainer" containerID="a6c23a9a79b1152227ae7dcd8b4db9db02c9da83f2fdde0c87c8f42f26a6132c" Dec 09 10:10:26 crc kubenswrapper[4786]: I1209 10:10:26.637377 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7k59b"] Dec 09 10:10:26 crc kubenswrapper[4786]: E1209 10:10:26.638388 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d29156a-544c-41d1-b0b6-d56db23c64a9" containerName="extract-utilities" Dec 09 10:10:26 crc kubenswrapper[4786]: I1209 10:10:26.638402 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d29156a-544c-41d1-b0b6-d56db23c64a9" containerName="extract-utilities" Dec 09 10:10:26 crc kubenswrapper[4786]: E1209 10:10:26.638420 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67f7886f-a7b6-4f8d-830b-91194c88490b" containerName="registry-server" Dec 09 10:10:26 crc kubenswrapper[4786]: I1209 10:10:26.638443 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="67f7886f-a7b6-4f8d-830b-91194c88490b" containerName="registry-server" Dec 09 10:10:26 crc kubenswrapper[4786]: E1209 10:10:26.638461 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67f7886f-a7b6-4f8d-830b-91194c88490b" containerName="extract-content" Dec 09 10:10:26 crc kubenswrapper[4786]: I1209 10:10:26.638469 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="67f7886f-a7b6-4f8d-830b-91194c88490b" containerName="extract-content" Dec 09 10:10:26 crc kubenswrapper[4786]: E1209 10:10:26.638486 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67f7886f-a7b6-4f8d-830b-91194c88490b" containerName="extract-utilities" Dec 09 10:10:26 crc kubenswrapper[4786]: I1209 10:10:26.638492 4786 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="67f7886f-a7b6-4f8d-830b-91194c88490b" containerName="extract-utilities" Dec 09 10:10:26 crc kubenswrapper[4786]: E1209 10:10:26.638504 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d29156a-544c-41d1-b0b6-d56db23c64a9" containerName="registry-server" Dec 09 10:10:26 crc kubenswrapper[4786]: I1209 10:10:26.638510 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d29156a-544c-41d1-b0b6-d56db23c64a9" containerName="registry-server" Dec 09 10:10:26 crc kubenswrapper[4786]: E1209 10:10:26.638530 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d29156a-544c-41d1-b0b6-d56db23c64a9" containerName="extract-content" Dec 09 10:10:26 crc kubenswrapper[4786]: I1209 10:10:26.638535 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d29156a-544c-41d1-b0b6-d56db23c64a9" containerName="extract-content" Dec 09 10:10:26 crc kubenswrapper[4786]: I1209 10:10:26.638725 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d29156a-544c-41d1-b0b6-d56db23c64a9" containerName="registry-server" Dec 09 10:10:26 crc kubenswrapper[4786]: I1209 10:10:26.638755 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="67f7886f-a7b6-4f8d-830b-91194c88490b" containerName="registry-server" Dec 09 10:10:26 crc kubenswrapper[4786]: I1209 10:10:26.640337 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7k59b" Dec 09 10:10:26 crc kubenswrapper[4786]: I1209 10:10:26.649776 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8pxt\" (UniqueName: \"kubernetes.io/projected/b39664f0-5812-4f11-9fe3-5e4eb84073dd-kube-api-access-g8pxt\") pod \"redhat-operators-7k59b\" (UID: \"b39664f0-5812-4f11-9fe3-5e4eb84073dd\") " pod="openshift-marketplace/redhat-operators-7k59b" Dec 09 10:10:26 crc kubenswrapper[4786]: I1209 10:10:26.649908 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b39664f0-5812-4f11-9fe3-5e4eb84073dd-catalog-content\") pod \"redhat-operators-7k59b\" (UID: \"b39664f0-5812-4f11-9fe3-5e4eb84073dd\") " pod="openshift-marketplace/redhat-operators-7k59b" Dec 09 10:10:26 crc kubenswrapper[4786]: I1209 10:10:26.650011 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b39664f0-5812-4f11-9fe3-5e4eb84073dd-utilities\") pod \"redhat-operators-7k59b\" (UID: \"b39664f0-5812-4f11-9fe3-5e4eb84073dd\") " pod="openshift-marketplace/redhat-operators-7k59b" Dec 09 10:10:26 crc kubenswrapper[4786]: I1209 10:10:26.658617 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7k59b"] Dec 09 10:10:26 crc kubenswrapper[4786]: I1209 10:10:26.750940 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b39664f0-5812-4f11-9fe3-5e4eb84073dd-utilities\") pod \"redhat-operators-7k59b\" (UID: \"b39664f0-5812-4f11-9fe3-5e4eb84073dd\") " pod="openshift-marketplace/redhat-operators-7k59b" Dec 09 10:10:26 crc kubenswrapper[4786]: I1209 10:10:26.751113 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-g8pxt\" (UniqueName: \"kubernetes.io/projected/b39664f0-5812-4f11-9fe3-5e4eb84073dd-kube-api-access-g8pxt\") pod \"redhat-operators-7k59b\" (UID: \"b39664f0-5812-4f11-9fe3-5e4eb84073dd\") " pod="openshift-marketplace/redhat-operators-7k59b" Dec 09 10:10:26 crc kubenswrapper[4786]: I1209 10:10:26.751189 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b39664f0-5812-4f11-9fe3-5e4eb84073dd-catalog-content\") pod \"redhat-operators-7k59b\" (UID: \"b39664f0-5812-4f11-9fe3-5e4eb84073dd\") " pod="openshift-marketplace/redhat-operators-7k59b" Dec 09 10:10:26 crc kubenswrapper[4786]: I1209 10:10:26.751406 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b39664f0-5812-4f11-9fe3-5e4eb84073dd-utilities\") pod \"redhat-operators-7k59b\" (UID: \"b39664f0-5812-4f11-9fe3-5e4eb84073dd\") " pod="openshift-marketplace/redhat-operators-7k59b" Dec 09 10:10:26 crc kubenswrapper[4786]: I1209 10:10:26.751654 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b39664f0-5812-4f11-9fe3-5e4eb84073dd-catalog-content\") pod \"redhat-operators-7k59b\" (UID: \"b39664f0-5812-4f11-9fe3-5e4eb84073dd\") " pod="openshift-marketplace/redhat-operators-7k59b" Dec 09 10:10:26 crc kubenswrapper[4786]: I1209 10:10:26.778607 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8pxt\" (UniqueName: \"kubernetes.io/projected/b39664f0-5812-4f11-9fe3-5e4eb84073dd-kube-api-access-g8pxt\") pod \"redhat-operators-7k59b\" (UID: \"b39664f0-5812-4f11-9fe3-5e4eb84073dd\") " pod="openshift-marketplace/redhat-operators-7k59b" Dec 09 10:10:26 crc kubenswrapper[4786]: I1209 10:10:26.968818 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7k59b" Dec 09 10:10:27 crc kubenswrapper[4786]: I1209 10:10:27.512251 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7k59b"] Dec 09 10:10:27 crc kubenswrapper[4786]: I1209 10:10:27.822195 4786 generic.go:334] "Generic (PLEG): container finished" podID="b39664f0-5812-4f11-9fe3-5e4eb84073dd" containerID="193a66837387fde495f5e9e7d353a2be59b6852b36dc52f42dc4e7ab6ba30f98" exitCode=0 Dec 09 10:10:27 crc kubenswrapper[4786]: I1209 10:10:27.822279 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7k59b" event={"ID":"b39664f0-5812-4f11-9fe3-5e4eb84073dd","Type":"ContainerDied","Data":"193a66837387fde495f5e9e7d353a2be59b6852b36dc52f42dc4e7ab6ba30f98"} Dec 09 10:10:27 crc kubenswrapper[4786]: I1209 10:10:27.823776 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7k59b" event={"ID":"b39664f0-5812-4f11-9fe3-5e4eb84073dd","Type":"ContainerStarted","Data":"e8e5a6169c83f0094efdc8ed7cc1ba3f8b9048e1841db90ad2a40c02318c86d3"} Dec 09 10:10:28 crc kubenswrapper[4786]: I1209 10:10:28.834715 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7k59b" event={"ID":"b39664f0-5812-4f11-9fe3-5e4eb84073dd","Type":"ContainerStarted","Data":"0d5b525af87c4e88ea6fa438cd75f568b4d831e26b3aa2997a2304f0b0f06bcd"} Dec 09 10:10:33 crc kubenswrapper[4786]: I1209 10:10:33.127641 4786 generic.go:334] "Generic (PLEG): container finished" podID="b39664f0-5812-4f11-9fe3-5e4eb84073dd" containerID="0d5b525af87c4e88ea6fa438cd75f568b4d831e26b3aa2997a2304f0b0f06bcd" exitCode=0 Dec 09 10:10:33 crc kubenswrapper[4786]: I1209 10:10:33.127710 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7k59b" 
event={"ID":"b39664f0-5812-4f11-9fe3-5e4eb84073dd","Type":"ContainerDied","Data":"0d5b525af87c4e88ea6fa438cd75f568b4d831e26b3aa2997a2304f0b0f06bcd"} Dec 09 10:10:34 crc kubenswrapper[4786]: I1209 10:10:34.141598 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7k59b" event={"ID":"b39664f0-5812-4f11-9fe3-5e4eb84073dd","Type":"ContainerStarted","Data":"770a9131111c3cee4caf747291749b5078f5b7b20e65cd17d29cbf9a22867a56"} Dec 09 10:10:34 crc kubenswrapper[4786]: I1209 10:10:34.177021 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7k59b" podStartSLOduration=2.463237292 podStartE2EDuration="8.176981824s" podCreationTimestamp="2025-12-09 10:10:26 +0000 UTC" firstStartedPulling="2025-12-09 10:10:27.823936946 +0000 UTC m=+5193.707558172" lastFinishedPulling="2025-12-09 10:10:33.537681468 +0000 UTC m=+5199.421302704" observedRunningTime="2025-12-09 10:10:34.165568865 +0000 UTC m=+5200.049190101" watchObservedRunningTime="2025-12-09 10:10:34.176981824 +0000 UTC m=+5200.060603050" Dec 09 10:10:36 crc kubenswrapper[4786]: I1209 10:10:36.969315 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7k59b" Dec 09 10:10:36 crc kubenswrapper[4786]: I1209 10:10:36.969644 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7k59b" Dec 09 10:10:38 crc kubenswrapper[4786]: I1209 10:10:38.018237 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7k59b" podUID="b39664f0-5812-4f11-9fe3-5e4eb84073dd" containerName="registry-server" probeResult="failure" output=< Dec 09 10:10:38 crc kubenswrapper[4786]: timeout: failed to connect service ":50051" within 1s Dec 09 10:10:38 crc kubenswrapper[4786]: > Dec 09 10:10:47 crc kubenswrapper[4786]: I1209 10:10:47.021628 4786 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7k59b" Dec 09 10:10:47 crc kubenswrapper[4786]: I1209 10:10:47.070603 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7k59b" Dec 09 10:10:47 crc kubenswrapper[4786]: I1209 10:10:47.271667 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7k59b"] Dec 09 10:10:48 crc kubenswrapper[4786]: I1209 10:10:48.293063 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7k59b" podUID="b39664f0-5812-4f11-9fe3-5e4eb84073dd" containerName="registry-server" containerID="cri-o://770a9131111c3cee4caf747291749b5078f5b7b20e65cd17d29cbf9a22867a56" gracePeriod=2 Dec 09 10:10:48 crc kubenswrapper[4786]: I1209 10:10:48.770292 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7k59b" Dec 09 10:10:48 crc kubenswrapper[4786]: I1209 10:10:48.847529 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8pxt\" (UniqueName: \"kubernetes.io/projected/b39664f0-5812-4f11-9fe3-5e4eb84073dd-kube-api-access-g8pxt\") pod \"b39664f0-5812-4f11-9fe3-5e4eb84073dd\" (UID: \"b39664f0-5812-4f11-9fe3-5e4eb84073dd\") " Dec 09 10:10:48 crc kubenswrapper[4786]: I1209 10:10:48.847722 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b39664f0-5812-4f11-9fe3-5e4eb84073dd-utilities\") pod \"b39664f0-5812-4f11-9fe3-5e4eb84073dd\" (UID: \"b39664f0-5812-4f11-9fe3-5e4eb84073dd\") " Dec 09 10:10:48 crc kubenswrapper[4786]: I1209 10:10:48.847786 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b39664f0-5812-4f11-9fe3-5e4eb84073dd-catalog-content\") pod 
\"b39664f0-5812-4f11-9fe3-5e4eb84073dd\" (UID: \"b39664f0-5812-4f11-9fe3-5e4eb84073dd\") " Dec 09 10:10:48 crc kubenswrapper[4786]: I1209 10:10:48.849132 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b39664f0-5812-4f11-9fe3-5e4eb84073dd-utilities" (OuterVolumeSpecName: "utilities") pod "b39664f0-5812-4f11-9fe3-5e4eb84073dd" (UID: "b39664f0-5812-4f11-9fe3-5e4eb84073dd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 10:10:48 crc kubenswrapper[4786]: I1209 10:10:48.869934 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b39664f0-5812-4f11-9fe3-5e4eb84073dd-kube-api-access-g8pxt" (OuterVolumeSpecName: "kube-api-access-g8pxt") pod "b39664f0-5812-4f11-9fe3-5e4eb84073dd" (UID: "b39664f0-5812-4f11-9fe3-5e4eb84073dd"). InnerVolumeSpecName "kube-api-access-g8pxt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 10:10:48 crc kubenswrapper[4786]: I1209 10:10:48.949820 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b39664f0-5812-4f11-9fe3-5e4eb84073dd-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 10:10:48 crc kubenswrapper[4786]: I1209 10:10:48.949858 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8pxt\" (UniqueName: \"kubernetes.io/projected/b39664f0-5812-4f11-9fe3-5e4eb84073dd-kube-api-access-g8pxt\") on node \"crc\" DevicePath \"\"" Dec 09 10:10:48 crc kubenswrapper[4786]: I1209 10:10:48.973115 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b39664f0-5812-4f11-9fe3-5e4eb84073dd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b39664f0-5812-4f11-9fe3-5e4eb84073dd" (UID: "b39664f0-5812-4f11-9fe3-5e4eb84073dd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 10:10:49 crc kubenswrapper[4786]: I1209 10:10:49.052240 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b39664f0-5812-4f11-9fe3-5e4eb84073dd-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 10:10:49 crc kubenswrapper[4786]: I1209 10:10:49.307927 4786 generic.go:334] "Generic (PLEG): container finished" podID="b39664f0-5812-4f11-9fe3-5e4eb84073dd" containerID="770a9131111c3cee4caf747291749b5078f5b7b20e65cd17d29cbf9a22867a56" exitCode=0 Dec 09 10:10:49 crc kubenswrapper[4786]: I1209 10:10:49.307985 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7k59b" event={"ID":"b39664f0-5812-4f11-9fe3-5e4eb84073dd","Type":"ContainerDied","Data":"770a9131111c3cee4caf747291749b5078f5b7b20e65cd17d29cbf9a22867a56"} Dec 09 10:10:49 crc kubenswrapper[4786]: I1209 10:10:49.308047 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7k59b" event={"ID":"b39664f0-5812-4f11-9fe3-5e4eb84073dd","Type":"ContainerDied","Data":"e8e5a6169c83f0094efdc8ed7cc1ba3f8b9048e1841db90ad2a40c02318c86d3"} Dec 09 10:10:49 crc kubenswrapper[4786]: I1209 10:10:49.308069 4786 scope.go:117] "RemoveContainer" containerID="770a9131111c3cee4caf747291749b5078f5b7b20e65cd17d29cbf9a22867a56" Dec 09 10:10:49 crc kubenswrapper[4786]: I1209 10:10:49.308214 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7k59b" Dec 09 10:10:49 crc kubenswrapper[4786]: I1209 10:10:49.332308 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7k59b"] Dec 09 10:10:49 crc kubenswrapper[4786]: I1209 10:10:49.337543 4786 scope.go:117] "RemoveContainer" containerID="0d5b525af87c4e88ea6fa438cd75f568b4d831e26b3aa2997a2304f0b0f06bcd" Dec 09 10:10:49 crc kubenswrapper[4786]: I1209 10:10:49.341196 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7k59b"] Dec 09 10:10:49 crc kubenswrapper[4786]: I1209 10:10:49.364666 4786 scope.go:117] "RemoveContainer" containerID="193a66837387fde495f5e9e7d353a2be59b6852b36dc52f42dc4e7ab6ba30f98" Dec 09 10:10:49 crc kubenswrapper[4786]: I1209 10:10:49.415582 4786 scope.go:117] "RemoveContainer" containerID="770a9131111c3cee4caf747291749b5078f5b7b20e65cd17d29cbf9a22867a56" Dec 09 10:10:49 crc kubenswrapper[4786]: E1209 10:10:49.416062 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"770a9131111c3cee4caf747291749b5078f5b7b20e65cd17d29cbf9a22867a56\": container with ID starting with 770a9131111c3cee4caf747291749b5078f5b7b20e65cd17d29cbf9a22867a56 not found: ID does not exist" containerID="770a9131111c3cee4caf747291749b5078f5b7b20e65cd17d29cbf9a22867a56" Dec 09 10:10:49 crc kubenswrapper[4786]: I1209 10:10:49.416095 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"770a9131111c3cee4caf747291749b5078f5b7b20e65cd17d29cbf9a22867a56"} err="failed to get container status \"770a9131111c3cee4caf747291749b5078f5b7b20e65cd17d29cbf9a22867a56\": rpc error: code = NotFound desc = could not find container \"770a9131111c3cee4caf747291749b5078f5b7b20e65cd17d29cbf9a22867a56\": container with ID starting with 770a9131111c3cee4caf747291749b5078f5b7b20e65cd17d29cbf9a22867a56 not found: ID does 
not exist" Dec 09 10:10:49 crc kubenswrapper[4786]: I1209 10:10:49.416116 4786 scope.go:117] "RemoveContainer" containerID="0d5b525af87c4e88ea6fa438cd75f568b4d831e26b3aa2997a2304f0b0f06bcd" Dec 09 10:10:49 crc kubenswrapper[4786]: E1209 10:10:49.416496 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d5b525af87c4e88ea6fa438cd75f568b4d831e26b3aa2997a2304f0b0f06bcd\": container with ID starting with 0d5b525af87c4e88ea6fa438cd75f568b4d831e26b3aa2997a2304f0b0f06bcd not found: ID does not exist" containerID="0d5b525af87c4e88ea6fa438cd75f568b4d831e26b3aa2997a2304f0b0f06bcd" Dec 09 10:10:49 crc kubenswrapper[4786]: I1209 10:10:49.416523 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d5b525af87c4e88ea6fa438cd75f568b4d831e26b3aa2997a2304f0b0f06bcd"} err="failed to get container status \"0d5b525af87c4e88ea6fa438cd75f568b4d831e26b3aa2997a2304f0b0f06bcd\": rpc error: code = NotFound desc = could not find container \"0d5b525af87c4e88ea6fa438cd75f568b4d831e26b3aa2997a2304f0b0f06bcd\": container with ID starting with 0d5b525af87c4e88ea6fa438cd75f568b4d831e26b3aa2997a2304f0b0f06bcd not found: ID does not exist" Dec 09 10:10:49 crc kubenswrapper[4786]: I1209 10:10:49.416538 4786 scope.go:117] "RemoveContainer" containerID="193a66837387fde495f5e9e7d353a2be59b6852b36dc52f42dc4e7ab6ba30f98" Dec 09 10:10:49 crc kubenswrapper[4786]: E1209 10:10:49.416830 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"193a66837387fde495f5e9e7d353a2be59b6852b36dc52f42dc4e7ab6ba30f98\": container with ID starting with 193a66837387fde495f5e9e7d353a2be59b6852b36dc52f42dc4e7ab6ba30f98 not found: ID does not exist" containerID="193a66837387fde495f5e9e7d353a2be59b6852b36dc52f42dc4e7ab6ba30f98" Dec 09 10:10:49 crc kubenswrapper[4786]: I1209 10:10:49.416851 4786 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"193a66837387fde495f5e9e7d353a2be59b6852b36dc52f42dc4e7ab6ba30f98"} err="failed to get container status \"193a66837387fde495f5e9e7d353a2be59b6852b36dc52f42dc4e7ab6ba30f98\": rpc error: code = NotFound desc = could not find container \"193a66837387fde495f5e9e7d353a2be59b6852b36dc52f42dc4e7ab6ba30f98\": container with ID starting with 193a66837387fde495f5e9e7d353a2be59b6852b36dc52f42dc4e7ab6ba30f98 not found: ID does not exist" Dec 09 10:10:51 crc kubenswrapper[4786]: I1209 10:10:51.199542 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b39664f0-5812-4f11-9fe3-5e4eb84073dd" path="/var/lib/kubelet/pods/b39664f0-5812-4f11-9fe3-5e4eb84073dd/volumes" Dec 09 10:10:54 crc kubenswrapper[4786]: I1209 10:10:54.988973 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 10:10:54 crc kubenswrapper[4786]: I1209 10:10:54.989691 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 10:11:24 crc kubenswrapper[4786]: I1209 10:11:24.988699 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 10:11:24 crc kubenswrapper[4786]: I1209 10:11:24.989356 4786 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 10:11:54 crc kubenswrapper[4786]: I1209 10:11:54.988299 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 10:11:54 crc kubenswrapper[4786]: I1209 10:11:54.988806 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 10:11:54 crc kubenswrapper[4786]: I1209 10:11:54.988880 4786 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" Dec 09 10:11:54 crc kubenswrapper[4786]: I1209 10:11:54.989782 4786 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"420ef4c2df3db8917971bbcc33178a356bf08ed5726eddcadbba4252772309f3"} pod="openshift-machine-config-operator/machine-config-daemon-86k5n" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 10:11:54 crc kubenswrapper[4786]: I1209 10:11:54.989838 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" 
containerID="cri-o://420ef4c2df3db8917971bbcc33178a356bf08ed5726eddcadbba4252772309f3" gracePeriod=600 Dec 09 10:11:55 crc kubenswrapper[4786]: E1209 10:11:55.135959 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:11:56 crc kubenswrapper[4786]: I1209 10:11:56.013076 4786 generic.go:334] "Generic (PLEG): container finished" podID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerID="420ef4c2df3db8917971bbcc33178a356bf08ed5726eddcadbba4252772309f3" exitCode=0 Dec 09 10:11:56 crc kubenswrapper[4786]: I1209 10:11:56.013134 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" event={"ID":"60c502d4-7f9e-4d39-a197-fa70dc4a56d1","Type":"ContainerDied","Data":"420ef4c2df3db8917971bbcc33178a356bf08ed5726eddcadbba4252772309f3"} Dec 09 10:11:56 crc kubenswrapper[4786]: I1209 10:11:56.013197 4786 scope.go:117] "RemoveContainer" containerID="870f681940df81ca036d7205c4724f03002e04f58bad8754206c5f32463b67b9" Dec 09 10:11:56 crc kubenswrapper[4786]: I1209 10:11:56.014107 4786 scope.go:117] "RemoveContainer" containerID="420ef4c2df3db8917971bbcc33178a356bf08ed5726eddcadbba4252772309f3" Dec 09 10:11:56 crc kubenswrapper[4786]: E1209 10:11:56.014686 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" 
podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:12:10 crc kubenswrapper[4786]: I1209 10:12:10.187827 4786 scope.go:117] "RemoveContainer" containerID="420ef4c2df3db8917971bbcc33178a356bf08ed5726eddcadbba4252772309f3" Dec 09 10:12:10 crc kubenswrapper[4786]: E1209 10:12:10.188878 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:12:23 crc kubenswrapper[4786]: I1209 10:12:23.189156 4786 scope.go:117] "RemoveContainer" containerID="420ef4c2df3db8917971bbcc33178a356bf08ed5726eddcadbba4252772309f3" Dec 09 10:12:23 crc kubenswrapper[4786]: E1209 10:12:23.190010 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:12:38 crc kubenswrapper[4786]: I1209 10:12:38.188391 4786 scope.go:117] "RemoveContainer" containerID="420ef4c2df3db8917971bbcc33178a356bf08ed5726eddcadbba4252772309f3" Dec 09 10:12:38 crc kubenswrapper[4786]: E1209 10:12:38.189835 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:12:50 crc kubenswrapper[4786]: I1209 10:12:50.188316 4786 scope.go:117] "RemoveContainer" containerID="420ef4c2df3db8917971bbcc33178a356bf08ed5726eddcadbba4252772309f3" Dec 09 10:12:50 crc kubenswrapper[4786]: E1209 10:12:50.189098 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:13:05 crc kubenswrapper[4786]: I1209 10:13:05.195520 4786 scope.go:117] "RemoveContainer" containerID="420ef4c2df3db8917971bbcc33178a356bf08ed5726eddcadbba4252772309f3" Dec 09 10:13:05 crc kubenswrapper[4786]: E1209 10:13:05.196290 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:13:16 crc kubenswrapper[4786]: I1209 10:13:16.188667 4786 scope.go:117] "RemoveContainer" containerID="420ef4c2df3db8917971bbcc33178a356bf08ed5726eddcadbba4252772309f3" Dec 09 10:13:16 crc kubenswrapper[4786]: E1209 10:13:16.190530 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:13:29 crc kubenswrapper[4786]: I1209 10:13:29.187868 4786 scope.go:117] "RemoveContainer" containerID="420ef4c2df3db8917971bbcc33178a356bf08ed5726eddcadbba4252772309f3" Dec 09 10:13:29 crc kubenswrapper[4786]: E1209 10:13:29.188671 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:13:29 crc kubenswrapper[4786]: I1209 10:13:29.394695 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5k7t8"] Dec 09 10:13:29 crc kubenswrapper[4786]: E1209 10:13:29.395376 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b39664f0-5812-4f11-9fe3-5e4eb84073dd" containerName="extract-utilities" Dec 09 10:13:29 crc kubenswrapper[4786]: I1209 10:13:29.395404 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="b39664f0-5812-4f11-9fe3-5e4eb84073dd" containerName="extract-utilities" Dec 09 10:13:29 crc kubenswrapper[4786]: E1209 10:13:29.395442 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b39664f0-5812-4f11-9fe3-5e4eb84073dd" containerName="extract-content" Dec 09 10:13:29 crc kubenswrapper[4786]: I1209 10:13:29.395451 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="b39664f0-5812-4f11-9fe3-5e4eb84073dd" containerName="extract-content" Dec 09 10:13:29 crc kubenswrapper[4786]: E1209 10:13:29.395465 4786 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b39664f0-5812-4f11-9fe3-5e4eb84073dd" containerName="registry-server" Dec 09 10:13:29 crc kubenswrapper[4786]: I1209 10:13:29.395473 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="b39664f0-5812-4f11-9fe3-5e4eb84073dd" containerName="registry-server" Dec 09 10:13:29 crc kubenswrapper[4786]: I1209 10:13:29.395725 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="b39664f0-5812-4f11-9fe3-5e4eb84073dd" containerName="registry-server" Dec 09 10:13:29 crc kubenswrapper[4786]: I1209 10:13:29.397653 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5k7t8" Dec 09 10:13:29 crc kubenswrapper[4786]: I1209 10:13:29.403547 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5k7t8"] Dec 09 10:13:29 crc kubenswrapper[4786]: I1209 10:13:29.563828 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwm4v\" (UniqueName: \"kubernetes.io/projected/c1ccc4c0-1c6f-4190-9c76-390da3b4ca46-kube-api-access-qwm4v\") pod \"community-operators-5k7t8\" (UID: \"c1ccc4c0-1c6f-4190-9c76-390da3b4ca46\") " pod="openshift-marketplace/community-operators-5k7t8" Dec 09 10:13:29 crc kubenswrapper[4786]: I1209 10:13:29.563881 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1ccc4c0-1c6f-4190-9c76-390da3b4ca46-utilities\") pod \"community-operators-5k7t8\" (UID: \"c1ccc4c0-1c6f-4190-9c76-390da3b4ca46\") " pod="openshift-marketplace/community-operators-5k7t8" Dec 09 10:13:29 crc kubenswrapper[4786]: I1209 10:13:29.563909 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1ccc4c0-1c6f-4190-9c76-390da3b4ca46-catalog-content\") pod \"community-operators-5k7t8\" (UID: 
\"c1ccc4c0-1c6f-4190-9c76-390da3b4ca46\") " pod="openshift-marketplace/community-operators-5k7t8" Dec 09 10:13:29 crc kubenswrapper[4786]: I1209 10:13:29.665625 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwm4v\" (UniqueName: \"kubernetes.io/projected/c1ccc4c0-1c6f-4190-9c76-390da3b4ca46-kube-api-access-qwm4v\") pod \"community-operators-5k7t8\" (UID: \"c1ccc4c0-1c6f-4190-9c76-390da3b4ca46\") " pod="openshift-marketplace/community-operators-5k7t8" Dec 09 10:13:29 crc kubenswrapper[4786]: I1209 10:13:29.666245 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1ccc4c0-1c6f-4190-9c76-390da3b4ca46-utilities\") pod \"community-operators-5k7t8\" (UID: \"c1ccc4c0-1c6f-4190-9c76-390da3b4ca46\") " pod="openshift-marketplace/community-operators-5k7t8" Dec 09 10:13:29 crc kubenswrapper[4786]: I1209 10:13:29.666362 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1ccc4c0-1c6f-4190-9c76-390da3b4ca46-catalog-content\") pod \"community-operators-5k7t8\" (UID: \"c1ccc4c0-1c6f-4190-9c76-390da3b4ca46\") " pod="openshift-marketplace/community-operators-5k7t8" Dec 09 10:13:29 crc kubenswrapper[4786]: I1209 10:13:29.667030 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1ccc4c0-1c6f-4190-9c76-390da3b4ca46-utilities\") pod \"community-operators-5k7t8\" (UID: \"c1ccc4c0-1c6f-4190-9c76-390da3b4ca46\") " pod="openshift-marketplace/community-operators-5k7t8" Dec 09 10:13:29 crc kubenswrapper[4786]: I1209 10:13:29.667076 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1ccc4c0-1c6f-4190-9c76-390da3b4ca46-catalog-content\") pod \"community-operators-5k7t8\" (UID: \"c1ccc4c0-1c6f-4190-9c76-390da3b4ca46\") 
" pod="openshift-marketplace/community-operators-5k7t8" Dec 09 10:13:29 crc kubenswrapper[4786]: I1209 10:13:29.696919 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwm4v\" (UniqueName: \"kubernetes.io/projected/c1ccc4c0-1c6f-4190-9c76-390da3b4ca46-kube-api-access-qwm4v\") pod \"community-operators-5k7t8\" (UID: \"c1ccc4c0-1c6f-4190-9c76-390da3b4ca46\") " pod="openshift-marketplace/community-operators-5k7t8" Dec 09 10:13:29 crc kubenswrapper[4786]: I1209 10:13:29.725820 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5k7t8" Dec 09 10:13:30 crc kubenswrapper[4786]: I1209 10:13:30.353691 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5k7t8"] Dec 09 10:13:30 crc kubenswrapper[4786]: I1209 10:13:30.958582 4786 generic.go:334] "Generic (PLEG): container finished" podID="c1ccc4c0-1c6f-4190-9c76-390da3b4ca46" containerID="4bb417aeb9a20e88accb95fc323c21afd75192fcd8790e32fc04141ab84bb750" exitCode=0 Dec 09 10:13:30 crc kubenswrapper[4786]: I1209 10:13:30.958717 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5k7t8" event={"ID":"c1ccc4c0-1c6f-4190-9c76-390da3b4ca46","Type":"ContainerDied","Data":"4bb417aeb9a20e88accb95fc323c21afd75192fcd8790e32fc04141ab84bb750"} Dec 09 10:13:30 crc kubenswrapper[4786]: I1209 10:13:30.959005 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5k7t8" event={"ID":"c1ccc4c0-1c6f-4190-9c76-390da3b4ca46","Type":"ContainerStarted","Data":"4d19b1e805f767e4e18acf742a33464ea50a72d798a844684087bc13a4703ff4"} Dec 09 10:13:30 crc kubenswrapper[4786]: I1209 10:13:30.962754 4786 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 10:13:31 crc kubenswrapper[4786]: I1209 10:13:31.973531 4786 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-5k7t8" event={"ID":"c1ccc4c0-1c6f-4190-9c76-390da3b4ca46","Type":"ContainerStarted","Data":"1a808802e780bace5c0ce57476bd97f85744d7996544ebe69955520f246efea2"} Dec 09 10:13:32 crc kubenswrapper[4786]: I1209 10:13:32.991984 4786 generic.go:334] "Generic (PLEG): container finished" podID="c1ccc4c0-1c6f-4190-9c76-390da3b4ca46" containerID="1a808802e780bace5c0ce57476bd97f85744d7996544ebe69955520f246efea2" exitCode=0 Dec 09 10:13:32 crc kubenswrapper[4786]: I1209 10:13:32.992060 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5k7t8" event={"ID":"c1ccc4c0-1c6f-4190-9c76-390da3b4ca46","Type":"ContainerDied","Data":"1a808802e780bace5c0ce57476bd97f85744d7996544ebe69955520f246efea2"} Dec 09 10:13:34 crc kubenswrapper[4786]: I1209 10:13:34.006095 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5k7t8" event={"ID":"c1ccc4c0-1c6f-4190-9c76-390da3b4ca46","Type":"ContainerStarted","Data":"0c555d39a9090d79beb5d5ddacb88b3b95786b64ce720dadfb2c409dfedc65b9"} Dec 09 10:13:34 crc kubenswrapper[4786]: I1209 10:13:34.029750 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5k7t8" podStartSLOduration=2.525831805 podStartE2EDuration="5.02972366s" podCreationTimestamp="2025-12-09 10:13:29 +0000 UTC" firstStartedPulling="2025-12-09 10:13:30.962469366 +0000 UTC m=+5376.846090592" lastFinishedPulling="2025-12-09 10:13:33.466361221 +0000 UTC m=+5379.349982447" observedRunningTime="2025-12-09 10:13:34.022706088 +0000 UTC m=+5379.906327314" watchObservedRunningTime="2025-12-09 10:13:34.02972366 +0000 UTC m=+5379.913344886" Dec 09 10:13:39 crc kubenswrapper[4786]: I1209 10:13:39.726704 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5k7t8" Dec 09 10:13:39 crc kubenswrapper[4786]: I1209 10:13:39.727519 
4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5k7t8"
Dec 09 10:13:39 crc kubenswrapper[4786]: I1209 10:13:39.777612 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5k7t8"
Dec 09 10:13:40 crc kubenswrapper[4786]: I1209 10:13:40.120989 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5k7t8"
Dec 09 10:13:40 crc kubenswrapper[4786]: I1209 10:13:40.174987 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5k7t8"]
Dec 09 10:13:42 crc kubenswrapper[4786]: I1209 10:13:42.089037 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5k7t8" podUID="c1ccc4c0-1c6f-4190-9c76-390da3b4ca46" containerName="registry-server" containerID="cri-o://0c555d39a9090d79beb5d5ddacb88b3b95786b64ce720dadfb2c409dfedc65b9" gracePeriod=2
Dec 09 10:13:42 crc kubenswrapper[4786]: I1209 10:13:42.188968 4786 scope.go:117] "RemoveContainer" containerID="420ef4c2df3db8917971bbcc33178a356bf08ed5726eddcadbba4252772309f3"
Dec 09 10:13:42 crc kubenswrapper[4786]: E1209 10:13:42.189360 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1"
Dec 09 10:13:43 crc kubenswrapper[4786]: I1209 10:13:43.097713 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5k7t8"
Dec 09 10:13:43 crc kubenswrapper[4786]: I1209 10:13:43.106263 4786 generic.go:334] "Generic (PLEG): container finished" podID="c1ccc4c0-1c6f-4190-9c76-390da3b4ca46" containerID="0c555d39a9090d79beb5d5ddacb88b3b95786b64ce720dadfb2c409dfedc65b9" exitCode=0
Dec 09 10:13:43 crc kubenswrapper[4786]: I1209 10:13:43.106354 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5k7t8"
Dec 09 10:13:43 crc kubenswrapper[4786]: I1209 10:13:43.106342 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5k7t8" event={"ID":"c1ccc4c0-1c6f-4190-9c76-390da3b4ca46","Type":"ContainerDied","Data":"0c555d39a9090d79beb5d5ddacb88b3b95786b64ce720dadfb2c409dfedc65b9"}
Dec 09 10:13:43 crc kubenswrapper[4786]: I1209 10:13:43.106486 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5k7t8" event={"ID":"c1ccc4c0-1c6f-4190-9c76-390da3b4ca46","Type":"ContainerDied","Data":"4d19b1e805f767e4e18acf742a33464ea50a72d798a844684087bc13a4703ff4"}
Dec 09 10:13:43 crc kubenswrapper[4786]: I1209 10:13:43.106517 4786 scope.go:117] "RemoveContainer" containerID="0c555d39a9090d79beb5d5ddacb88b3b95786b64ce720dadfb2c409dfedc65b9"
Dec 09 10:13:43 crc kubenswrapper[4786]: I1209 10:13:43.137778 4786 scope.go:117] "RemoveContainer" containerID="1a808802e780bace5c0ce57476bd97f85744d7996544ebe69955520f246efea2"
Dec 09 10:13:43 crc kubenswrapper[4786]: I1209 10:13:43.173500 4786 scope.go:117] "RemoveContainer" containerID="4bb417aeb9a20e88accb95fc323c21afd75192fcd8790e32fc04141ab84bb750"
Dec 09 10:13:43 crc kubenswrapper[4786]: I1209 10:13:43.184611 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1ccc4c0-1c6f-4190-9c76-390da3b4ca46-utilities\") pod \"c1ccc4c0-1c6f-4190-9c76-390da3b4ca46\" (UID: \"c1ccc4c0-1c6f-4190-9c76-390da3b4ca46\") "
Dec 09 10:13:43 crc kubenswrapper[4786]: I1209 10:13:43.184857 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwm4v\" (UniqueName: \"kubernetes.io/projected/c1ccc4c0-1c6f-4190-9c76-390da3b4ca46-kube-api-access-qwm4v\") pod \"c1ccc4c0-1c6f-4190-9c76-390da3b4ca46\" (UID: \"c1ccc4c0-1c6f-4190-9c76-390da3b4ca46\") "
Dec 09 10:13:43 crc kubenswrapper[4786]: I1209 10:13:43.184890 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1ccc4c0-1c6f-4190-9c76-390da3b4ca46-catalog-content\") pod \"c1ccc4c0-1c6f-4190-9c76-390da3b4ca46\" (UID: \"c1ccc4c0-1c6f-4190-9c76-390da3b4ca46\") "
Dec 09 10:13:43 crc kubenswrapper[4786]: I1209 10:13:43.187705 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1ccc4c0-1c6f-4190-9c76-390da3b4ca46-utilities" (OuterVolumeSpecName: "utilities") pod "c1ccc4c0-1c6f-4190-9c76-390da3b4ca46" (UID: "c1ccc4c0-1c6f-4190-9c76-390da3b4ca46"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 10:13:43 crc kubenswrapper[4786]: I1209 10:13:43.198735 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1ccc4c0-1c6f-4190-9c76-390da3b4ca46-kube-api-access-qwm4v" (OuterVolumeSpecName: "kube-api-access-qwm4v") pod "c1ccc4c0-1c6f-4190-9c76-390da3b4ca46" (UID: "c1ccc4c0-1c6f-4190-9c76-390da3b4ca46"). InnerVolumeSpecName "kube-api-access-qwm4v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 10:13:43 crc kubenswrapper[4786]: I1209 10:13:43.255073 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c1ccc4c0-1c6f-4190-9c76-390da3b4ca46-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c1ccc4c0-1c6f-4190-9c76-390da3b4ca46" (UID: "c1ccc4c0-1c6f-4190-9c76-390da3b4ca46"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 10:13:43 crc kubenswrapper[4786]: I1209 10:13:43.269266 4786 scope.go:117] "RemoveContainer" containerID="0c555d39a9090d79beb5d5ddacb88b3b95786b64ce720dadfb2c409dfedc65b9"
Dec 09 10:13:43 crc kubenswrapper[4786]: E1209 10:13:43.270098 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c555d39a9090d79beb5d5ddacb88b3b95786b64ce720dadfb2c409dfedc65b9\": container with ID starting with 0c555d39a9090d79beb5d5ddacb88b3b95786b64ce720dadfb2c409dfedc65b9 not found: ID does not exist" containerID="0c555d39a9090d79beb5d5ddacb88b3b95786b64ce720dadfb2c409dfedc65b9"
Dec 09 10:13:43 crc kubenswrapper[4786]: I1209 10:13:43.270147 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c555d39a9090d79beb5d5ddacb88b3b95786b64ce720dadfb2c409dfedc65b9"} err="failed to get container status \"0c555d39a9090d79beb5d5ddacb88b3b95786b64ce720dadfb2c409dfedc65b9\": rpc error: code = NotFound desc = could not find container \"0c555d39a9090d79beb5d5ddacb88b3b95786b64ce720dadfb2c409dfedc65b9\": container with ID starting with 0c555d39a9090d79beb5d5ddacb88b3b95786b64ce720dadfb2c409dfedc65b9 not found: ID does not exist"
Dec 09 10:13:43 crc kubenswrapper[4786]: I1209 10:13:43.270179 4786 scope.go:117] "RemoveContainer" containerID="1a808802e780bace5c0ce57476bd97f85744d7996544ebe69955520f246efea2"
Dec 09 10:13:43 crc kubenswrapper[4786]: E1209 10:13:43.270858 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a808802e780bace5c0ce57476bd97f85744d7996544ebe69955520f246efea2\": container with ID starting with 1a808802e780bace5c0ce57476bd97f85744d7996544ebe69955520f246efea2 not found: ID does not exist" containerID="1a808802e780bace5c0ce57476bd97f85744d7996544ebe69955520f246efea2"
Dec 09 10:13:43 crc kubenswrapper[4786]: I1209 10:13:43.270891 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a808802e780bace5c0ce57476bd97f85744d7996544ebe69955520f246efea2"} err="failed to get container status \"1a808802e780bace5c0ce57476bd97f85744d7996544ebe69955520f246efea2\": rpc error: code = NotFound desc = could not find container \"1a808802e780bace5c0ce57476bd97f85744d7996544ebe69955520f246efea2\": container with ID starting with 1a808802e780bace5c0ce57476bd97f85744d7996544ebe69955520f246efea2 not found: ID does not exist"
Dec 09 10:13:43 crc kubenswrapper[4786]: I1209 10:13:43.270911 4786 scope.go:117] "RemoveContainer" containerID="4bb417aeb9a20e88accb95fc323c21afd75192fcd8790e32fc04141ab84bb750"
Dec 09 10:13:43 crc kubenswrapper[4786]: E1209 10:13:43.271344 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bb417aeb9a20e88accb95fc323c21afd75192fcd8790e32fc04141ab84bb750\": container with ID starting with 4bb417aeb9a20e88accb95fc323c21afd75192fcd8790e32fc04141ab84bb750 not found: ID does not exist" containerID="4bb417aeb9a20e88accb95fc323c21afd75192fcd8790e32fc04141ab84bb750"
Dec 09 10:13:43 crc kubenswrapper[4786]: I1209 10:13:43.271581 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bb417aeb9a20e88accb95fc323c21afd75192fcd8790e32fc04141ab84bb750"} err="failed to get container status \"4bb417aeb9a20e88accb95fc323c21afd75192fcd8790e32fc04141ab84bb750\": rpc error: code = NotFound desc = could not find container \"4bb417aeb9a20e88accb95fc323c21afd75192fcd8790e32fc04141ab84bb750\": container with ID starting with 4bb417aeb9a20e88accb95fc323c21afd75192fcd8790e32fc04141ab84bb750 not found: ID does not exist"
Dec 09 10:13:43 crc kubenswrapper[4786]: I1209 10:13:43.289310 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c1ccc4c0-1c6f-4190-9c76-390da3b4ca46-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 09 10:13:43 crc kubenswrapper[4786]: I1209 10:13:43.289584 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c1ccc4c0-1c6f-4190-9c76-390da3b4ca46-utilities\") on node \"crc\" DevicePath \"\""
Dec 09 10:13:43 crc kubenswrapper[4786]: I1209 10:13:43.289653 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwm4v\" (UniqueName: \"kubernetes.io/projected/c1ccc4c0-1c6f-4190-9c76-390da3b4ca46-kube-api-access-qwm4v\") on node \"crc\" DevicePath \"\""
Dec 09 10:13:43 crc kubenswrapper[4786]: I1209 10:13:43.445029 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5k7t8"]
Dec 09 10:13:43 crc kubenswrapper[4786]: I1209 10:13:43.455628 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5k7t8"]
Dec 09 10:13:45 crc kubenswrapper[4786]: I1209 10:13:45.205474 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1ccc4c0-1c6f-4190-9c76-390da3b4ca46" path="/var/lib/kubelet/pods/c1ccc4c0-1c6f-4190-9c76-390da3b4ca46/volumes"
Dec 09 10:13:55 crc kubenswrapper[4786]: I1209 10:13:55.202724 4786 scope.go:117] "RemoveContainer" containerID="420ef4c2df3db8917971bbcc33178a356bf08ed5726eddcadbba4252772309f3"
Dec 09 10:13:55 crc kubenswrapper[4786]: E1209 10:13:55.203766 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1"
Dec 09 10:14:10 crc kubenswrapper[4786]: I1209 10:14:10.188648 4786 scope.go:117] "RemoveContainer" containerID="420ef4c2df3db8917971bbcc33178a356bf08ed5726eddcadbba4252772309f3"
Dec 09 10:14:10 crc kubenswrapper[4786]: E1209 10:14:10.189581 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1"
Dec 09 10:14:22 crc kubenswrapper[4786]: I1209 10:14:22.187996 4786 scope.go:117] "RemoveContainer" containerID="420ef4c2df3db8917971bbcc33178a356bf08ed5726eddcadbba4252772309f3"
Dec 09 10:14:22 crc kubenswrapper[4786]: E1209 10:14:22.188796 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1"
Dec 09 10:14:34 crc kubenswrapper[4786]: I1209 10:14:34.188587 4786 scope.go:117] "RemoveContainer" containerID="420ef4c2df3db8917971bbcc33178a356bf08ed5726eddcadbba4252772309f3"
Dec 09 10:14:34 crc kubenswrapper[4786]: E1209 10:14:34.189348 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1"
Dec 09 10:14:49 crc kubenswrapper[4786]: I1209 10:14:49.188520 4786 scope.go:117] "RemoveContainer" containerID="420ef4c2df3db8917971bbcc33178a356bf08ed5726eddcadbba4252772309f3"
Dec 09 10:14:49 crc kubenswrapper[4786]: E1209 10:14:49.189859 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1"
Dec 09 10:15:00 crc kubenswrapper[4786]: I1209 10:15:00.147848 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421255-c2gs5"]
Dec 09 10:15:00 crc kubenswrapper[4786]: E1209 10:15:00.148907 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1ccc4c0-1c6f-4190-9c76-390da3b4ca46" containerName="registry-server"
Dec 09 10:15:00 crc kubenswrapper[4786]: I1209 10:15:00.148926 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1ccc4c0-1c6f-4190-9c76-390da3b4ca46" containerName="registry-server"
Dec 09 10:15:00 crc kubenswrapper[4786]: E1209 10:15:00.148951 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1ccc4c0-1c6f-4190-9c76-390da3b4ca46" containerName="extract-content"
Dec 09 10:15:00 crc kubenswrapper[4786]: I1209 10:15:00.148959 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1ccc4c0-1c6f-4190-9c76-390da3b4ca46" containerName="extract-content"
Dec 09 10:15:00 crc kubenswrapper[4786]: E1209 10:15:00.148998 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1ccc4c0-1c6f-4190-9c76-390da3b4ca46" containerName="extract-utilities"
Dec 09 10:15:00 crc kubenswrapper[4786]: I1209 10:15:00.149007 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1ccc4c0-1c6f-4190-9c76-390da3b4ca46" containerName="extract-utilities"
Dec 09 10:15:00 crc kubenswrapper[4786]: I1209 10:15:00.149258 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1ccc4c0-1c6f-4190-9c76-390da3b4ca46" containerName="registry-server"
Dec 09 10:15:00 crc kubenswrapper[4786]: I1209 10:15:00.150297 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421255-c2gs5"
Dec 09 10:15:00 crc kubenswrapper[4786]: I1209 10:15:00.152706 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Dec 09 10:15:00 crc kubenswrapper[4786]: I1209 10:15:00.152811 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Dec 09 10:15:00 crc kubenswrapper[4786]: I1209 10:15:00.172071 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421255-c2gs5"]
Dec 09 10:15:00 crc kubenswrapper[4786]: I1209 10:15:00.349394 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d4254aed-08d1-470d-8a85-7dcac5defbb9-secret-volume\") pod \"collect-profiles-29421255-c2gs5\" (UID: \"d4254aed-08d1-470d-8a85-7dcac5defbb9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421255-c2gs5"
Dec 09 10:15:00 crc kubenswrapper[4786]: I1209 10:15:00.349482 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlz7z\" (UniqueName: \"kubernetes.io/projected/d4254aed-08d1-470d-8a85-7dcac5defbb9-kube-api-access-mlz7z\") pod \"collect-profiles-29421255-c2gs5\" (UID: \"d4254aed-08d1-470d-8a85-7dcac5defbb9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421255-c2gs5"
Dec 09 10:15:00 crc kubenswrapper[4786]: I1209 10:15:00.349559 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4254aed-08d1-470d-8a85-7dcac5defbb9-config-volume\") pod \"collect-profiles-29421255-c2gs5\" (UID: \"d4254aed-08d1-470d-8a85-7dcac5defbb9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421255-c2gs5"
Dec 09 10:15:00 crc kubenswrapper[4786]: I1209 10:15:00.452167 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d4254aed-08d1-470d-8a85-7dcac5defbb9-secret-volume\") pod \"collect-profiles-29421255-c2gs5\" (UID: \"d4254aed-08d1-470d-8a85-7dcac5defbb9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421255-c2gs5"
Dec 09 10:15:00 crc kubenswrapper[4786]: I1209 10:15:00.452221 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlz7z\" (UniqueName: \"kubernetes.io/projected/d4254aed-08d1-470d-8a85-7dcac5defbb9-kube-api-access-mlz7z\") pod \"collect-profiles-29421255-c2gs5\" (UID: \"d4254aed-08d1-470d-8a85-7dcac5defbb9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421255-c2gs5"
Dec 09 10:15:00 crc kubenswrapper[4786]: I1209 10:15:00.452291 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4254aed-08d1-470d-8a85-7dcac5defbb9-config-volume\") pod \"collect-profiles-29421255-c2gs5\" (UID: \"d4254aed-08d1-470d-8a85-7dcac5defbb9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421255-c2gs5"
Dec 09 10:15:00 crc kubenswrapper[4786]: I1209 10:15:00.453822 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4254aed-08d1-470d-8a85-7dcac5defbb9-config-volume\") pod \"collect-profiles-29421255-c2gs5\" (UID: \"d4254aed-08d1-470d-8a85-7dcac5defbb9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421255-c2gs5"
Dec 09 10:15:00 crc kubenswrapper[4786]: I1209 10:15:00.459414 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d4254aed-08d1-470d-8a85-7dcac5defbb9-secret-volume\") pod \"collect-profiles-29421255-c2gs5\" (UID: \"d4254aed-08d1-470d-8a85-7dcac5defbb9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421255-c2gs5"
Dec 09 10:15:00 crc kubenswrapper[4786]: I1209 10:15:00.469852 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlz7z\" (UniqueName: \"kubernetes.io/projected/d4254aed-08d1-470d-8a85-7dcac5defbb9-kube-api-access-mlz7z\") pod \"collect-profiles-29421255-c2gs5\" (UID: \"d4254aed-08d1-470d-8a85-7dcac5defbb9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421255-c2gs5"
Dec 09 10:15:00 crc kubenswrapper[4786]: I1209 10:15:00.507194 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421255-c2gs5"
Dec 09 10:15:00 crc kubenswrapper[4786]: I1209 10:15:00.961407 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421255-c2gs5"]
Dec 09 10:15:01 crc kubenswrapper[4786]: I1209 10:15:01.885299 4786 generic.go:334] "Generic (PLEG): container finished" podID="d4254aed-08d1-470d-8a85-7dcac5defbb9" containerID="0af472c5e0b2addc3eb0c80ee1704b1a56a8a388e8a70e73e4f59810072ebd0c" exitCode=0
Dec 09 10:15:01 crc kubenswrapper[4786]: I1209 10:15:01.885380 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421255-c2gs5" event={"ID":"d4254aed-08d1-470d-8a85-7dcac5defbb9","Type":"ContainerDied","Data":"0af472c5e0b2addc3eb0c80ee1704b1a56a8a388e8a70e73e4f59810072ebd0c"}
Dec 09 10:15:01 crc kubenswrapper[4786]: I1209 10:15:01.885749 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421255-c2gs5" event={"ID":"d4254aed-08d1-470d-8a85-7dcac5defbb9","Type":"ContainerStarted","Data":"249fa643e1eae46e2d1443bb2d5215305be5cf4d98411bfa2ebfdb0f1a911aff"}
Dec 09 10:15:03 crc kubenswrapper[4786]: I1209 10:15:03.237761 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421255-c2gs5"
Dec 09 10:15:03 crc kubenswrapper[4786]: I1209 10:15:03.418525 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4254aed-08d1-470d-8a85-7dcac5defbb9-config-volume\") pod \"d4254aed-08d1-470d-8a85-7dcac5defbb9\" (UID: \"d4254aed-08d1-470d-8a85-7dcac5defbb9\") "
Dec 09 10:15:03 crc kubenswrapper[4786]: I1209 10:15:03.418638 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mlz7z\" (UniqueName: \"kubernetes.io/projected/d4254aed-08d1-470d-8a85-7dcac5defbb9-kube-api-access-mlz7z\") pod \"d4254aed-08d1-470d-8a85-7dcac5defbb9\" (UID: \"d4254aed-08d1-470d-8a85-7dcac5defbb9\") "
Dec 09 10:15:03 crc kubenswrapper[4786]: I1209 10:15:03.418812 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d4254aed-08d1-470d-8a85-7dcac5defbb9-secret-volume\") pod \"d4254aed-08d1-470d-8a85-7dcac5defbb9\" (UID: \"d4254aed-08d1-470d-8a85-7dcac5defbb9\") "
Dec 09 10:15:03 crc kubenswrapper[4786]: I1209 10:15:03.418995 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d4254aed-08d1-470d-8a85-7dcac5defbb9-config-volume" (OuterVolumeSpecName: "config-volume") pod "d4254aed-08d1-470d-8a85-7dcac5defbb9" (UID: "d4254aed-08d1-470d-8a85-7dcac5defbb9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 10:15:03 crc kubenswrapper[4786]: I1209 10:15:03.419252 4786 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4254aed-08d1-470d-8a85-7dcac5defbb9-config-volume\") on node \"crc\" DevicePath \"\""
Dec 09 10:15:03 crc kubenswrapper[4786]: I1209 10:15:03.424672 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d4254aed-08d1-470d-8a85-7dcac5defbb9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d4254aed-08d1-470d-8a85-7dcac5defbb9" (UID: "d4254aed-08d1-470d-8a85-7dcac5defbb9"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 09 10:15:03 crc kubenswrapper[4786]: I1209 10:15:03.425278 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4254aed-08d1-470d-8a85-7dcac5defbb9-kube-api-access-mlz7z" (OuterVolumeSpecName: "kube-api-access-mlz7z") pod "d4254aed-08d1-470d-8a85-7dcac5defbb9" (UID: "d4254aed-08d1-470d-8a85-7dcac5defbb9"). InnerVolumeSpecName "kube-api-access-mlz7z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 09 10:15:03 crc kubenswrapper[4786]: I1209 10:15:03.521527 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mlz7z\" (UniqueName: \"kubernetes.io/projected/d4254aed-08d1-470d-8a85-7dcac5defbb9-kube-api-access-mlz7z\") on node \"crc\" DevicePath \"\""
Dec 09 10:15:03 crc kubenswrapper[4786]: I1209 10:15:03.521831 4786 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d4254aed-08d1-470d-8a85-7dcac5defbb9-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 09 10:15:03 crc kubenswrapper[4786]: I1209 10:15:03.904096 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421255-c2gs5" event={"ID":"d4254aed-08d1-470d-8a85-7dcac5defbb9","Type":"ContainerDied","Data":"249fa643e1eae46e2d1443bb2d5215305be5cf4d98411bfa2ebfdb0f1a911aff"}
Dec 09 10:15:03 crc kubenswrapper[4786]: I1209 10:15:03.904140 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="249fa643e1eae46e2d1443bb2d5215305be5cf4d98411bfa2ebfdb0f1a911aff"
Dec 09 10:15:03 crc kubenswrapper[4786]: I1209 10:15:03.904533 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421255-c2gs5"
Dec 09 10:15:04 crc kubenswrapper[4786]: I1209 10:15:04.189195 4786 scope.go:117] "RemoveContainer" containerID="420ef4c2df3db8917971bbcc33178a356bf08ed5726eddcadbba4252772309f3"
Dec 09 10:15:04 crc kubenswrapper[4786]: E1209 10:15:04.189568 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1"
Dec 09 10:15:04 crc kubenswrapper[4786]: I1209 10:15:04.368309 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421210-fnxp2"]
Dec 09 10:15:04 crc kubenswrapper[4786]: I1209 10:15:04.384312 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421210-fnxp2"]
Dec 09 10:15:05 crc kubenswrapper[4786]: I1209 10:15:05.200594 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="774462f3-ab4d-46e2-9966-6d8752fcaa46" path="/var/lib/kubelet/pods/774462f3-ab4d-46e2-9966-6d8752fcaa46/volumes"
Dec 09 10:15:06 crc kubenswrapper[4786]: I1209 10:15:06.422853 4786 scope.go:117] "RemoveContainer" containerID="1450ac8980307122aad29093453ee6c32dbb0673d11850bcf18f3000197d86ab"
Dec 09 10:15:17 crc kubenswrapper[4786]: I1209 10:15:17.188170 4786 scope.go:117] "RemoveContainer" containerID="420ef4c2df3db8917971bbcc33178a356bf08ed5726eddcadbba4252772309f3"
Dec 09 10:15:17 crc kubenswrapper[4786]: E1209 10:15:17.189035 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1"
Dec 09 10:15:29 crc kubenswrapper[4786]: I1209 10:15:29.190113 4786 scope.go:117] "RemoveContainer" containerID="420ef4c2df3db8917971bbcc33178a356bf08ed5726eddcadbba4252772309f3"
Dec 09 10:15:29 crc kubenswrapper[4786]: E1209 10:15:29.191086 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1"
Dec 09 10:15:43 crc kubenswrapper[4786]: I1209 10:15:43.189047 4786 scope.go:117] "RemoveContainer" containerID="420ef4c2df3db8917971bbcc33178a356bf08ed5726eddcadbba4252772309f3"
Dec 09 10:15:43 crc kubenswrapper[4786]: E1209 10:15:43.189924 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1"
Dec 09 10:15:56 crc kubenswrapper[4786]: I1209 10:15:56.188940 4786 scope.go:117] "RemoveContainer" containerID="420ef4c2df3db8917971bbcc33178a356bf08ed5726eddcadbba4252772309f3"
Dec 09 10:15:56 crc kubenswrapper[4786]: E1209 10:15:56.190006 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1"
Dec 09 10:16:10 crc kubenswrapper[4786]: I1209 10:16:10.188335 4786 scope.go:117] "RemoveContainer" containerID="420ef4c2df3db8917971bbcc33178a356bf08ed5726eddcadbba4252772309f3"
Dec 09 10:16:10 crc kubenswrapper[4786]: E1209 10:16:10.189194 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1"
Dec 09 10:16:23 crc kubenswrapper[4786]: I1209 10:16:23.189196 4786 scope.go:117] "RemoveContainer" containerID="420ef4c2df3db8917971bbcc33178a356bf08ed5726eddcadbba4252772309f3"
Dec 09 10:16:23 crc kubenswrapper[4786]: E1209 10:16:23.190008 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1"
Dec 09 10:16:35 crc kubenswrapper[4786]: I1209 10:16:35.198470 4786 scope.go:117] "RemoveContainer" containerID="420ef4c2df3db8917971bbcc33178a356bf08ed5726eddcadbba4252772309f3"
Dec 09 10:16:35 crc kubenswrapper[4786]: E1209 10:16:35.199290 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1"
Dec 09 10:16:46 crc kubenswrapper[4786]: I1209 10:16:46.188009 4786 scope.go:117] "RemoveContainer" containerID="420ef4c2df3db8917971bbcc33178a356bf08ed5726eddcadbba4252772309f3"
Dec 09 10:16:46 crc kubenswrapper[4786]: E1209 10:16:46.189003 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1"
Dec 09 10:16:59 crc kubenswrapper[4786]: I1209 10:16:59.188394 4786 scope.go:117] "RemoveContainer" containerID="420ef4c2df3db8917971bbcc33178a356bf08ed5726eddcadbba4252772309f3"
Dec 09 10:17:00 crc kubenswrapper[4786]: I1209 10:17:00.385271 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" event={"ID":"60c502d4-7f9e-4d39-a197-fa70dc4a56d1","Type":"ContainerStarted","Data":"f0db13fa3b4c8b38759dcf670bd52efbefab22fe875583a9fcc3f80bfcbf0a33"}
Dec 09 10:17:04 crc kubenswrapper[4786]: I1209 10:17:04.421698 4786 generic.go:334] "Generic (PLEG): container finished" podID="0112bf44-5116-4b72-a860-4fc091e5dc27" containerID="ffef5b08f2f2912c6df065eb6958d26c86bf33406b2579f3912d5138d4c81264" exitCode=0
Dec 09 10:17:04 crc kubenswrapper[4786]: I1209 10:17:04.421794 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"0112bf44-5116-4b72-a860-4fc091e5dc27","Type":"ContainerDied","Data":"ffef5b08f2f2912c6df065eb6958d26c86bf33406b2579f3912d5138d4c81264"}
Dec 09 10:17:05 crc kubenswrapper[4786]: I1209 10:17:05.845329 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Dec 09 10:17:05 crc kubenswrapper[4786]: I1209 10:17:05.945058 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0112bf44-5116-4b72-a860-4fc091e5dc27-ssh-key\") pod \"0112bf44-5116-4b72-a860-4fc091e5dc27\" (UID: \"0112bf44-5116-4b72-a860-4fc091e5dc27\") "
Dec 09 10:17:05 crc kubenswrapper[4786]: I1209 10:17:05.945189 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/0112bf44-5116-4b72-a860-4fc091e5dc27-test-operator-ephemeral-workdir\") pod \"0112bf44-5116-4b72-a860-4fc091e5dc27\" (UID: \"0112bf44-5116-4b72-a860-4fc091e5dc27\") "
Dec 09 10:17:05 crc kubenswrapper[4786]: I1209 10:17:05.945273 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/0112bf44-5116-4b72-a860-4fc091e5dc27-test-operator-ephemeral-temporary\") pod \"0112bf44-5116-4b72-a860-4fc091e5dc27\" (UID: \"0112bf44-5116-4b72-a860-4fc091e5dc27\") "
Dec 09 10:17:05 crc kubenswrapper[4786]: I1209 10:17:05.945442 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/0112bf44-5116-4b72-a860-4fc091e5dc27-ca-certs\") pod \"0112bf44-5116-4b72-a860-4fc091e5dc27\" (UID: \"0112bf44-5116-4b72-a860-4fc091e5dc27\") "
Dec 09 10:17:05 crc kubenswrapper[4786]: I1209 10:17:05.945496 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kdk5s\" (UniqueName: \"kubernetes.io/projected/0112bf44-5116-4b72-a860-4fc091e5dc27-kube-api-access-kdk5s\") pod \"0112bf44-5116-4b72-a860-4fc091e5dc27\" (UID: \"0112bf44-5116-4b72-a860-4fc091e5dc27\") "
Dec 09 10:17:05 crc kubenswrapper[4786]: I1209 10:17:05.945570 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0112bf44-5116-4b72-a860-4fc091e5dc27-openstack-config\") pod \"0112bf44-5116-4b72-a860-4fc091e5dc27\" (UID: \"0112bf44-5116-4b72-a860-4fc091e5dc27\") "
Dec 09 10:17:05 crc kubenswrapper[4786]: I1209 10:17:05.945647 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"0112bf44-5116-4b72-a860-4fc091e5dc27\" (UID: \"0112bf44-5116-4b72-a860-4fc091e5dc27\") "
Dec 09 10:17:05 crc kubenswrapper[4786]: I1209 10:17:05.945737 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0112bf44-5116-4b72-a860-4fc091e5dc27-openstack-config-secret\") pod \"0112bf44-5116-4b72-a860-4fc091e5dc27\" (UID: \"0112bf44-5116-4b72-a860-4fc091e5dc27\") "
Dec 09 10:17:05 crc kubenswrapper[4786]: I1209 10:17:05.945788 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0112bf44-5116-4b72-a860-4fc091e5dc27-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "0112bf44-5116-4b72-a860-4fc091e5dc27" (UID: "0112bf44-5116-4b72-a860-4fc091e5dc27"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 10:17:05 crc kubenswrapper[4786]: I1209 10:17:05.945820 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0112bf44-5116-4b72-a860-4fc091e5dc27-config-data\") pod \"0112bf44-5116-4b72-a860-4fc091e5dc27\" (UID: \"0112bf44-5116-4b72-a860-4fc091e5dc27\") "
Dec 09 10:17:05 crc kubenswrapper[4786]: I1209 10:17:05.946389 4786 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/0112bf44-5116-4b72-a860-4fc091e5dc27-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\""
Dec 09 10:17:05 crc kubenswrapper[4786]: I1209 10:17:05.946724 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0112bf44-5116-4b72-a860-4fc091e5dc27-config-data" (OuterVolumeSpecName: "config-data") pod "0112bf44-5116-4b72-a860-4fc091e5dc27" (UID: "0112bf44-5116-4b72-a860-4fc091e5dc27"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 09 10:17:05 crc kubenswrapper[4786]: I1209 10:17:05.950914 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0112bf44-5116-4b72-a860-4fc091e5dc27-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "0112bf44-5116-4b72-a860-4fc091e5dc27" (UID: "0112bf44-5116-4b72-a860-4fc091e5dc27"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 09 10:17:05 crc kubenswrapper[4786]: I1209 10:17:05.975684 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0112bf44-5116-4b72-a860-4fc091e5dc27-kube-api-access-kdk5s" (OuterVolumeSpecName: "kube-api-access-kdk5s") pod "0112bf44-5116-4b72-a860-4fc091e5dc27" (UID: "0112bf44-5116-4b72-a860-4fc091e5dc27"). 
InnerVolumeSpecName "kube-api-access-kdk5s". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 10:17:05 crc kubenswrapper[4786]: I1209 10:17:05.977700 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "test-operator-logs") pod "0112bf44-5116-4b72-a860-4fc091e5dc27" (UID: "0112bf44-5116-4b72-a860-4fc091e5dc27"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 09 10:17:05 crc kubenswrapper[4786]: I1209 10:17:05.981715 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0112bf44-5116-4b72-a860-4fc091e5dc27-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "0112bf44-5116-4b72-a860-4fc091e5dc27" (UID: "0112bf44-5116-4b72-a860-4fc091e5dc27"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 10:17:06 crc kubenswrapper[4786]: I1209 10:17:06.000864 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0112bf44-5116-4b72-a860-4fc091e5dc27-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "0112bf44-5116-4b72-a860-4fc091e5dc27" (UID: "0112bf44-5116-4b72-a860-4fc091e5dc27"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 10:17:06 crc kubenswrapper[4786]: I1209 10:17:06.009712 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0112bf44-5116-4b72-a860-4fc091e5dc27-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0112bf44-5116-4b72-a860-4fc091e5dc27" (UID: "0112bf44-5116-4b72-a860-4fc091e5dc27"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 10:17:06 crc kubenswrapper[4786]: I1209 10:17:06.021263 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0112bf44-5116-4b72-a860-4fc091e5dc27-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "0112bf44-5116-4b72-a860-4fc091e5dc27" (UID: "0112bf44-5116-4b72-a860-4fc091e5dc27"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 10:17:06 crc kubenswrapper[4786]: I1209 10:17:06.048887 4786 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Dec 09 10:17:06 crc kubenswrapper[4786]: I1209 10:17:06.048921 4786 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/0112bf44-5116-4b72-a860-4fc091e5dc27-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Dec 09 10:17:06 crc kubenswrapper[4786]: I1209 10:17:06.048933 4786 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0112bf44-5116-4b72-a860-4fc091e5dc27-config-data\") on node \"crc\" DevicePath \"\"" Dec 09 10:17:06 crc kubenswrapper[4786]: I1209 10:17:06.048942 4786 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0112bf44-5116-4b72-a860-4fc091e5dc27-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 09 10:17:06 crc kubenswrapper[4786]: I1209 10:17:06.048952 4786 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/0112bf44-5116-4b72-a860-4fc091e5dc27-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Dec 09 10:17:06 crc kubenswrapper[4786]: I1209 10:17:06.048961 4786 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/secret/0112bf44-5116-4b72-a860-4fc091e5dc27-ca-certs\") on node \"crc\" DevicePath \"\"" Dec 09 10:17:06 crc kubenswrapper[4786]: I1209 10:17:06.048970 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdk5s\" (UniqueName: \"kubernetes.io/projected/0112bf44-5116-4b72-a860-4fc091e5dc27-kube-api-access-kdk5s\") on node \"crc\" DevicePath \"\"" Dec 09 10:17:06 crc kubenswrapper[4786]: I1209 10:17:06.048978 4786 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/0112bf44-5116-4b72-a860-4fc091e5dc27-openstack-config\") on node \"crc\" DevicePath \"\"" Dec 09 10:17:06 crc kubenswrapper[4786]: I1209 10:17:06.070556 4786 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Dec 09 10:17:06 crc kubenswrapper[4786]: I1209 10:17:06.151340 4786 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Dec 09 10:17:06 crc kubenswrapper[4786]: I1209 10:17:06.445171 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"0112bf44-5116-4b72-a860-4fc091e5dc27","Type":"ContainerDied","Data":"405f24b6fa6f393581784144a41e2e2267a00b7327299ece5627023ab61c12d3"} Dec 09 10:17:06 crc kubenswrapper[4786]: I1209 10:17:06.445495 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="405f24b6fa6f393581784144a41e2e2267a00b7327299ece5627023ab61c12d3" Dec 09 10:17:06 crc kubenswrapper[4786]: I1209 10:17:06.445256 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 09 10:17:17 crc kubenswrapper[4786]: I1209 10:17:17.916955 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 09 10:17:17 crc kubenswrapper[4786]: E1209 10:17:17.918113 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4254aed-08d1-470d-8a85-7dcac5defbb9" containerName="collect-profiles" Dec 09 10:17:17 crc kubenswrapper[4786]: I1209 10:17:17.918131 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4254aed-08d1-470d-8a85-7dcac5defbb9" containerName="collect-profiles" Dec 09 10:17:17 crc kubenswrapper[4786]: E1209 10:17:17.918170 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0112bf44-5116-4b72-a860-4fc091e5dc27" containerName="tempest-tests-tempest-tests-runner" Dec 09 10:17:17 crc kubenswrapper[4786]: I1209 10:17:17.918179 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="0112bf44-5116-4b72-a860-4fc091e5dc27" containerName="tempest-tests-tempest-tests-runner" Dec 09 10:17:17 crc kubenswrapper[4786]: I1209 10:17:17.918460 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4254aed-08d1-470d-8a85-7dcac5defbb9" containerName="collect-profiles" Dec 09 10:17:17 crc kubenswrapper[4786]: I1209 10:17:17.918491 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="0112bf44-5116-4b72-a860-4fc091e5dc27" containerName="tempest-tests-tempest-tests-runner" Dec 09 10:17:17 crc kubenswrapper[4786]: I1209 10:17:17.919471 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 09 10:17:17 crc kubenswrapper[4786]: I1209 10:17:17.922059 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-dpx84" Dec 09 10:17:17 crc kubenswrapper[4786]: I1209 10:17:17.929148 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 09 10:17:18 crc kubenswrapper[4786]: I1209 10:17:18.100417 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"0ffa3268-7c4f-4069-bc33-50db10708dce\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 09 10:17:18 crc kubenswrapper[4786]: I1209 10:17:18.100884 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bw9dw\" (UniqueName: \"kubernetes.io/projected/0ffa3268-7c4f-4069-bc33-50db10708dce-kube-api-access-bw9dw\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"0ffa3268-7c4f-4069-bc33-50db10708dce\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 09 10:17:18 crc kubenswrapper[4786]: I1209 10:17:18.203132 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bw9dw\" (UniqueName: \"kubernetes.io/projected/0ffa3268-7c4f-4069-bc33-50db10708dce-kube-api-access-bw9dw\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"0ffa3268-7c4f-4069-bc33-50db10708dce\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 09 10:17:18 crc kubenswrapper[4786]: I1209 10:17:18.203259 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"0ffa3268-7c4f-4069-bc33-50db10708dce\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 09 10:17:18 crc kubenswrapper[4786]: I1209 10:17:18.203774 4786 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"0ffa3268-7c4f-4069-bc33-50db10708dce\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 09 10:17:18 crc kubenswrapper[4786]: I1209 10:17:18.227585 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bw9dw\" (UniqueName: \"kubernetes.io/projected/0ffa3268-7c4f-4069-bc33-50db10708dce-kube-api-access-bw9dw\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"0ffa3268-7c4f-4069-bc33-50db10708dce\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 09 10:17:18 crc kubenswrapper[4786]: I1209 10:17:18.232215 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"0ffa3268-7c4f-4069-bc33-50db10708dce\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 09 10:17:18 crc kubenswrapper[4786]: I1209 10:17:18.296058 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 09 10:17:18 crc kubenswrapper[4786]: I1209 10:17:18.756322 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 09 10:17:19 crc kubenswrapper[4786]: I1209 10:17:19.592017 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"0ffa3268-7c4f-4069-bc33-50db10708dce","Type":"ContainerStarted","Data":"d0c058d5fb752af4a9ddae1eb02df3dce23732e4fef151185fe2f4dfaba6643b"} Dec 09 10:17:20 crc kubenswrapper[4786]: I1209 10:17:20.601042 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"0ffa3268-7c4f-4069-bc33-50db10708dce","Type":"ContainerStarted","Data":"238d499d847b28466ee9c689cab4b6a4c0554044b0a29be8a4b2488d301ce6e0"} Dec 09 10:17:20 crc kubenswrapper[4786]: I1209 10:17:20.633073 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.6874843950000002 podStartE2EDuration="3.633048195s" podCreationTimestamp="2025-12-09 10:17:17 +0000 UTC" firstStartedPulling="2025-12-09 10:17:18.773896768 +0000 UTC m=+5604.657517994" lastFinishedPulling="2025-12-09 10:17:19.719460568 +0000 UTC m=+5605.603081794" observedRunningTime="2025-12-09 10:17:20.624903696 +0000 UTC m=+5606.508524922" watchObservedRunningTime="2025-12-09 10:17:20.633048195 +0000 UTC m=+5606.516669421" Dec 09 10:17:45 crc kubenswrapper[4786]: I1209 10:17:45.286503 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8j4df/must-gather-nb6v5"] Dec 09 10:17:45 crc kubenswrapper[4786]: I1209 10:17:45.289503 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8j4df/must-gather-nb6v5" Dec 09 10:17:45 crc kubenswrapper[4786]: I1209 10:17:45.299682 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-8j4df/must-gather-nb6v5"] Dec 09 10:17:45 crc kubenswrapper[4786]: I1209 10:17:45.314255 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnntk\" (UniqueName: \"kubernetes.io/projected/acd85c6c-5868-4f57-9e9b-f7e9ba510a33-kube-api-access-pnntk\") pod \"must-gather-nb6v5\" (UID: \"acd85c6c-5868-4f57-9e9b-f7e9ba510a33\") " pod="openshift-must-gather-8j4df/must-gather-nb6v5" Dec 09 10:17:45 crc kubenswrapper[4786]: I1209 10:17:45.314305 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/acd85c6c-5868-4f57-9e9b-f7e9ba510a33-must-gather-output\") pod \"must-gather-nb6v5\" (UID: \"acd85c6c-5868-4f57-9e9b-f7e9ba510a33\") " pod="openshift-must-gather-8j4df/must-gather-nb6v5" Dec 09 10:17:45 crc kubenswrapper[4786]: I1209 10:17:45.319407 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-8j4df"/"openshift-service-ca.crt" Dec 09 10:17:45 crc kubenswrapper[4786]: I1209 10:17:45.319828 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-8j4df"/"kube-root-ca.crt" Dec 09 10:17:45 crc kubenswrapper[4786]: I1209 10:17:45.321296 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-8j4df"/"default-dockercfg-wzxh8" Dec 09 10:17:45 crc kubenswrapper[4786]: I1209 10:17:45.417163 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnntk\" (UniqueName: \"kubernetes.io/projected/acd85c6c-5868-4f57-9e9b-f7e9ba510a33-kube-api-access-pnntk\") pod \"must-gather-nb6v5\" (UID: \"acd85c6c-5868-4f57-9e9b-f7e9ba510a33\") " 
pod="openshift-must-gather-8j4df/must-gather-nb6v5" Dec 09 10:17:45 crc kubenswrapper[4786]: I1209 10:17:45.417781 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/acd85c6c-5868-4f57-9e9b-f7e9ba510a33-must-gather-output\") pod \"must-gather-nb6v5\" (UID: \"acd85c6c-5868-4f57-9e9b-f7e9ba510a33\") " pod="openshift-must-gather-8j4df/must-gather-nb6v5" Dec 09 10:17:45 crc kubenswrapper[4786]: I1209 10:17:45.418355 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/acd85c6c-5868-4f57-9e9b-f7e9ba510a33-must-gather-output\") pod \"must-gather-nb6v5\" (UID: \"acd85c6c-5868-4f57-9e9b-f7e9ba510a33\") " pod="openshift-must-gather-8j4df/must-gather-nb6v5" Dec 09 10:17:45 crc kubenswrapper[4786]: I1209 10:17:45.442572 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnntk\" (UniqueName: \"kubernetes.io/projected/acd85c6c-5868-4f57-9e9b-f7e9ba510a33-kube-api-access-pnntk\") pod \"must-gather-nb6v5\" (UID: \"acd85c6c-5868-4f57-9e9b-f7e9ba510a33\") " pod="openshift-must-gather-8j4df/must-gather-nb6v5" Dec 09 10:17:45 crc kubenswrapper[4786]: I1209 10:17:45.639232 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8j4df/must-gather-nb6v5" Dec 09 10:17:46 crc kubenswrapper[4786]: I1209 10:17:46.160724 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-8j4df/must-gather-nb6v5"] Dec 09 10:17:46 crc kubenswrapper[4786]: I1209 10:17:46.859061 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8j4df/must-gather-nb6v5" event={"ID":"acd85c6c-5868-4f57-9e9b-f7e9ba510a33","Type":"ContainerStarted","Data":"4167c3d4409a1b49ca5f9d7005ef237668fc2ab859559d1daf11f7a95caa78a1"} Dec 09 10:17:52 crc kubenswrapper[4786]: I1209 10:17:52.923609 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8j4df/must-gather-nb6v5" event={"ID":"acd85c6c-5868-4f57-9e9b-f7e9ba510a33","Type":"ContainerStarted","Data":"b3745b711546d6882be9a28795d832ad9277b4697f96beed123d263bd43e6e2a"} Dec 09 10:17:53 crc kubenswrapper[4786]: I1209 10:17:53.945603 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8j4df/must-gather-nb6v5" event={"ID":"acd85c6c-5868-4f57-9e9b-f7e9ba510a33","Type":"ContainerStarted","Data":"082aa10e09f2612a6803cb1327e7918868337a309e4c00cc0923a9dd72423aa2"} Dec 09 10:17:53 crc kubenswrapper[4786]: I1209 10:17:53.971694 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-8j4df/must-gather-nb6v5" podStartSLOduration=2.5042056539999997 podStartE2EDuration="8.971673097s" podCreationTimestamp="2025-12-09 10:17:45 +0000 UTC" firstStartedPulling="2025-12-09 10:17:46.16774769 +0000 UTC m=+5632.051368916" lastFinishedPulling="2025-12-09 10:17:52.635215133 +0000 UTC m=+5638.518836359" observedRunningTime="2025-12-09 10:17:53.960266818 +0000 UTC m=+5639.843888044" watchObservedRunningTime="2025-12-09 10:17:53.971673097 +0000 UTC m=+5639.855294323" Dec 09 10:17:56 crc kubenswrapper[4786]: I1209 10:17:56.951852 4786 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-must-gather-8j4df/crc-debug-gc2vw"] Dec 09 10:17:56 crc kubenswrapper[4786]: I1209 10:17:56.954394 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8j4df/crc-debug-gc2vw" Dec 09 10:17:57 crc kubenswrapper[4786]: I1209 10:17:57.003520 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3f9b07c7-10a0-46d0-a0b4-e6956b8d303c-host\") pod \"crc-debug-gc2vw\" (UID: \"3f9b07c7-10a0-46d0-a0b4-e6956b8d303c\") " pod="openshift-must-gather-8j4df/crc-debug-gc2vw" Dec 09 10:17:57 crc kubenswrapper[4786]: I1209 10:17:57.003583 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5c4x\" (UniqueName: \"kubernetes.io/projected/3f9b07c7-10a0-46d0-a0b4-e6956b8d303c-kube-api-access-d5c4x\") pod \"crc-debug-gc2vw\" (UID: \"3f9b07c7-10a0-46d0-a0b4-e6956b8d303c\") " pod="openshift-must-gather-8j4df/crc-debug-gc2vw" Dec 09 10:17:57 crc kubenswrapper[4786]: I1209 10:17:57.105879 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3f9b07c7-10a0-46d0-a0b4-e6956b8d303c-host\") pod \"crc-debug-gc2vw\" (UID: \"3f9b07c7-10a0-46d0-a0b4-e6956b8d303c\") " pod="openshift-must-gather-8j4df/crc-debug-gc2vw" Dec 09 10:17:57 crc kubenswrapper[4786]: I1209 10:17:57.105931 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5c4x\" (UniqueName: \"kubernetes.io/projected/3f9b07c7-10a0-46d0-a0b4-e6956b8d303c-kube-api-access-d5c4x\") pod \"crc-debug-gc2vw\" (UID: \"3f9b07c7-10a0-46d0-a0b4-e6956b8d303c\") " pod="openshift-must-gather-8j4df/crc-debug-gc2vw" Dec 09 10:17:57 crc kubenswrapper[4786]: I1209 10:17:57.106025 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/3f9b07c7-10a0-46d0-a0b4-e6956b8d303c-host\") pod \"crc-debug-gc2vw\" (UID: \"3f9b07c7-10a0-46d0-a0b4-e6956b8d303c\") " pod="openshift-must-gather-8j4df/crc-debug-gc2vw" Dec 09 10:17:57 crc kubenswrapper[4786]: I1209 10:17:57.131342 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5c4x\" (UniqueName: \"kubernetes.io/projected/3f9b07c7-10a0-46d0-a0b4-e6956b8d303c-kube-api-access-d5c4x\") pod \"crc-debug-gc2vw\" (UID: \"3f9b07c7-10a0-46d0-a0b4-e6956b8d303c\") " pod="openshift-must-gather-8j4df/crc-debug-gc2vw" Dec 09 10:17:57 crc kubenswrapper[4786]: I1209 10:17:57.276159 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8j4df/crc-debug-gc2vw" Dec 09 10:17:57 crc kubenswrapper[4786]: W1209 10:17:57.325881 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3f9b07c7_10a0_46d0_a0b4_e6956b8d303c.slice/crio-4bc8b3d35dfee21036ee4082712c71ab300352260dfe104f7e621343cff16692 WatchSource:0}: Error finding container 4bc8b3d35dfee21036ee4082712c71ab300352260dfe104f7e621343cff16692: Status 404 returned error can't find the container with id 4bc8b3d35dfee21036ee4082712c71ab300352260dfe104f7e621343cff16692 Dec 09 10:17:58 crc kubenswrapper[4786]: I1209 10:17:58.005636 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8j4df/crc-debug-gc2vw" event={"ID":"3f9b07c7-10a0-46d0-a0b4-e6956b8d303c","Type":"ContainerStarted","Data":"4bc8b3d35dfee21036ee4082712c71ab300352260dfe104f7e621343cff16692"} Dec 09 10:18:06 crc kubenswrapper[4786]: I1209 10:18:06.613999 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5lv4l"] Dec 09 10:18:06 crc kubenswrapper[4786]: I1209 10:18:06.617586 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5lv4l" Dec 09 10:18:06 crc kubenswrapper[4786]: I1209 10:18:06.624108 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5lv4l"] Dec 09 10:18:06 crc kubenswrapper[4786]: I1209 10:18:06.772460 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs2wg\" (UniqueName: \"kubernetes.io/projected/fb9bd706-f75d-4203-b057-f5c9a399f862-kube-api-access-qs2wg\") pod \"redhat-marketplace-5lv4l\" (UID: \"fb9bd706-f75d-4203-b057-f5c9a399f862\") " pod="openshift-marketplace/redhat-marketplace-5lv4l" Dec 09 10:18:06 crc kubenswrapper[4786]: I1209 10:18:06.773152 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb9bd706-f75d-4203-b057-f5c9a399f862-catalog-content\") pod \"redhat-marketplace-5lv4l\" (UID: \"fb9bd706-f75d-4203-b057-f5c9a399f862\") " pod="openshift-marketplace/redhat-marketplace-5lv4l" Dec 09 10:18:06 crc kubenswrapper[4786]: I1209 10:18:06.773509 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb9bd706-f75d-4203-b057-f5c9a399f862-utilities\") pod \"redhat-marketplace-5lv4l\" (UID: \"fb9bd706-f75d-4203-b057-f5c9a399f862\") " pod="openshift-marketplace/redhat-marketplace-5lv4l" Dec 09 10:18:06 crc kubenswrapper[4786]: I1209 10:18:06.875282 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qs2wg\" (UniqueName: \"kubernetes.io/projected/fb9bd706-f75d-4203-b057-f5c9a399f862-kube-api-access-qs2wg\") pod \"redhat-marketplace-5lv4l\" (UID: \"fb9bd706-f75d-4203-b057-f5c9a399f862\") " pod="openshift-marketplace/redhat-marketplace-5lv4l" Dec 09 10:18:06 crc kubenswrapper[4786]: I1209 10:18:06.875344 4786 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb9bd706-f75d-4203-b057-f5c9a399f862-catalog-content\") pod \"redhat-marketplace-5lv4l\" (UID: \"fb9bd706-f75d-4203-b057-f5c9a399f862\") " pod="openshift-marketplace/redhat-marketplace-5lv4l" Dec 09 10:18:06 crc kubenswrapper[4786]: I1209 10:18:06.875409 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb9bd706-f75d-4203-b057-f5c9a399f862-utilities\") pod \"redhat-marketplace-5lv4l\" (UID: \"fb9bd706-f75d-4203-b057-f5c9a399f862\") " pod="openshift-marketplace/redhat-marketplace-5lv4l" Dec 09 10:18:06 crc kubenswrapper[4786]: I1209 10:18:06.876157 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb9bd706-f75d-4203-b057-f5c9a399f862-utilities\") pod \"redhat-marketplace-5lv4l\" (UID: \"fb9bd706-f75d-4203-b057-f5c9a399f862\") " pod="openshift-marketplace/redhat-marketplace-5lv4l" Dec 09 10:18:06 crc kubenswrapper[4786]: I1209 10:18:06.876776 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb9bd706-f75d-4203-b057-f5c9a399f862-catalog-content\") pod \"redhat-marketplace-5lv4l\" (UID: \"fb9bd706-f75d-4203-b057-f5c9a399f862\") " pod="openshift-marketplace/redhat-marketplace-5lv4l" Dec 09 10:18:06 crc kubenswrapper[4786]: I1209 10:18:06.950314 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qs2wg\" (UniqueName: \"kubernetes.io/projected/fb9bd706-f75d-4203-b057-f5c9a399f862-kube-api-access-qs2wg\") pod \"redhat-marketplace-5lv4l\" (UID: \"fb9bd706-f75d-4203-b057-f5c9a399f862\") " pod="openshift-marketplace/redhat-marketplace-5lv4l" Dec 09 10:18:07 crc kubenswrapper[4786]: I1209 10:18:07.005760 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5lv4l" Dec 09 10:18:10 crc kubenswrapper[4786]: I1209 10:18:10.720103 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5lv4l"] Dec 09 10:18:11 crc kubenswrapper[4786]: I1209 10:18:11.197332 4786 generic.go:334] "Generic (PLEG): container finished" podID="fb9bd706-f75d-4203-b057-f5c9a399f862" containerID="ea3e15898891b85de11a5fe5ecd5392ea7581e9b68df661014e16591579ece29" exitCode=0 Dec 09 10:18:11 crc kubenswrapper[4786]: I1209 10:18:11.200936 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8j4df/crc-debug-gc2vw" event={"ID":"3f9b07c7-10a0-46d0-a0b4-e6956b8d303c","Type":"ContainerStarted","Data":"a7a1d72c20d3d85cdfdb92e575f8e40b1631b34c163e4da4b707948f36e961e0"} Dec 09 10:18:11 crc kubenswrapper[4786]: I1209 10:18:11.200973 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5lv4l" event={"ID":"fb9bd706-f75d-4203-b057-f5c9a399f862","Type":"ContainerDied","Data":"ea3e15898891b85de11a5fe5ecd5392ea7581e9b68df661014e16591579ece29"} Dec 09 10:18:11 crc kubenswrapper[4786]: I1209 10:18:11.200986 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5lv4l" event={"ID":"fb9bd706-f75d-4203-b057-f5c9a399f862","Type":"ContainerStarted","Data":"532fe9218391b75db57559b769009f5b905b0c1f070eef92f2a3e352ac68f77b"} Dec 09 10:18:11 crc kubenswrapper[4786]: I1209 10:18:11.209158 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-8j4df/crc-debug-gc2vw" podStartSLOduration=2.424564985 podStartE2EDuration="15.209135588s" podCreationTimestamp="2025-12-09 10:17:56 +0000 UTC" firstStartedPulling="2025-12-09 10:17:57.328928388 +0000 UTC m=+5643.212549614" lastFinishedPulling="2025-12-09 10:18:10.113498991 +0000 UTC m=+5655.997120217" observedRunningTime="2025-12-09 10:18:11.208334429 +0000 UTC 
m=+5657.091955655" watchObservedRunningTime="2025-12-09 10:18:11.209135588 +0000 UTC m=+5657.092756814" Dec 09 10:18:12 crc kubenswrapper[4786]: I1209 10:18:12.207532 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5lv4l" event={"ID":"fb9bd706-f75d-4203-b057-f5c9a399f862","Type":"ContainerStarted","Data":"242d9bcb8d7d6d20835a2a99c33568a43de5d1de525f0b5b7c0d3407af0c61cb"} Dec 09 10:18:13 crc kubenswrapper[4786]: I1209 10:18:13.221339 4786 generic.go:334] "Generic (PLEG): container finished" podID="fb9bd706-f75d-4203-b057-f5c9a399f862" containerID="242d9bcb8d7d6d20835a2a99c33568a43de5d1de525f0b5b7c0d3407af0c61cb" exitCode=0 Dec 09 10:18:13 crc kubenswrapper[4786]: I1209 10:18:13.221667 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5lv4l" event={"ID":"fb9bd706-f75d-4203-b057-f5c9a399f862","Type":"ContainerDied","Data":"242d9bcb8d7d6d20835a2a99c33568a43de5d1de525f0b5b7c0d3407af0c61cb"} Dec 09 10:18:15 crc kubenswrapper[4786]: I1209 10:18:15.246802 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5lv4l" event={"ID":"fb9bd706-f75d-4203-b057-f5c9a399f862","Type":"ContainerStarted","Data":"05616faff5f2daa09d0f74f6accc637eec8739100f83651f7f52ef06446508a7"} Dec 09 10:18:15 crc kubenswrapper[4786]: I1209 10:18:15.281945 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5lv4l" podStartSLOduration=6.313431667 podStartE2EDuration="9.281924846s" podCreationTimestamp="2025-12-09 10:18:06 +0000 UTC" firstStartedPulling="2025-12-09 10:18:11.199177475 +0000 UTC m=+5657.082798701" lastFinishedPulling="2025-12-09 10:18:14.167670654 +0000 UTC m=+5660.051291880" observedRunningTime="2025-12-09 10:18:15.266157401 +0000 UTC m=+5661.149778637" watchObservedRunningTime="2025-12-09 10:18:15.281924846 +0000 UTC m=+5661.165546072" Dec 09 10:18:17 crc 
kubenswrapper[4786]: I1209 10:18:17.006601 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5lv4l" Dec 09 10:18:17 crc kubenswrapper[4786]: I1209 10:18:17.006896 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5lv4l" Dec 09 10:18:17 crc kubenswrapper[4786]: I1209 10:18:17.068759 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5lv4l" Dec 09 10:18:27 crc kubenswrapper[4786]: I1209 10:18:27.075728 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5lv4l" Dec 09 10:18:27 crc kubenswrapper[4786]: I1209 10:18:27.135961 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5lv4l"] Dec 09 10:18:27 crc kubenswrapper[4786]: I1209 10:18:27.365129 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5lv4l" podUID="fb9bd706-f75d-4203-b057-f5c9a399f862" containerName="registry-server" containerID="cri-o://05616faff5f2daa09d0f74f6accc637eec8739100f83651f7f52ef06446508a7" gracePeriod=2 Dec 09 10:18:27 crc kubenswrapper[4786]: I1209 10:18:27.935236 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5lv4l" Dec 09 10:18:28 crc kubenswrapper[4786]: I1209 10:18:28.094507 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs2wg\" (UniqueName: \"kubernetes.io/projected/fb9bd706-f75d-4203-b057-f5c9a399f862-kube-api-access-qs2wg\") pod \"fb9bd706-f75d-4203-b057-f5c9a399f862\" (UID: \"fb9bd706-f75d-4203-b057-f5c9a399f862\") " Dec 09 10:18:28 crc kubenswrapper[4786]: I1209 10:18:28.094644 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb9bd706-f75d-4203-b057-f5c9a399f862-catalog-content\") pod \"fb9bd706-f75d-4203-b057-f5c9a399f862\" (UID: \"fb9bd706-f75d-4203-b057-f5c9a399f862\") " Dec 09 10:18:28 crc kubenswrapper[4786]: I1209 10:18:28.094699 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb9bd706-f75d-4203-b057-f5c9a399f862-utilities\") pod \"fb9bd706-f75d-4203-b057-f5c9a399f862\" (UID: \"fb9bd706-f75d-4203-b057-f5c9a399f862\") " Dec 09 10:18:28 crc kubenswrapper[4786]: I1209 10:18:28.095932 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb9bd706-f75d-4203-b057-f5c9a399f862-utilities" (OuterVolumeSpecName: "utilities") pod "fb9bd706-f75d-4203-b057-f5c9a399f862" (UID: "fb9bd706-f75d-4203-b057-f5c9a399f862"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 10:18:28 crc kubenswrapper[4786]: I1209 10:18:28.104449 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb9bd706-f75d-4203-b057-f5c9a399f862-kube-api-access-qs2wg" (OuterVolumeSpecName: "kube-api-access-qs2wg") pod "fb9bd706-f75d-4203-b057-f5c9a399f862" (UID: "fb9bd706-f75d-4203-b057-f5c9a399f862"). InnerVolumeSpecName "kube-api-access-qs2wg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 10:18:28 crc kubenswrapper[4786]: I1209 10:18:28.124985 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb9bd706-f75d-4203-b057-f5c9a399f862-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fb9bd706-f75d-4203-b057-f5c9a399f862" (UID: "fb9bd706-f75d-4203-b057-f5c9a399f862"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 10:18:28 crc kubenswrapper[4786]: I1209 10:18:28.197353 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb9bd706-f75d-4203-b057-f5c9a399f862-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 10:18:28 crc kubenswrapper[4786]: I1209 10:18:28.197679 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs2wg\" (UniqueName: \"kubernetes.io/projected/fb9bd706-f75d-4203-b057-f5c9a399f862-kube-api-access-qs2wg\") on node \"crc\" DevicePath \"\"" Dec 09 10:18:28 crc kubenswrapper[4786]: I1209 10:18:28.197691 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb9bd706-f75d-4203-b057-f5c9a399f862-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 10:18:28 crc kubenswrapper[4786]: I1209 10:18:28.375373 4786 generic.go:334] "Generic (PLEG): container finished" podID="fb9bd706-f75d-4203-b057-f5c9a399f862" containerID="05616faff5f2daa09d0f74f6accc637eec8739100f83651f7f52ef06446508a7" exitCode=0 Dec 09 10:18:28 crc kubenswrapper[4786]: I1209 10:18:28.375587 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5lv4l" event={"ID":"fb9bd706-f75d-4203-b057-f5c9a399f862","Type":"ContainerDied","Data":"05616faff5f2daa09d0f74f6accc637eec8739100f83651f7f52ef06446508a7"} Dec 09 10:18:28 crc kubenswrapper[4786]: I1209 10:18:28.375719 4786 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5lv4l" Dec 09 10:18:28 crc kubenswrapper[4786]: I1209 10:18:28.375833 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5lv4l" event={"ID":"fb9bd706-f75d-4203-b057-f5c9a399f862","Type":"ContainerDied","Data":"532fe9218391b75db57559b769009f5b905b0c1f070eef92f2a3e352ac68f77b"} Dec 09 10:18:28 crc kubenswrapper[4786]: I1209 10:18:28.375920 4786 scope.go:117] "RemoveContainer" containerID="05616faff5f2daa09d0f74f6accc637eec8739100f83651f7f52ef06446508a7" Dec 09 10:18:28 crc kubenswrapper[4786]: I1209 10:18:28.429159 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5lv4l"] Dec 09 10:18:28 crc kubenswrapper[4786]: I1209 10:18:28.447555 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5lv4l"] Dec 09 10:18:28 crc kubenswrapper[4786]: I1209 10:18:28.669963 4786 scope.go:117] "RemoveContainer" containerID="242d9bcb8d7d6d20835a2a99c33568a43de5d1de525f0b5b7c0d3407af0c61cb" Dec 09 10:18:28 crc kubenswrapper[4786]: I1209 10:18:28.733235 4786 scope.go:117] "RemoveContainer" containerID="ea3e15898891b85de11a5fe5ecd5392ea7581e9b68df661014e16591579ece29" Dec 09 10:18:28 crc kubenswrapper[4786]: I1209 10:18:28.779470 4786 scope.go:117] "RemoveContainer" containerID="05616faff5f2daa09d0f74f6accc637eec8739100f83651f7f52ef06446508a7" Dec 09 10:18:28 crc kubenswrapper[4786]: E1209 10:18:28.779845 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05616faff5f2daa09d0f74f6accc637eec8739100f83651f7f52ef06446508a7\": container with ID starting with 05616faff5f2daa09d0f74f6accc637eec8739100f83651f7f52ef06446508a7 not found: ID does not exist" containerID="05616faff5f2daa09d0f74f6accc637eec8739100f83651f7f52ef06446508a7" Dec 09 10:18:28 crc kubenswrapper[4786]: I1209 10:18:28.779884 4786 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05616faff5f2daa09d0f74f6accc637eec8739100f83651f7f52ef06446508a7"} err="failed to get container status \"05616faff5f2daa09d0f74f6accc637eec8739100f83651f7f52ef06446508a7\": rpc error: code = NotFound desc = could not find container \"05616faff5f2daa09d0f74f6accc637eec8739100f83651f7f52ef06446508a7\": container with ID starting with 05616faff5f2daa09d0f74f6accc637eec8739100f83651f7f52ef06446508a7 not found: ID does not exist" Dec 09 10:18:28 crc kubenswrapper[4786]: I1209 10:18:28.779907 4786 scope.go:117] "RemoveContainer" containerID="242d9bcb8d7d6d20835a2a99c33568a43de5d1de525f0b5b7c0d3407af0c61cb" Dec 09 10:18:28 crc kubenswrapper[4786]: E1209 10:18:28.780230 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"242d9bcb8d7d6d20835a2a99c33568a43de5d1de525f0b5b7c0d3407af0c61cb\": container with ID starting with 242d9bcb8d7d6d20835a2a99c33568a43de5d1de525f0b5b7c0d3407af0c61cb not found: ID does not exist" containerID="242d9bcb8d7d6d20835a2a99c33568a43de5d1de525f0b5b7c0d3407af0c61cb" Dec 09 10:18:28 crc kubenswrapper[4786]: I1209 10:18:28.780268 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"242d9bcb8d7d6d20835a2a99c33568a43de5d1de525f0b5b7c0d3407af0c61cb"} err="failed to get container status \"242d9bcb8d7d6d20835a2a99c33568a43de5d1de525f0b5b7c0d3407af0c61cb\": rpc error: code = NotFound desc = could not find container \"242d9bcb8d7d6d20835a2a99c33568a43de5d1de525f0b5b7c0d3407af0c61cb\": container with ID starting with 242d9bcb8d7d6d20835a2a99c33568a43de5d1de525f0b5b7c0d3407af0c61cb not found: ID does not exist" Dec 09 10:18:28 crc kubenswrapper[4786]: I1209 10:18:28.780282 4786 scope.go:117] "RemoveContainer" containerID="ea3e15898891b85de11a5fe5ecd5392ea7581e9b68df661014e16591579ece29" Dec 09 10:18:28 crc kubenswrapper[4786]: E1209 
10:18:28.780552 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea3e15898891b85de11a5fe5ecd5392ea7581e9b68df661014e16591579ece29\": container with ID starting with ea3e15898891b85de11a5fe5ecd5392ea7581e9b68df661014e16591579ece29 not found: ID does not exist" containerID="ea3e15898891b85de11a5fe5ecd5392ea7581e9b68df661014e16591579ece29" Dec 09 10:18:28 crc kubenswrapper[4786]: I1209 10:18:28.780572 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea3e15898891b85de11a5fe5ecd5392ea7581e9b68df661014e16591579ece29"} err="failed to get container status \"ea3e15898891b85de11a5fe5ecd5392ea7581e9b68df661014e16591579ece29\": rpc error: code = NotFound desc = could not find container \"ea3e15898891b85de11a5fe5ecd5392ea7581e9b68df661014e16591579ece29\": container with ID starting with ea3e15898891b85de11a5fe5ecd5392ea7581e9b68df661014e16591579ece29 not found: ID does not exist" Dec 09 10:18:29 crc kubenswrapper[4786]: I1209 10:18:29.200583 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb9bd706-f75d-4203-b057-f5c9a399f862" path="/var/lib/kubelet/pods/fb9bd706-f75d-4203-b057-f5c9a399f862/volumes" Dec 09 10:19:02 crc kubenswrapper[4786]: I1209 10:19:02.850755 4786 generic.go:334] "Generic (PLEG): container finished" podID="3f9b07c7-10a0-46d0-a0b4-e6956b8d303c" containerID="a7a1d72c20d3d85cdfdb92e575f8e40b1631b34c163e4da4b707948f36e961e0" exitCode=0 Dec 09 10:19:02 crc kubenswrapper[4786]: I1209 10:19:02.852099 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8j4df/crc-debug-gc2vw" event={"ID":"3f9b07c7-10a0-46d0-a0b4-e6956b8d303c","Type":"ContainerDied","Data":"a7a1d72c20d3d85cdfdb92e575f8e40b1631b34c163e4da4b707948f36e961e0"} Dec 09 10:19:04 crc kubenswrapper[4786]: I1209 10:19:04.022447 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8j4df/crc-debug-gc2vw" Dec 09 10:19:04 crc kubenswrapper[4786]: I1209 10:19:04.056788 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-8j4df/crc-debug-gc2vw"] Dec 09 10:19:04 crc kubenswrapper[4786]: I1209 10:19:04.065770 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-8j4df/crc-debug-gc2vw"] Dec 09 10:19:04 crc kubenswrapper[4786]: I1209 10:19:04.158117 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5c4x\" (UniqueName: \"kubernetes.io/projected/3f9b07c7-10a0-46d0-a0b4-e6956b8d303c-kube-api-access-d5c4x\") pod \"3f9b07c7-10a0-46d0-a0b4-e6956b8d303c\" (UID: \"3f9b07c7-10a0-46d0-a0b4-e6956b8d303c\") " Dec 09 10:19:04 crc kubenswrapper[4786]: I1209 10:19:04.159732 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3f9b07c7-10a0-46d0-a0b4-e6956b8d303c-host\") pod \"3f9b07c7-10a0-46d0-a0b4-e6956b8d303c\" (UID: \"3f9b07c7-10a0-46d0-a0b4-e6956b8d303c\") " Dec 09 10:19:04 crc kubenswrapper[4786]: I1209 10:19:04.160011 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3f9b07c7-10a0-46d0-a0b4-e6956b8d303c-host" (OuterVolumeSpecName: "host") pod "3f9b07c7-10a0-46d0-a0b4-e6956b8d303c" (UID: "3f9b07c7-10a0-46d0-a0b4-e6956b8d303c"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 10:19:04 crc kubenswrapper[4786]: I1209 10:19:04.160583 4786 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/3f9b07c7-10a0-46d0-a0b4-e6956b8d303c-host\") on node \"crc\" DevicePath \"\"" Dec 09 10:19:04 crc kubenswrapper[4786]: I1209 10:19:04.170558 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f9b07c7-10a0-46d0-a0b4-e6956b8d303c-kube-api-access-d5c4x" (OuterVolumeSpecName: "kube-api-access-d5c4x") pod "3f9b07c7-10a0-46d0-a0b4-e6956b8d303c" (UID: "3f9b07c7-10a0-46d0-a0b4-e6956b8d303c"). InnerVolumeSpecName "kube-api-access-d5c4x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 10:19:04 crc kubenswrapper[4786]: I1209 10:19:04.262409 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5c4x\" (UniqueName: \"kubernetes.io/projected/3f9b07c7-10a0-46d0-a0b4-e6956b8d303c-kube-api-access-d5c4x\") on node \"crc\" DevicePath \"\"" Dec 09 10:19:04 crc kubenswrapper[4786]: I1209 10:19:04.873148 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bc8b3d35dfee21036ee4082712c71ab300352260dfe104f7e621343cff16692" Dec 09 10:19:04 crc kubenswrapper[4786]: I1209 10:19:04.873201 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8j4df/crc-debug-gc2vw" Dec 09 10:19:05 crc kubenswrapper[4786]: I1209 10:19:05.207279 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3f9b07c7-10a0-46d0-a0b4-e6956b8d303c" path="/var/lib/kubelet/pods/3f9b07c7-10a0-46d0-a0b4-e6956b8d303c/volumes" Dec 09 10:19:05 crc kubenswrapper[4786]: I1209 10:19:05.219039 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8j4df/crc-debug-fd6kl"] Dec 09 10:19:05 crc kubenswrapper[4786]: E1209 10:19:05.219499 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb9bd706-f75d-4203-b057-f5c9a399f862" containerName="extract-content" Dec 09 10:19:05 crc kubenswrapper[4786]: I1209 10:19:05.219518 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb9bd706-f75d-4203-b057-f5c9a399f862" containerName="extract-content" Dec 09 10:19:05 crc kubenswrapper[4786]: E1209 10:19:05.219536 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3f9b07c7-10a0-46d0-a0b4-e6956b8d303c" containerName="container-00" Dec 09 10:19:05 crc kubenswrapper[4786]: I1209 10:19:05.219543 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f9b07c7-10a0-46d0-a0b4-e6956b8d303c" containerName="container-00" Dec 09 10:19:05 crc kubenswrapper[4786]: E1209 10:19:05.219568 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb9bd706-f75d-4203-b057-f5c9a399f862" containerName="extract-utilities" Dec 09 10:19:05 crc kubenswrapper[4786]: I1209 10:19:05.219575 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb9bd706-f75d-4203-b057-f5c9a399f862" containerName="extract-utilities" Dec 09 10:19:05 crc kubenswrapper[4786]: E1209 10:19:05.219600 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb9bd706-f75d-4203-b057-f5c9a399f862" containerName="registry-server" Dec 09 10:19:05 crc kubenswrapper[4786]: I1209 10:19:05.219606 4786 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="fb9bd706-f75d-4203-b057-f5c9a399f862" containerName="registry-server" Dec 09 10:19:05 crc kubenswrapper[4786]: I1209 10:19:05.219787 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb9bd706-f75d-4203-b057-f5c9a399f862" containerName="registry-server" Dec 09 10:19:05 crc kubenswrapper[4786]: I1209 10:19:05.219804 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="3f9b07c7-10a0-46d0-a0b4-e6956b8d303c" containerName="container-00" Dec 09 10:19:05 crc kubenswrapper[4786]: I1209 10:19:05.220529 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8j4df/crc-debug-fd6kl" Dec 09 10:19:05 crc kubenswrapper[4786]: I1209 10:19:05.281895 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/18529646-1ab2-47a0-adc4-7c3fa600b8be-host\") pod \"crc-debug-fd6kl\" (UID: \"18529646-1ab2-47a0-adc4-7c3fa600b8be\") " pod="openshift-must-gather-8j4df/crc-debug-fd6kl" Dec 09 10:19:05 crc kubenswrapper[4786]: I1209 10:19:05.282159 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ttl2\" (UniqueName: \"kubernetes.io/projected/18529646-1ab2-47a0-adc4-7c3fa600b8be-kube-api-access-8ttl2\") pod \"crc-debug-fd6kl\" (UID: \"18529646-1ab2-47a0-adc4-7c3fa600b8be\") " pod="openshift-must-gather-8j4df/crc-debug-fd6kl" Dec 09 10:19:05 crc kubenswrapper[4786]: I1209 10:19:05.383767 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ttl2\" (UniqueName: \"kubernetes.io/projected/18529646-1ab2-47a0-adc4-7c3fa600b8be-kube-api-access-8ttl2\") pod \"crc-debug-fd6kl\" (UID: \"18529646-1ab2-47a0-adc4-7c3fa600b8be\") " pod="openshift-must-gather-8j4df/crc-debug-fd6kl" Dec 09 10:19:05 crc kubenswrapper[4786]: I1209 10:19:05.384500 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"host\" (UniqueName: \"kubernetes.io/host-path/18529646-1ab2-47a0-adc4-7c3fa600b8be-host\") pod \"crc-debug-fd6kl\" (UID: \"18529646-1ab2-47a0-adc4-7c3fa600b8be\") " pod="openshift-must-gather-8j4df/crc-debug-fd6kl" Dec 09 10:19:05 crc kubenswrapper[4786]: I1209 10:19:05.384697 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/18529646-1ab2-47a0-adc4-7c3fa600b8be-host\") pod \"crc-debug-fd6kl\" (UID: \"18529646-1ab2-47a0-adc4-7c3fa600b8be\") " pod="openshift-must-gather-8j4df/crc-debug-fd6kl" Dec 09 10:19:05 crc kubenswrapper[4786]: I1209 10:19:05.411622 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ttl2\" (UniqueName: \"kubernetes.io/projected/18529646-1ab2-47a0-adc4-7c3fa600b8be-kube-api-access-8ttl2\") pod \"crc-debug-fd6kl\" (UID: \"18529646-1ab2-47a0-adc4-7c3fa600b8be\") " pod="openshift-must-gather-8j4df/crc-debug-fd6kl" Dec 09 10:19:05 crc kubenswrapper[4786]: I1209 10:19:05.541997 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8j4df/crc-debug-fd6kl" Dec 09 10:19:05 crc kubenswrapper[4786]: I1209 10:19:05.883372 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8j4df/crc-debug-fd6kl" event={"ID":"18529646-1ab2-47a0-adc4-7c3fa600b8be","Type":"ContainerStarted","Data":"8e4872e2e5d68424377ecea9aa4e60905d0aec95f81bda527cfeb28fd0eabd51"} Dec 09 10:19:06 crc kubenswrapper[4786]: I1209 10:19:06.894395 4786 generic.go:334] "Generic (PLEG): container finished" podID="18529646-1ab2-47a0-adc4-7c3fa600b8be" containerID="37ccffca5b57091e2ed5b1f0ad2fb0e8e9fd52e59410564951692ea008ee0aa1" exitCode=0 Dec 09 10:19:06 crc kubenswrapper[4786]: I1209 10:19:06.894470 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8j4df/crc-debug-fd6kl" event={"ID":"18529646-1ab2-47a0-adc4-7c3fa600b8be","Type":"ContainerDied","Data":"37ccffca5b57091e2ed5b1f0ad2fb0e8e9fd52e59410564951692ea008ee0aa1"} Dec 09 10:19:08 crc kubenswrapper[4786]: I1209 10:19:08.128560 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8j4df/crc-debug-fd6kl" Dec 09 10:19:08 crc kubenswrapper[4786]: I1209 10:19:08.181217 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ttl2\" (UniqueName: \"kubernetes.io/projected/18529646-1ab2-47a0-adc4-7c3fa600b8be-kube-api-access-8ttl2\") pod \"18529646-1ab2-47a0-adc4-7c3fa600b8be\" (UID: \"18529646-1ab2-47a0-adc4-7c3fa600b8be\") " Dec 09 10:19:08 crc kubenswrapper[4786]: I1209 10:19:08.181390 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/18529646-1ab2-47a0-adc4-7c3fa600b8be-host\") pod \"18529646-1ab2-47a0-adc4-7c3fa600b8be\" (UID: \"18529646-1ab2-47a0-adc4-7c3fa600b8be\") " Dec 09 10:19:08 crc kubenswrapper[4786]: I1209 10:19:08.181991 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/18529646-1ab2-47a0-adc4-7c3fa600b8be-host" (OuterVolumeSpecName: "host") pod "18529646-1ab2-47a0-adc4-7c3fa600b8be" (UID: "18529646-1ab2-47a0-adc4-7c3fa600b8be"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 10:19:08 crc kubenswrapper[4786]: I1209 10:19:08.182654 4786 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/18529646-1ab2-47a0-adc4-7c3fa600b8be-host\") on node \"crc\" DevicePath \"\"" Dec 09 10:19:08 crc kubenswrapper[4786]: I1209 10:19:08.211689 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18529646-1ab2-47a0-adc4-7c3fa600b8be-kube-api-access-8ttl2" (OuterVolumeSpecName: "kube-api-access-8ttl2") pod "18529646-1ab2-47a0-adc4-7c3fa600b8be" (UID: "18529646-1ab2-47a0-adc4-7c3fa600b8be"). InnerVolumeSpecName "kube-api-access-8ttl2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 10:19:08 crc kubenswrapper[4786]: I1209 10:19:08.284217 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ttl2\" (UniqueName: \"kubernetes.io/projected/18529646-1ab2-47a0-adc4-7c3fa600b8be-kube-api-access-8ttl2\") on node \"crc\" DevicePath \"\"" Dec 09 10:19:08 crc kubenswrapper[4786]: I1209 10:19:08.954634 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8j4df/crc-debug-fd6kl" event={"ID":"18529646-1ab2-47a0-adc4-7c3fa600b8be","Type":"ContainerDied","Data":"8e4872e2e5d68424377ecea9aa4e60905d0aec95f81bda527cfeb28fd0eabd51"} Dec 09 10:19:08 crc kubenswrapper[4786]: I1209 10:19:08.954880 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e4872e2e5d68424377ecea9aa4e60905d0aec95f81bda527cfeb28fd0eabd51" Dec 09 10:19:08 crc kubenswrapper[4786]: I1209 10:19:08.954691 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8j4df/crc-debug-fd6kl" Dec 09 10:19:09 crc kubenswrapper[4786]: I1209 10:19:09.266455 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-8j4df/crc-debug-fd6kl"] Dec 09 10:19:09 crc kubenswrapper[4786]: I1209 10:19:09.280137 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-8j4df/crc-debug-fd6kl"] Dec 09 10:19:10 crc kubenswrapper[4786]: I1209 10:19:10.448517 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-8j4df/crc-debug-7tsx9"] Dec 09 10:19:10 crc kubenswrapper[4786]: E1209 10:19:10.448920 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18529646-1ab2-47a0-adc4-7c3fa600b8be" containerName="container-00" Dec 09 10:19:10 crc kubenswrapper[4786]: I1209 10:19:10.448932 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="18529646-1ab2-47a0-adc4-7c3fa600b8be" containerName="container-00" Dec 09 10:19:10 crc 
kubenswrapper[4786]: I1209 10:19:10.449162 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="18529646-1ab2-47a0-adc4-7c3fa600b8be" containerName="container-00" Dec 09 10:19:10 crc kubenswrapper[4786]: I1209 10:19:10.450085 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8j4df/crc-debug-7tsx9" Dec 09 10:19:10 crc kubenswrapper[4786]: I1209 10:19:10.529966 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0be41983-8494-494d-9e8d-95891fc99ddd-host\") pod \"crc-debug-7tsx9\" (UID: \"0be41983-8494-494d-9e8d-95891fc99ddd\") " pod="openshift-must-gather-8j4df/crc-debug-7tsx9" Dec 09 10:19:10 crc kubenswrapper[4786]: I1209 10:19:10.530172 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tmc8\" (UniqueName: \"kubernetes.io/projected/0be41983-8494-494d-9e8d-95891fc99ddd-kube-api-access-4tmc8\") pod \"crc-debug-7tsx9\" (UID: \"0be41983-8494-494d-9e8d-95891fc99ddd\") " pod="openshift-must-gather-8j4df/crc-debug-7tsx9" Dec 09 10:19:10 crc kubenswrapper[4786]: I1209 10:19:10.633955 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tmc8\" (UniqueName: \"kubernetes.io/projected/0be41983-8494-494d-9e8d-95891fc99ddd-kube-api-access-4tmc8\") pod \"crc-debug-7tsx9\" (UID: \"0be41983-8494-494d-9e8d-95891fc99ddd\") " pod="openshift-must-gather-8j4df/crc-debug-7tsx9" Dec 09 10:19:10 crc kubenswrapper[4786]: I1209 10:19:10.634343 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0be41983-8494-494d-9e8d-95891fc99ddd-host\") pod \"crc-debug-7tsx9\" (UID: \"0be41983-8494-494d-9e8d-95891fc99ddd\") " pod="openshift-must-gather-8j4df/crc-debug-7tsx9" Dec 09 10:19:10 crc kubenswrapper[4786]: I1209 10:19:10.634501 4786 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0be41983-8494-494d-9e8d-95891fc99ddd-host\") pod \"crc-debug-7tsx9\" (UID: \"0be41983-8494-494d-9e8d-95891fc99ddd\") " pod="openshift-must-gather-8j4df/crc-debug-7tsx9" Dec 09 10:19:10 crc kubenswrapper[4786]: I1209 10:19:10.668419 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tmc8\" (UniqueName: \"kubernetes.io/projected/0be41983-8494-494d-9e8d-95891fc99ddd-kube-api-access-4tmc8\") pod \"crc-debug-7tsx9\" (UID: \"0be41983-8494-494d-9e8d-95891fc99ddd\") " pod="openshift-must-gather-8j4df/crc-debug-7tsx9" Dec 09 10:19:10 crc kubenswrapper[4786]: I1209 10:19:10.777328 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8j4df/crc-debug-7tsx9" Dec 09 10:19:10 crc kubenswrapper[4786]: I1209 10:19:10.992847 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8j4df/crc-debug-7tsx9" event={"ID":"0be41983-8494-494d-9e8d-95891fc99ddd","Type":"ContainerStarted","Data":"1d3cc721d34605485490a17f237b04a3e4b12a41f611eef4995f8504ea7a95a6"} Dec 09 10:19:11 crc kubenswrapper[4786]: I1209 10:19:11.207319 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18529646-1ab2-47a0-adc4-7c3fa600b8be" path="/var/lib/kubelet/pods/18529646-1ab2-47a0-adc4-7c3fa600b8be/volumes" Dec 09 10:19:12 crc kubenswrapper[4786]: I1209 10:19:12.002788 4786 generic.go:334] "Generic (PLEG): container finished" podID="0be41983-8494-494d-9e8d-95891fc99ddd" containerID="8b575914113a89de07b47ec9eed1818de0207dfd85ccc168c85f373b6b07c6c5" exitCode=0 Dec 09 10:19:12 crc kubenswrapper[4786]: I1209 10:19:12.002902 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8j4df/crc-debug-7tsx9" 
event={"ID":"0be41983-8494-494d-9e8d-95891fc99ddd","Type":"ContainerDied","Data":"8b575914113a89de07b47ec9eed1818de0207dfd85ccc168c85f373b6b07c6c5"} Dec 09 10:19:12 crc kubenswrapper[4786]: I1209 10:19:12.041038 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-8j4df/crc-debug-7tsx9"] Dec 09 10:19:12 crc kubenswrapper[4786]: I1209 10:19:12.052947 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-8j4df/crc-debug-7tsx9"] Dec 09 10:19:13 crc kubenswrapper[4786]: I1209 10:19:13.119732 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-8j4df/crc-debug-7tsx9" Dec 09 10:19:13 crc kubenswrapper[4786]: I1209 10:19:13.215892 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tmc8\" (UniqueName: \"kubernetes.io/projected/0be41983-8494-494d-9e8d-95891fc99ddd-kube-api-access-4tmc8\") pod \"0be41983-8494-494d-9e8d-95891fc99ddd\" (UID: \"0be41983-8494-494d-9e8d-95891fc99ddd\") " Dec 09 10:19:13 crc kubenswrapper[4786]: I1209 10:19:13.216363 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0be41983-8494-494d-9e8d-95891fc99ddd-host\") pod \"0be41983-8494-494d-9e8d-95891fc99ddd\" (UID: \"0be41983-8494-494d-9e8d-95891fc99ddd\") " Dec 09 10:19:13 crc kubenswrapper[4786]: I1209 10:19:13.216413 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0be41983-8494-494d-9e8d-95891fc99ddd-host" (OuterVolumeSpecName: "host") pod "0be41983-8494-494d-9e8d-95891fc99ddd" (UID: "0be41983-8494-494d-9e8d-95891fc99ddd"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 10:19:13 crc kubenswrapper[4786]: I1209 10:19:13.217705 4786 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0be41983-8494-494d-9e8d-95891fc99ddd-host\") on node \"crc\" DevicePath \"\"" Dec 09 10:19:13 crc kubenswrapper[4786]: I1209 10:19:13.222131 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0be41983-8494-494d-9e8d-95891fc99ddd-kube-api-access-4tmc8" (OuterVolumeSpecName: "kube-api-access-4tmc8") pod "0be41983-8494-494d-9e8d-95891fc99ddd" (UID: "0be41983-8494-494d-9e8d-95891fc99ddd"). InnerVolumeSpecName "kube-api-access-4tmc8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 10:19:13 crc kubenswrapper[4786]: I1209 10:19:13.319659 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tmc8\" (UniqueName: \"kubernetes.io/projected/0be41983-8494-494d-9e8d-95891fc99ddd-kube-api-access-4tmc8\") on node \"crc\" DevicePath \"\"" Dec 09 10:19:14 crc kubenswrapper[4786]: I1209 10:19:14.023031 4786 scope.go:117] "RemoveContainer" containerID="8b575914113a89de07b47ec9eed1818de0207dfd85ccc168c85f373b6b07c6c5" Dec 09 10:19:14 crc kubenswrapper[4786]: I1209 10:19:14.023228 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8j4df/crc-debug-7tsx9" Dec 09 10:19:15 crc kubenswrapper[4786]: I1209 10:19:15.199680 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0be41983-8494-494d-9e8d-95891fc99ddd" path="/var/lib/kubelet/pods/0be41983-8494-494d-9e8d-95891fc99ddd/volumes" Dec 09 10:19:19 crc kubenswrapper[4786]: I1209 10:19:19.009702 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-55bfbb9895-lchg7" podUID="025e29a5-c1a7-46fe-a47d-4b3248fd6320" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Dec 09 10:19:24 crc kubenswrapper[4786]: I1209 10:19:24.988740 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 10:19:24 crc kubenswrapper[4786]: I1209 10:19:24.989189 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 10:19:40 crc kubenswrapper[4786]: I1209 10:19:40.550849 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-69744ddc66-fp6bq_be7cb0a0-e8ca-4bb0-8be6-831d56ec8a53/barbican-api/0.log" Dec 09 10:19:40 crc kubenswrapper[4786]: I1209 10:19:40.753943 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-fc48d579d-jv5m2_59cf76dd-ccdd-4aff-b6ae-a86c532b922c/barbican-keystone-listener/0.log" Dec 09 10:19:40 crc kubenswrapper[4786]: I1209 10:19:40.777758 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-api-69744ddc66-fp6bq_be7cb0a0-e8ca-4bb0-8be6-831d56ec8a53/barbican-api-log/0.log" Dec 09 10:19:40 crc kubenswrapper[4786]: I1209 10:19:40.903796 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-fc48d579d-jv5m2_59cf76dd-ccdd-4aff-b6ae-a86c532b922c/barbican-keystone-listener-log/0.log" Dec 09 10:19:40 crc kubenswrapper[4786]: I1209 10:19:40.967949 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-76847f447c-24chp_515247e9-4278-40fa-b971-adb499dc3ce0/barbican-worker/0.log" Dec 09 10:19:41 crc kubenswrapper[4786]: I1209 10:19:41.108590 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-76847f447c-24chp_515247e9-4278-40fa-b971-adb499dc3ce0/barbican-worker-log/0.log" Dec 09 10:19:41 crc kubenswrapper[4786]: I1209 10:19:41.171940 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-twhbz_ebb0da1f-f03a-4091-9057-2d250dd6bc07/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 10:19:41 crc kubenswrapper[4786]: I1209 10:19:41.368259 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_cffa1372-a308-4145-a2ab-8e320fc5d296/ceilometer-central-agent/0.log" Dec 09 10:19:41 crc kubenswrapper[4786]: I1209 10:19:41.429825 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_cffa1372-a308-4145-a2ab-8e320fc5d296/ceilometer-notification-agent/0.log" Dec 09 10:19:41 crc kubenswrapper[4786]: I1209 10:19:41.486160 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_cffa1372-a308-4145-a2ab-8e320fc5d296/proxy-httpd/0.log" Dec 09 10:19:41 crc kubenswrapper[4786]: I1209 10:19:41.567443 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_cffa1372-a308-4145-a2ab-8e320fc5d296/sg-core/0.log" Dec 09 10:19:42 crc 
kubenswrapper[4786]: I1209 10:19:42.008220 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_bb8b460d-7b22-4853-b592-ea61d203e5c1/cinder-api/0.log" Dec 09 10:19:42 crc kubenswrapper[4786]: I1209 10:19:42.057746 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_bb8b460d-7b22-4853-b592-ea61d203e5c1/cinder-api-log/0.log" Dec 09 10:19:42 crc kubenswrapper[4786]: I1209 10:19:42.339158 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_e2b22ca8-e985-404e-af49-d7328d2d3017/probe/0.log" Dec 09 10:19:42 crc kubenswrapper[4786]: I1209 10:19:42.464147 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_e2b22ca8-e985-404e-af49-d7328d2d3017/cinder-backup/0.log" Dec 09 10:19:42 crc kubenswrapper[4786]: I1209 10:19:42.464562 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_0afc7d13-3b0f-4919-ab29-4d328c815a8a/cinder-scheduler/0.log" Dec 09 10:19:42 crc kubenswrapper[4786]: I1209 10:19:42.619324 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_0afc7d13-3b0f-4919-ab29-4d328c815a8a/probe/0.log" Dec 09 10:19:42 crc kubenswrapper[4786]: I1209 10:19:42.745527 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_fc7497d0-e84b-4a17-8d33-b63bf384eee8/probe/0.log" Dec 09 10:19:42 crc kubenswrapper[4786]: I1209 10:19:42.882568 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_fc7497d0-e84b-4a17-8d33-b63bf384eee8/cinder-volume/0.log" Dec 09 10:19:43 crc kubenswrapper[4786]: I1209 10:19:43.016997 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df/probe/0.log" Dec 09 10:19:43 crc kubenswrapper[4786]: I1209 10:19:43.026251 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-volume-nfs-2-0_24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df/cinder-volume/0.log" Dec 09 10:19:43 crc kubenswrapper[4786]: I1209 10:19:43.336022 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-cks8r_e084b124-3f74-48a9-a0e4-6c9bea0d7875/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 10:19:43 crc kubenswrapper[4786]: I1209 10:19:43.342579 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-tx29k_e2ad540b-313c-4600-bf54-c14c9a6a2969/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 10:19:43 crc kubenswrapper[4786]: I1209 10:19:43.591039 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69b5dcdcbf-2dgn9_973019e1-5fa9-49b7-b291-fdd553108517/init/0.log" Dec 09 10:19:43 crc kubenswrapper[4786]: I1209 10:19:43.731361 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69b5dcdcbf-2dgn9_973019e1-5fa9-49b7-b291-fdd553108517/init/0.log" Dec 09 10:19:43 crc kubenswrapper[4786]: I1209 10:19:43.854531 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69b5dcdcbf-2dgn9_973019e1-5fa9-49b7-b291-fdd553108517/dnsmasq-dns/0.log" Dec 09 10:19:43 crc kubenswrapper[4786]: I1209 10:19:43.858990 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-kn7qm_c31240e0-f612-4759-b933-3c2d89a10da3/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 10:19:44 crc kubenswrapper[4786]: I1209 10:19:44.035279 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_c137f18e-fc1e-42ac-a96b-c990c55664f7/glance-httpd/0.log" Dec 09 10:19:44 crc kubenswrapper[4786]: I1209 10:19:44.154380 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_c137f18e-fc1e-42ac-a96b-c990c55664f7/glance-log/0.log" Dec 09 10:19:44 crc kubenswrapper[4786]: I1209 10:19:44.340747 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_c7759b2f-fa10-4c57-845a-773289198d2e/glance-httpd/0.log" Dec 09 10:19:44 crc kubenswrapper[4786]: I1209 10:19:44.386979 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_c7759b2f-fa10-4c57-845a-773289198d2e/glance-log/0.log" Dec 09 10:19:44 crc kubenswrapper[4786]: I1209 10:19:44.527462 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-655866bfb6-4l6wv_52a5cc59-1e73-4e04-ba05-80f0c364b351/horizon/0.log" Dec 09 10:19:45 crc kubenswrapper[4786]: I1209 10:19:45.033088 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2_a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 10:19:45 crc kubenswrapper[4786]: I1209 10:19:45.178896 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-bnlh7_d67c7c4e-faa8-427e-953b-829c4e277994/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 10:19:45 crc kubenswrapper[4786]: I1209 10:19:45.466818 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-655866bfb6-4l6wv_52a5cc59-1e73-4e04-ba05-80f0c364b351/horizon-log/0.log" Dec 09 10:19:45 crc kubenswrapper[4786]: I1209 10:19:45.637218 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-bc97f477f-xr7xf_078ae2f6-b658-48a8-b4c2-cff5f3847bd3/keystone-api/0.log" Dec 09 10:19:45 crc kubenswrapper[4786]: I1209 10:19:45.790530 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-cron-29421241-g22g6_16354109-b784-40f5-b196-1f1972f99264/keystone-cron/0.log" Dec 09 10:19:45 crc kubenswrapper[4786]: I1209 10:19:45.816612 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_e176f4e3-203b-4784-be15-d5e306723d08/kube-state-metrics/0.log" Dec 09 10:19:46 crc kubenswrapper[4786]: I1209 10:19:46.014449 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-hslht_e0e78e74-699b-442e-a5bd-6c598b2e0fb4/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 10:19:46 crc kubenswrapper[4786]: I1209 10:19:46.582805 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5cb696f46f-55kzl_d41935c7-99f8-4d52-b0f4-691563bea9ee/neutron-httpd/0.log" Dec 09 10:19:46 crc kubenswrapper[4786]: I1209 10:19:46.592770 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-cb6wn_17753443-b80a-43e1-9256-b7c0f392dad5/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 10:19:46 crc kubenswrapper[4786]: I1209 10:19:46.634269 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5cb696f46f-55kzl_d41935c7-99f8-4d52-b0f4-691563bea9ee/neutron-api/0.log" Dec 09 10:19:47 crc kubenswrapper[4786]: I1209 10:19:47.199280 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_93a4b0d1-a277-448d-bab3-03c7131f23bf/nova-cell0-conductor-conductor/0.log" Dec 09 10:19:47 crc kubenswrapper[4786]: I1209 10:19:47.643331 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_8e4f1885-8b7d-4c02-9a83-22eb4aa3c7f4/nova-cell1-conductor-conductor/0.log" Dec 09 10:19:47 crc kubenswrapper[4786]: I1209 10:19:47.827280 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell1-novncproxy-0_349ccf59-f627-4673-84e7-215fc9d15e27/nova-cell1-novncproxy-novncproxy/0.log" Dec 09 10:19:48 crc kubenswrapper[4786]: I1209 10:19:48.188175 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-lnqtd_f1d1af6a-883d-4d29-8d4d-b477e99c2df5/nova-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 10:19:48 crc kubenswrapper[4786]: I1209 10:19:48.416962 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_052f7fa7-4f28-421a-a1a4-d262f5d8c2de/nova-api-log/0.log" Dec 09 10:19:48 crc kubenswrapper[4786]: I1209 10:19:48.574266 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_8a5c44e4-7244-46f8-983b-de6cd923bd74/nova-metadata-log/0.log" Dec 09 10:19:48 crc kubenswrapper[4786]: I1209 10:19:48.818961 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_052f7fa7-4f28-421a-a1a4-d262f5d8c2de/nova-api-api/0.log" Dec 09 10:19:49 crc kubenswrapper[4786]: I1209 10:19:49.114015 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_8066cc20-76cd-4a47-a662-fb77cd5cbe3b/mysql-bootstrap/0.log" Dec 09 10:19:49 crc kubenswrapper[4786]: I1209 10:19:49.194885 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_3c7e4255-6f20-4911-b8d9-862fb7b801da/nova-scheduler-scheduler/0.log" Dec 09 10:19:49 crc kubenswrapper[4786]: I1209 10:19:49.293529 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pk9rx"] Dec 09 10:19:49 crc kubenswrapper[4786]: E1209 10:19:49.294081 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0be41983-8494-494d-9e8d-95891fc99ddd" containerName="container-00" Dec 09 10:19:49 crc kubenswrapper[4786]: I1209 10:19:49.294102 4786 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0be41983-8494-494d-9e8d-95891fc99ddd" containerName="container-00" Dec 09 10:19:49 crc kubenswrapper[4786]: I1209 10:19:49.294318 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="0be41983-8494-494d-9e8d-95891fc99ddd" containerName="container-00" Dec 09 10:19:49 crc kubenswrapper[4786]: I1209 10:19:49.295990 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pk9rx" Dec 09 10:19:49 crc kubenswrapper[4786]: I1209 10:19:49.311577 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pk9rx"] Dec 09 10:19:49 crc kubenswrapper[4786]: I1209 10:19:49.440359 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qmml\" (UniqueName: \"kubernetes.io/projected/deb14f3e-f3e8-429f-adea-800d6d8d93f5-kube-api-access-5qmml\") pod \"certified-operators-pk9rx\" (UID: \"deb14f3e-f3e8-429f-adea-800d6d8d93f5\") " pod="openshift-marketplace/certified-operators-pk9rx" Dec 09 10:19:49 crc kubenswrapper[4786]: I1209 10:19:49.440538 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/deb14f3e-f3e8-429f-adea-800d6d8d93f5-utilities\") pod \"certified-operators-pk9rx\" (UID: \"deb14f3e-f3e8-429f-adea-800d6d8d93f5\") " pod="openshift-marketplace/certified-operators-pk9rx" Dec 09 10:19:49 crc kubenswrapper[4786]: I1209 10:19:49.440586 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/deb14f3e-f3e8-429f-adea-800d6d8d93f5-catalog-content\") pod \"certified-operators-pk9rx\" (UID: \"deb14f3e-f3e8-429f-adea-800d6d8d93f5\") " pod="openshift-marketplace/certified-operators-pk9rx" Dec 09 10:19:49 crc kubenswrapper[4786]: I1209 10:19:49.542610 4786 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/deb14f3e-f3e8-429f-adea-800d6d8d93f5-utilities\") pod \"certified-operators-pk9rx\" (UID: \"deb14f3e-f3e8-429f-adea-800d6d8d93f5\") " pod="openshift-marketplace/certified-operators-pk9rx" Dec 09 10:19:49 crc kubenswrapper[4786]: I1209 10:19:49.542667 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/deb14f3e-f3e8-429f-adea-800d6d8d93f5-catalog-content\") pod \"certified-operators-pk9rx\" (UID: \"deb14f3e-f3e8-429f-adea-800d6d8d93f5\") " pod="openshift-marketplace/certified-operators-pk9rx" Dec 09 10:19:49 crc kubenswrapper[4786]: I1209 10:19:49.542775 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qmml\" (UniqueName: \"kubernetes.io/projected/deb14f3e-f3e8-429f-adea-800d6d8d93f5-kube-api-access-5qmml\") pod \"certified-operators-pk9rx\" (UID: \"deb14f3e-f3e8-429f-adea-800d6d8d93f5\") " pod="openshift-marketplace/certified-operators-pk9rx" Dec 09 10:19:49 crc kubenswrapper[4786]: I1209 10:19:49.543212 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/deb14f3e-f3e8-429f-adea-800d6d8d93f5-catalog-content\") pod \"certified-operators-pk9rx\" (UID: \"deb14f3e-f3e8-429f-adea-800d6d8d93f5\") " pod="openshift-marketplace/certified-operators-pk9rx" Dec 09 10:19:49 crc kubenswrapper[4786]: I1209 10:19:49.543515 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/deb14f3e-f3e8-429f-adea-800d6d8d93f5-utilities\") pod \"certified-operators-pk9rx\" (UID: \"deb14f3e-f3e8-429f-adea-800d6d8d93f5\") " pod="openshift-marketplace/certified-operators-pk9rx" Dec 09 10:19:49 crc kubenswrapper[4786]: I1209 10:19:49.580507 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-5qmml\" (UniqueName: \"kubernetes.io/projected/deb14f3e-f3e8-429f-adea-800d6d8d93f5-kube-api-access-5qmml\") pod \"certified-operators-pk9rx\" (UID: \"deb14f3e-f3e8-429f-adea-800d6d8d93f5\") " pod="openshift-marketplace/certified-operators-pk9rx" Dec 09 10:19:49 crc kubenswrapper[4786]: I1209 10:19:49.666987 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pk9rx" Dec 09 10:19:49 crc kubenswrapper[4786]: I1209 10:19:49.817935 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_8066cc20-76cd-4a47-a662-fb77cd5cbe3b/mysql-bootstrap/0.log" Dec 09 10:19:49 crc kubenswrapper[4786]: I1209 10:19:49.842105 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_8066cc20-76cd-4a47-a662-fb77cd5cbe3b/galera/0.log" Dec 09 10:19:50 crc kubenswrapper[4786]: I1209 10:19:50.253492 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_673b3525-c496-4268-b9f9-c37f5175efdc/mysql-bootstrap/0.log" Dec 09 10:19:50 crc kubenswrapper[4786]: I1209 10:19:50.387399 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pk9rx" event={"ID":"deb14f3e-f3e8-429f-adea-800d6d8d93f5","Type":"ContainerStarted","Data":"1e738058cf593322e56b3fc26d324870b9cd47046db81c2db0036cdb496175f6"} Dec 09 10:19:50 crc kubenswrapper[4786]: I1209 10:19:50.393178 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pk9rx"] Dec 09 10:19:50 crc kubenswrapper[4786]: I1209 10:19:50.772770 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_673b3525-c496-4268-b9f9-c37f5175efdc/mysql-bootstrap/0.log" Dec 09 10:19:50 crc kubenswrapper[4786]: I1209 10:19:50.843147 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-galera-0_673b3525-c496-4268-b9f9-c37f5175efdc/galera/0.log" Dec 09 10:19:51 crc kubenswrapper[4786]: I1209 10:19:51.060601 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_2423b332-8b9b-4a26-996b-582194eca3b7/openstackclient/0.log" Dec 09 10:19:51 crc kubenswrapper[4786]: I1209 10:19:51.239465 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-s6vc4_283c4e6d-aae7-4c99-97dd-9da311e7efd3/openstack-network-exporter/0.log" Dec 09 10:19:51 crc kubenswrapper[4786]: I1209 10:19:51.401270 4786 generic.go:334] "Generic (PLEG): container finished" podID="deb14f3e-f3e8-429f-adea-800d6d8d93f5" containerID="b285636be96b651896d8e2da2990224d9348d973143cc49024f72265f3d8de5c" exitCode=0 Dec 09 10:19:51 crc kubenswrapper[4786]: I1209 10:19:51.401581 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pk9rx" event={"ID":"deb14f3e-f3e8-429f-adea-800d6d8d93f5","Type":"ContainerDied","Data":"b285636be96b651896d8e2da2990224d9348d973143cc49024f72265f3d8de5c"} Dec 09 10:19:51 crc kubenswrapper[4786]: I1209 10:19:51.404966 4786 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 10:19:51 crc kubenswrapper[4786]: I1209 10:19:51.448881 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_8a5c44e4-7244-46f8-983b-de6cd923bd74/nova-metadata-metadata/0.log" Dec 09 10:19:51 crc kubenswrapper[4786]: I1209 10:19:51.472046 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-lqv95_df80e8f4-2c7e-44a0-b2f4-3c0b652c6fb0/ovsdb-server-init/0.log" Dec 09 10:19:51 crc kubenswrapper[4786]: I1209 10:19:51.800894 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-lqv95_df80e8f4-2c7e-44a0-b2f4-3c0b652c6fb0/ovsdb-server-init/0.log" Dec 09 10:19:51 crc kubenswrapper[4786]: 
I1209 10:19:51.822975 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-lqv95_df80e8f4-2c7e-44a0-b2f4-3c0b652c6fb0/ovsdb-server/0.log" Dec 09 10:19:52 crc kubenswrapper[4786]: I1209 10:19:52.028369 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-vv7k4_7cad500b-e392-4774-a524-02587da67379/ovn-controller/0.log" Dec 09 10:19:52 crc kubenswrapper[4786]: I1209 10:19:52.156805 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-lqv95_df80e8f4-2c7e-44a0-b2f4-3c0b652c6fb0/ovs-vswitchd/0.log" Dec 09 10:19:52 crc kubenswrapper[4786]: I1209 10:19:52.171500 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-jhml4_38e0ab2d-0650-44b5-bc00-adeb40608783/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 10:19:52 crc kubenswrapper[4786]: I1209 10:19:52.412834 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pk9rx" event={"ID":"deb14f3e-f3e8-429f-adea-800d6d8d93f5","Type":"ContainerStarted","Data":"5038edee34d1bcb28762eb3423ffbca81f4e297d77323b00c05acf6f5026785e"} Dec 09 10:19:52 crc kubenswrapper[4786]: I1209 10:19:52.499138 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_f79a7868-1d59-4d6f-ac51-528c634a9b4f/ovn-northd/0.log" Dec 09 10:19:52 crc kubenswrapper[4786]: I1209 10:19:52.502133 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_f79a7868-1d59-4d6f-ac51-528c634a9b4f/openstack-network-exporter/0.log" Dec 09 10:19:52 crc kubenswrapper[4786]: I1209 10:19:52.729347 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_89d6bade-9172-4b73-8879-9f23d0834b93/openstack-network-exporter/0.log" Dec 09 10:19:52 crc kubenswrapper[4786]: I1209 10:19:52.907859 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovsdbserver-sb-0_dd1f4f6b-6ed9-46bc-bf88-bd17fcc6069f/openstack-network-exporter/0.log" Dec 09 10:19:53 crc kubenswrapper[4786]: I1209 10:19:53.011380 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_89d6bade-9172-4b73-8879-9f23d0834b93/ovsdbserver-nb/0.log" Dec 09 10:19:53 crc kubenswrapper[4786]: I1209 10:19:53.034075 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_dd1f4f6b-6ed9-46bc-bf88-bd17fcc6069f/ovsdbserver-sb/0.log" Dec 09 10:19:53 crc kubenswrapper[4786]: I1209 10:19:53.509378 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7ccfbc9bd6-76jl9_2cec6ba1-ef2c-4cf9-881b-fdc57687c17c/placement-api/0.log" Dec 09 10:19:53 crc kubenswrapper[4786]: I1209 10:19:53.963996 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0fec9fc6-5660-4127-95a1-63f6abee883e/init-config-reloader/0.log" Dec 09 10:19:53 crc kubenswrapper[4786]: I1209 10:19:53.975844 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0fec9fc6-5660-4127-95a1-63f6abee883e/init-config-reloader/0.log" Dec 09 10:19:54 crc kubenswrapper[4786]: I1209 10:19:54.012298 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0fec9fc6-5660-4127-95a1-63f6abee883e/config-reloader/0.log" Dec 09 10:19:54 crc kubenswrapper[4786]: I1209 10:19:54.101345 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7ccfbc9bd6-76jl9_2cec6ba1-ef2c-4cf9-881b-fdc57687c17c/placement-log/0.log" Dec 09 10:19:54 crc kubenswrapper[4786]: I1209 10:19:54.260234 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0fec9fc6-5660-4127-95a1-63f6abee883e/prometheus/0.log" Dec 09 10:19:54 crc kubenswrapper[4786]: I1209 10:19:54.374909 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_prometheus-metric-storage-0_0fec9fc6-5660-4127-95a1-63f6abee883e/thanos-sidecar/0.log" Dec 09 10:19:54 crc kubenswrapper[4786]: I1209 10:19:54.403033 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f44944dd-abf7-402f-a3d4-93e17d0a760b/setup-container/0.log" Dec 09 10:19:54 crc kubenswrapper[4786]: I1209 10:19:54.453540 4786 generic.go:334] "Generic (PLEG): container finished" podID="deb14f3e-f3e8-429f-adea-800d6d8d93f5" containerID="5038edee34d1bcb28762eb3423ffbca81f4e297d77323b00c05acf6f5026785e" exitCode=0 Dec 09 10:19:54 crc kubenswrapper[4786]: I1209 10:19:54.453583 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pk9rx" event={"ID":"deb14f3e-f3e8-429f-adea-800d6d8d93f5","Type":"ContainerDied","Data":"5038edee34d1bcb28762eb3423ffbca81f4e297d77323b00c05acf6f5026785e"} Dec 09 10:19:54 crc kubenswrapper[4786]: I1209 10:19:54.709109 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f44944dd-abf7-402f-a3d4-93e17d0a760b/setup-container/0.log" Dec 09 10:19:54 crc kubenswrapper[4786]: I1209 10:19:54.710087 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_77b48dc5-f201-422e-9983-368555119d75/memcached/0.log" Dec 09 10:19:54 crc kubenswrapper[4786]: I1209 10:19:54.716383 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_6c5ae8f0-bfa8-4fe2-81c3-289021674179/setup-container/0.log" Dec 09 10:19:54 crc kubenswrapper[4786]: I1209 10:19:54.778648 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f44944dd-abf7-402f-a3d4-93e17d0a760b/rabbitmq/0.log" Dec 09 10:19:54 crc kubenswrapper[4786]: I1209 10:19:54.989967 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_rabbitmq-notifications-server-0_6c5ae8f0-bfa8-4fe2-81c3-289021674179/rabbitmq/0.log" Dec 09 10:19:54 crc kubenswrapper[4786]: I1209 10:19:54.991854 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 10:19:54 crc kubenswrapper[4786]: I1209 10:19:54.991905 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 10:19:54 crc kubenswrapper[4786]: I1209 10:19:54.998179 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_6c5ae8f0-bfa8-4fe2-81c3-289021674179/setup-container/0.log" Dec 09 10:19:55 crc kubenswrapper[4786]: I1209 10:19:55.034842 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_72675e09-5efb-4dc9-bc17-25b93ecf7537/setup-container/0.log" Dec 09 10:19:55 crc kubenswrapper[4786]: I1209 10:19:55.276481 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_72675e09-5efb-4dc9-bc17-25b93ecf7537/rabbitmq/0.log" Dec 09 10:19:55 crc kubenswrapper[4786]: I1209 10:19:55.296671 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_72675e09-5efb-4dc9-bc17-25b93ecf7537/setup-container/0.log" Dec 09 10:19:55 crc kubenswrapper[4786]: I1209 10:19:55.330861 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-th8j9_e5211467-fc31-4051-8e46-6b59e77d217b/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 10:19:55 crc kubenswrapper[4786]: I1209 10:19:55.465933 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pk9rx" event={"ID":"deb14f3e-f3e8-429f-adea-800d6d8d93f5","Type":"ContainerStarted","Data":"fb9708de3d73274494c9266f573030794dc84d8eb5c664d57a356421ec674f00"} Dec 09 10:19:55 crc kubenswrapper[4786]: I1209 10:19:55.494523 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pk9rx" podStartSLOduration=2.97492905 podStartE2EDuration="6.494502898s" podCreationTimestamp="2025-12-09 10:19:49 +0000 UTC" firstStartedPulling="2025-12-09 10:19:51.404654533 +0000 UTC m=+5757.288275769" lastFinishedPulling="2025-12-09 10:19:54.924228391 +0000 UTC m=+5760.807849617" observedRunningTime="2025-12-09 10:19:55.485900178 +0000 UTC m=+5761.369521414" watchObservedRunningTime="2025-12-09 10:19:55.494502898 +0000 UTC m=+5761.378124124" Dec 09 10:19:55 crc kubenswrapper[4786]: I1209 10:19:55.522549 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-hmb7n_eb43b8bf-02ae-4d5d-82f6-3262125035f1/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 10:19:55 crc kubenswrapper[4786]: I1209 10:19:55.588416 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-7gpqm_c88eeae2-e339-4f55-a0ee-c5fa8e611253/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 10:19:55 crc kubenswrapper[4786]: I1209 10:19:55.684389 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-pjlq8_a4285d16-6fd5-45d2-b13c-1ed58d8ca8f1/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 10:19:55 crc kubenswrapper[4786]: 
I1209 10:19:55.777665 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-swwrk_a1e77ab0-8d5d-421a-97df-bde0fa1abdfe/ssh-known-hosts-edpm-deployment/0.log" Dec 09 10:19:55 crc kubenswrapper[4786]: I1209 10:19:55.987702 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-55bfbb9895-lchg7_025e29a5-c1a7-46fe-a47d-4b3248fd6320/proxy-server/0.log" Dec 09 10:19:56 crc kubenswrapper[4786]: I1209 10:19:56.004529 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-55bfbb9895-lchg7_025e29a5-c1a7-46fe-a47d-4b3248fd6320/proxy-httpd/0.log" Dec 09 10:19:56 crc kubenswrapper[4786]: I1209 10:19:56.098045 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-ph8nr_47aa56da-70ed-4ee4-a83c-35c8116a0ec3/swift-ring-rebalance/0.log" Dec 09 10:19:56 crc kubenswrapper[4786]: I1209 10:19:56.257085 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ed655e06-206a-407f-8651-56e042d74cd1/account-auditor/0.log" Dec 09 10:19:56 crc kubenswrapper[4786]: I1209 10:19:56.268746 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ed655e06-206a-407f-8651-56e042d74cd1/account-reaper/0.log" Dec 09 10:19:56 crc kubenswrapper[4786]: I1209 10:19:56.321017 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ed655e06-206a-407f-8651-56e042d74cd1/account-server/0.log" Dec 09 10:19:56 crc kubenswrapper[4786]: I1209 10:19:56.377881 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ed655e06-206a-407f-8651-56e042d74cd1/account-replicator/0.log" Dec 09 10:19:56 crc kubenswrapper[4786]: I1209 10:19:56.464448 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ed655e06-206a-407f-8651-56e042d74cd1/container-auditor/0.log" Dec 09 10:19:56 crc kubenswrapper[4786]: I1209 
10:19:56.564305 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ed655e06-206a-407f-8651-56e042d74cd1/container-replicator/0.log" Dec 09 10:19:56 crc kubenswrapper[4786]: I1209 10:19:56.568317 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ed655e06-206a-407f-8651-56e042d74cd1/container-updater/0.log" Dec 09 10:19:56 crc kubenswrapper[4786]: I1209 10:19:56.587186 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ed655e06-206a-407f-8651-56e042d74cd1/container-server/0.log" Dec 09 10:19:56 crc kubenswrapper[4786]: I1209 10:19:56.686043 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ed655e06-206a-407f-8651-56e042d74cd1/object-auditor/0.log" Dec 09 10:19:56 crc kubenswrapper[4786]: I1209 10:19:56.789389 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ed655e06-206a-407f-8651-56e042d74cd1/object-expirer/0.log" Dec 09 10:19:56 crc kubenswrapper[4786]: I1209 10:19:56.834481 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ed655e06-206a-407f-8651-56e042d74cd1/object-server/0.log" Dec 09 10:19:56 crc kubenswrapper[4786]: I1209 10:19:56.863404 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ed655e06-206a-407f-8651-56e042d74cd1/object-replicator/0.log" Dec 09 10:19:56 crc kubenswrapper[4786]: I1209 10:19:56.872588 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ed655e06-206a-407f-8651-56e042d74cd1/object-updater/0.log" Dec 09 10:19:56 crc kubenswrapper[4786]: I1209 10:19:56.977687 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ed655e06-206a-407f-8651-56e042d74cd1/rsync/0.log" Dec 09 10:19:57 crc kubenswrapper[4786]: I1209 10:19:57.014854 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_ed655e06-206a-407f-8651-56e042d74cd1/swift-recon-cron/0.log" Dec 09 10:19:57 crc kubenswrapper[4786]: I1209 10:19:57.378049 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-s9qhv_b1717b33-b022-49ed-94fc-2160247ac3bd/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 10:19:57 crc kubenswrapper[4786]: I1209 10:19:57.509842 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_0112bf44-5116-4b72-a860-4fc091e5dc27/tempest-tests-tempest-tests-runner/0.log" Dec 09 10:19:57 crc kubenswrapper[4786]: I1209 10:19:57.569503 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_0ffa3268-7c4f-4069-bc33-50db10708dce/test-operator-logs-container/0.log" Dec 09 10:19:57 crc kubenswrapper[4786]: I1209 10:19:57.702832 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-kg2wg_32d7a244-874b-45f3-844a-402e668af86d/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 10:19:58 crc kubenswrapper[4786]: I1209 10:19:58.484007 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-applier-0_6c233b45-5e1c-4c8c-a3ba-d71a89838114/watcher-applier/0.log" Dec 09 10:19:59 crc kubenswrapper[4786]: I1209 10:19:59.067190 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_0d0cae5a-5b07-4046-9640-9734ea4e44c4/watcher-api-log/0.log" Dec 09 10:19:59 crc kubenswrapper[4786]: I1209 10:19:59.667192 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pk9rx" Dec 09 10:19:59 crc kubenswrapper[4786]: I1209 10:19:59.668067 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pk9rx" Dec 09 
10:19:59 crc kubenswrapper[4786]: I1209 10:19:59.715248 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pk9rx" Dec 09 10:20:00 crc kubenswrapper[4786]: I1209 10:20:00.573708 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pk9rx" Dec 09 10:20:00 crc kubenswrapper[4786]: I1209 10:20:00.626808 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pk9rx"] Dec 09 10:20:01 crc kubenswrapper[4786]: I1209 10:20:01.350170 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-decision-engine-0_61ffd12d-d64c-461b-a3c1-271d523a8de6/watcher-decision-engine/0.log" Dec 09 10:20:02 crc kubenswrapper[4786]: I1209 10:20:02.207831 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_0d0cae5a-5b07-4046-9640-9734ea4e44c4/watcher-api/0.log" Dec 09 10:20:02 crc kubenswrapper[4786]: I1209 10:20:02.540492 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pk9rx" podUID="deb14f3e-f3e8-429f-adea-800d6d8d93f5" containerName="registry-server" containerID="cri-o://fb9708de3d73274494c9266f573030794dc84d8eb5c664d57a356421ec674f00" gracePeriod=2 Dec 09 10:20:03 crc kubenswrapper[4786]: I1209 10:20:03.047561 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pk9rx" Dec 09 10:20:03 crc kubenswrapper[4786]: I1209 10:20:03.217520 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/deb14f3e-f3e8-429f-adea-800d6d8d93f5-catalog-content\") pod \"deb14f3e-f3e8-429f-adea-800d6d8d93f5\" (UID: \"deb14f3e-f3e8-429f-adea-800d6d8d93f5\") " Dec 09 10:20:03 crc kubenswrapper[4786]: I1209 10:20:03.217610 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/deb14f3e-f3e8-429f-adea-800d6d8d93f5-utilities\") pod \"deb14f3e-f3e8-429f-adea-800d6d8d93f5\" (UID: \"deb14f3e-f3e8-429f-adea-800d6d8d93f5\") " Dec 09 10:20:03 crc kubenswrapper[4786]: I1209 10:20:03.217824 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qmml\" (UniqueName: \"kubernetes.io/projected/deb14f3e-f3e8-429f-adea-800d6d8d93f5-kube-api-access-5qmml\") pod \"deb14f3e-f3e8-429f-adea-800d6d8d93f5\" (UID: \"deb14f3e-f3e8-429f-adea-800d6d8d93f5\") " Dec 09 10:20:03 crc kubenswrapper[4786]: I1209 10:20:03.218533 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/deb14f3e-f3e8-429f-adea-800d6d8d93f5-utilities" (OuterVolumeSpecName: "utilities") pod "deb14f3e-f3e8-429f-adea-800d6d8d93f5" (UID: "deb14f3e-f3e8-429f-adea-800d6d8d93f5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 10:20:03 crc kubenswrapper[4786]: I1209 10:20:03.219517 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/deb14f3e-f3e8-429f-adea-800d6d8d93f5-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 10:20:03 crc kubenswrapper[4786]: I1209 10:20:03.237406 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/deb14f3e-f3e8-429f-adea-800d6d8d93f5-kube-api-access-5qmml" (OuterVolumeSpecName: "kube-api-access-5qmml") pod "deb14f3e-f3e8-429f-adea-800d6d8d93f5" (UID: "deb14f3e-f3e8-429f-adea-800d6d8d93f5"). InnerVolumeSpecName "kube-api-access-5qmml". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 10:20:03 crc kubenswrapper[4786]: I1209 10:20:03.270065 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/deb14f3e-f3e8-429f-adea-800d6d8d93f5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "deb14f3e-f3e8-429f-adea-800d6d8d93f5" (UID: "deb14f3e-f3e8-429f-adea-800d6d8d93f5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 10:20:03 crc kubenswrapper[4786]: I1209 10:20:03.321262 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/deb14f3e-f3e8-429f-adea-800d6d8d93f5-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 10:20:03 crc kubenswrapper[4786]: I1209 10:20:03.321320 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qmml\" (UniqueName: \"kubernetes.io/projected/deb14f3e-f3e8-429f-adea-800d6d8d93f5-kube-api-access-5qmml\") on node \"crc\" DevicePath \"\"" Dec 09 10:20:03 crc kubenswrapper[4786]: I1209 10:20:03.553667 4786 generic.go:334] "Generic (PLEG): container finished" podID="deb14f3e-f3e8-429f-adea-800d6d8d93f5" containerID="fb9708de3d73274494c9266f573030794dc84d8eb5c664d57a356421ec674f00" exitCode=0 Dec 09 10:20:03 crc kubenswrapper[4786]: I1209 10:20:03.553717 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pk9rx" event={"ID":"deb14f3e-f3e8-429f-adea-800d6d8d93f5","Type":"ContainerDied","Data":"fb9708de3d73274494c9266f573030794dc84d8eb5c664d57a356421ec674f00"} Dec 09 10:20:03 crc kubenswrapper[4786]: I1209 10:20:03.553748 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pk9rx" event={"ID":"deb14f3e-f3e8-429f-adea-800d6d8d93f5","Type":"ContainerDied","Data":"1e738058cf593322e56b3fc26d324870b9cd47046db81c2db0036cdb496175f6"} Dec 09 10:20:03 crc kubenswrapper[4786]: I1209 10:20:03.553754 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pk9rx" Dec 09 10:20:03 crc kubenswrapper[4786]: I1209 10:20:03.553766 4786 scope.go:117] "RemoveContainer" containerID="fb9708de3d73274494c9266f573030794dc84d8eb5c664d57a356421ec674f00" Dec 09 10:20:03 crc kubenswrapper[4786]: I1209 10:20:03.580332 4786 scope.go:117] "RemoveContainer" containerID="5038edee34d1bcb28762eb3423ffbca81f4e297d77323b00c05acf6f5026785e" Dec 09 10:20:03 crc kubenswrapper[4786]: I1209 10:20:03.611145 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pk9rx"] Dec 09 10:20:03 crc kubenswrapper[4786]: I1209 10:20:03.630546 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pk9rx"] Dec 09 10:20:03 crc kubenswrapper[4786]: I1209 10:20:03.645580 4786 scope.go:117] "RemoveContainer" containerID="b285636be96b651896d8e2da2990224d9348d973143cc49024f72265f3d8de5c" Dec 09 10:20:03 crc kubenswrapper[4786]: I1209 10:20:03.681650 4786 scope.go:117] "RemoveContainer" containerID="fb9708de3d73274494c9266f573030794dc84d8eb5c664d57a356421ec674f00" Dec 09 10:20:03 crc kubenswrapper[4786]: E1209 10:20:03.682492 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb9708de3d73274494c9266f573030794dc84d8eb5c664d57a356421ec674f00\": container with ID starting with fb9708de3d73274494c9266f573030794dc84d8eb5c664d57a356421ec674f00 not found: ID does not exist" containerID="fb9708de3d73274494c9266f573030794dc84d8eb5c664d57a356421ec674f00" Dec 09 10:20:03 crc kubenswrapper[4786]: I1209 10:20:03.682590 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb9708de3d73274494c9266f573030794dc84d8eb5c664d57a356421ec674f00"} err="failed to get container status \"fb9708de3d73274494c9266f573030794dc84d8eb5c664d57a356421ec674f00\": rpc error: code = NotFound desc = could not find 
container \"fb9708de3d73274494c9266f573030794dc84d8eb5c664d57a356421ec674f00\": container with ID starting with fb9708de3d73274494c9266f573030794dc84d8eb5c664d57a356421ec674f00 not found: ID does not exist" Dec 09 10:20:03 crc kubenswrapper[4786]: I1209 10:20:03.682704 4786 scope.go:117] "RemoveContainer" containerID="5038edee34d1bcb28762eb3423ffbca81f4e297d77323b00c05acf6f5026785e" Dec 09 10:20:03 crc kubenswrapper[4786]: E1209 10:20:03.683234 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5038edee34d1bcb28762eb3423ffbca81f4e297d77323b00c05acf6f5026785e\": container with ID starting with 5038edee34d1bcb28762eb3423ffbca81f4e297d77323b00c05acf6f5026785e not found: ID does not exist" containerID="5038edee34d1bcb28762eb3423ffbca81f4e297d77323b00c05acf6f5026785e" Dec 09 10:20:03 crc kubenswrapper[4786]: I1209 10:20:03.683310 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5038edee34d1bcb28762eb3423ffbca81f4e297d77323b00c05acf6f5026785e"} err="failed to get container status \"5038edee34d1bcb28762eb3423ffbca81f4e297d77323b00c05acf6f5026785e\": rpc error: code = NotFound desc = could not find container \"5038edee34d1bcb28762eb3423ffbca81f4e297d77323b00c05acf6f5026785e\": container with ID starting with 5038edee34d1bcb28762eb3423ffbca81f4e297d77323b00c05acf6f5026785e not found: ID does not exist" Dec 09 10:20:03 crc kubenswrapper[4786]: I1209 10:20:03.683376 4786 scope.go:117] "RemoveContainer" containerID="b285636be96b651896d8e2da2990224d9348d973143cc49024f72265f3d8de5c" Dec 09 10:20:03 crc kubenswrapper[4786]: E1209 10:20:03.684962 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b285636be96b651896d8e2da2990224d9348d973143cc49024f72265f3d8de5c\": container with ID starting with b285636be96b651896d8e2da2990224d9348d973143cc49024f72265f3d8de5c not found: ID does 
not exist" containerID="b285636be96b651896d8e2da2990224d9348d973143cc49024f72265f3d8de5c" Dec 09 10:20:03 crc kubenswrapper[4786]: I1209 10:20:03.685062 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b285636be96b651896d8e2da2990224d9348d973143cc49024f72265f3d8de5c"} err="failed to get container status \"b285636be96b651896d8e2da2990224d9348d973143cc49024f72265f3d8de5c\": rpc error: code = NotFound desc = could not find container \"b285636be96b651896d8e2da2990224d9348d973143cc49024f72265f3d8de5c\": container with ID starting with b285636be96b651896d8e2da2990224d9348d973143cc49024f72265f3d8de5c not found: ID does not exist" Dec 09 10:20:05 crc kubenswrapper[4786]: I1209 10:20:05.200084 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="deb14f3e-f3e8-429f-adea-800d6d8d93f5" path="/var/lib/kubelet/pods/deb14f3e-f3e8-429f-adea-800d6d8d93f5/volumes" Dec 09 10:20:24 crc kubenswrapper[4786]: I1209 10:20:24.988835 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 10:20:24 crc kubenswrapper[4786]: I1209 10:20:24.989371 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 10:20:24 crc kubenswrapper[4786]: I1209 10:20:24.989442 4786 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" Dec 09 10:20:24 crc kubenswrapper[4786]: I1209 10:20:24.990301 4786 kuberuntime_manager.go:1027] "Message for 
Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f0db13fa3b4c8b38759dcf670bd52efbefab22fe875583a9fcc3f80bfcbf0a33"} pod="openshift-machine-config-operator/machine-config-daemon-86k5n" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 10:20:24 crc kubenswrapper[4786]: I1209 10:20:24.990349 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" containerID="cri-o://f0db13fa3b4c8b38759dcf670bd52efbefab22fe875583a9fcc3f80bfcbf0a33" gracePeriod=600 Dec 09 10:20:25 crc kubenswrapper[4786]: I1209 10:20:25.069550 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_563e5c6436268da7c348fcfe39963b8cd787fa11692ddc9f2134935157pxwcp_41a3cd42-650d-45c1-9664-5c5de59883ed/util/0.log" Dec 09 10:20:25 crc kubenswrapper[4786]: I1209 10:20:25.296575 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_563e5c6436268da7c348fcfe39963b8cd787fa11692ddc9f2134935157pxwcp_41a3cd42-650d-45c1-9664-5c5de59883ed/pull/0.log" Dec 09 10:20:25 crc kubenswrapper[4786]: I1209 10:20:25.345452 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_563e5c6436268da7c348fcfe39963b8cd787fa11692ddc9f2134935157pxwcp_41a3cd42-650d-45c1-9664-5c5de59883ed/pull/0.log" Dec 09 10:20:25 crc kubenswrapper[4786]: I1209 10:20:25.353070 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_563e5c6436268da7c348fcfe39963b8cd787fa11692ddc9f2134935157pxwcp_41a3cd42-650d-45c1-9664-5c5de59883ed/util/0.log" Dec 09 10:20:25 crc kubenswrapper[4786]: I1209 10:20:25.527281 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_563e5c6436268da7c348fcfe39963b8cd787fa11692ddc9f2134935157pxwcp_41a3cd42-650d-45c1-9664-5c5de59883ed/util/0.log" Dec 09 10:20:25 crc kubenswrapper[4786]: I1209 10:20:25.548699 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_563e5c6436268da7c348fcfe39963b8cd787fa11692ddc9f2134935157pxwcp_41a3cd42-650d-45c1-9664-5c5de59883ed/extract/0.log" Dec 09 10:20:25 crc kubenswrapper[4786]: I1209 10:20:25.578260 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_563e5c6436268da7c348fcfe39963b8cd787fa11692ddc9f2134935157pxwcp_41a3cd42-650d-45c1-9664-5c5de59883ed/pull/0.log" Dec 09 10:20:25 crc kubenswrapper[4786]: I1209 10:20:25.759246 4786 generic.go:334] "Generic (PLEG): container finished" podID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerID="f0db13fa3b4c8b38759dcf670bd52efbefab22fe875583a9fcc3f80bfcbf0a33" exitCode=0 Dec 09 10:20:25 crc kubenswrapper[4786]: I1209 10:20:25.759307 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" event={"ID":"60c502d4-7f9e-4d39-a197-fa70dc4a56d1","Type":"ContainerDied","Data":"f0db13fa3b4c8b38759dcf670bd52efbefab22fe875583a9fcc3f80bfcbf0a33"} Dec 09 10:20:25 crc kubenswrapper[4786]: I1209 10:20:25.759845 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" event={"ID":"60c502d4-7f9e-4d39-a197-fa70dc4a56d1","Type":"ContainerStarted","Data":"20ecfe54f5ccfb14285d4fec242fde22ae60896664f5a23bdfa24a734964b062"} Dec 09 10:20:25 crc kubenswrapper[4786]: I1209 10:20:25.759888 4786 scope.go:117] "RemoveContainer" containerID="420ef4c2df3db8917971bbcc33178a356bf08ed5726eddcadbba4252772309f3" Dec 09 10:20:25 crc kubenswrapper[4786]: I1209 10:20:25.824837 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-5bfbbb859d-pldv5_0ebdf904-eeaa-4d7b-8f51-10e721a91538/kube-rbac-proxy/0.log" Dec 09 10:20:25 crc kubenswrapper[4786]: I1209 10:20:25.844264 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-5bfbbb859d-pldv5_0ebdf904-eeaa-4d7b-8f51-10e721a91538/manager/0.log" Dec 09 10:20:25 crc kubenswrapper[4786]: I1209 10:20:25.872918 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-748967c98-w7gzc_405fe0da-3e24-42cd-b73d-9d0cfe700614/kube-rbac-proxy/0.log" Dec 09 10:20:26 crc kubenswrapper[4786]: I1209 10:20:26.145075 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6788cc6d75-6dlzn_2bd616d0-3367-48bb-94a5-a22302102b89/kube-rbac-proxy/0.log" Dec 09 10:20:26 crc kubenswrapper[4786]: I1209 10:20:26.168914 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-748967c98-w7gzc_405fe0da-3e24-42cd-b73d-9d0cfe700614/manager/0.log" Dec 09 10:20:26 crc kubenswrapper[4786]: I1209 10:20:26.184081 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6788cc6d75-6dlzn_2bd616d0-3367-48bb-94a5-a22302102b89/manager/0.log" Dec 09 10:20:26 crc kubenswrapper[4786]: I1209 10:20:26.373935 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-85fbd69fcd-q9mt5_ce1c8ba3-ded1-4b8e-81a6-fcbaa81de736/kube-rbac-proxy/0.log" Dec 09 10:20:26 crc kubenswrapper[4786]: I1209 10:20:26.469514 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-85fbd69fcd-q9mt5_ce1c8ba3-ded1-4b8e-81a6-fcbaa81de736/manager/0.log" Dec 09 10:20:26 crc kubenswrapper[4786]: I1209 
10:20:26.590297 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-698d6fd7d6-mft9w_f52a27b2-d045-4a4b-8fe5-0160004d9a5f/manager/0.log" Dec 09 10:20:26 crc kubenswrapper[4786]: I1209 10:20:26.694157 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-698d6fd7d6-mft9w_f52a27b2-d045-4a4b-8fe5-0160004d9a5f/kube-rbac-proxy/0.log" Dec 09 10:20:26 crc kubenswrapper[4786]: I1209 10:20:26.702828 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-7d5d9fd47f-rphwz_2ebe7b51-643e-4700-bf2f-cbe9546ae563/kube-rbac-proxy/0.log" Dec 09 10:20:26 crc kubenswrapper[4786]: I1209 10:20:26.806780 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-7d5d9fd47f-rphwz_2ebe7b51-643e-4700-bf2f-cbe9546ae563/manager/0.log" Dec 09 10:20:26 crc kubenswrapper[4786]: I1209 10:20:26.924319 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-6c55d8d69b-w52pz_d658b716-de31-47c0-a352-28f6260b0144/kube-rbac-proxy/0.log" Dec 09 10:20:27 crc kubenswrapper[4786]: I1209 10:20:27.104642 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-6c55d8d69b-w52pz_d658b716-de31-47c0-a352-28f6260b0144/manager/0.log" Dec 09 10:20:27 crc kubenswrapper[4786]: I1209 10:20:27.178338 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-54485f899-kk584_fd1844c2-cd01-475a-b2fa-e49c9223b7b4/manager/0.log" Dec 09 10:20:27 crc kubenswrapper[4786]: I1209 10:20:27.194526 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-54485f899-kk584_fd1844c2-cd01-475a-b2fa-e49c9223b7b4/kube-rbac-proxy/0.log" Dec 09 
10:20:27 crc kubenswrapper[4786]: I1209 10:20:27.329859 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-79cc9d59f5-lfmfw_9b7f6902-b444-48d4-b2d2-7342e62c8811/kube-rbac-proxy/0.log" Dec 09 10:20:27 crc kubenswrapper[4786]: I1209 10:20:27.447847 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-79cc9d59f5-lfmfw_9b7f6902-b444-48d4-b2d2-7342e62c8811/manager/0.log" Dec 09 10:20:27 crc kubenswrapper[4786]: I1209 10:20:27.522498 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5cbc8c7f96-9qxcn_12b96437-95ee-4267-8eb2-569b9a93ef8d/kube-rbac-proxy/0.log" Dec 09 10:20:27 crc kubenswrapper[4786]: I1209 10:20:27.614902 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5cbc8c7f96-9qxcn_12b96437-95ee-4267-8eb2-569b9a93ef8d/manager/0.log" Dec 09 10:20:27 crc kubenswrapper[4786]: I1209 10:20:27.685340 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-64d7c556cd-qqwdd_061bb0fd-451d-4d15-b979-a6ea9b833fb1/kube-rbac-proxy/0.log" Dec 09 10:20:27 crc kubenswrapper[4786]: I1209 10:20:27.747538 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-64d7c556cd-qqwdd_061bb0fd-451d-4d15-b979-a6ea9b833fb1/manager/0.log" Dec 09 10:20:27 crc kubenswrapper[4786]: I1209 10:20:27.848211 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-58879495c-glr8m_62fae6d8-3c6a-403c-9cc6-463e41a0bbe7/kube-rbac-proxy/0.log" Dec 09 10:20:27 crc kubenswrapper[4786]: I1209 10:20:27.895895 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-58879495c-glr8m_62fae6d8-3c6a-403c-9cc6-463e41a0bbe7/manager/0.log" Dec 09 10:20:28 crc kubenswrapper[4786]: I1209 10:20:28.083378 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79d658b66d-rrlrt_e551e183-3965-40da-88e6-bbbcd6e3cbe5/kube-rbac-proxy/0.log" Dec 09 10:20:28 crc kubenswrapper[4786]: I1209 10:20:28.189452 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79d658b66d-rrlrt_e551e183-3965-40da-88e6-bbbcd6e3cbe5/manager/0.log" Dec 09 10:20:28 crc kubenswrapper[4786]: I1209 10:20:28.251787 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-d5fb87cb8-2ql48_c117d831-6ff8-4e04-833a-242c22702cc3/kube-rbac-proxy/0.log" Dec 09 10:20:28 crc kubenswrapper[4786]: I1209 10:20:28.286993 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-d5fb87cb8-2ql48_c117d831-6ff8-4e04-833a-242c22702cc3/manager/0.log" Dec 09 10:20:28 crc kubenswrapper[4786]: I1209 10:20:28.399256 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-77868f484-qkq7f_ac272f60-c2a5-41a3-a48b-e499e7717667/kube-rbac-proxy/0.log" Dec 09 10:20:28 crc kubenswrapper[4786]: I1209 10:20:28.457223 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-77868f484-qkq7f_ac272f60-c2a5-41a3-a48b-e499e7717667/manager/0.log" Dec 09 10:20:28 crc kubenswrapper[4786]: I1209 10:20:28.590321 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-9969bcdf-xb28j_672e5a98-1fd6-4667-9a55-6a84ea13d77c/kube-rbac-proxy/0.log" Dec 09 10:20:29 crc kubenswrapper[4786]: 
I1209 10:20:29.087809 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-65cff6ddb4-mkfzv_9415679e-cf70-4f02-aaf3-20aa363e9f86/kube-rbac-proxy/0.log" Dec 09 10:20:29 crc kubenswrapper[4786]: I1209 10:20:29.311606 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-65cff6ddb4-mkfzv_9415679e-cf70-4f02-aaf3-20aa363e9f86/operator/0.log" Dec 09 10:20:29 crc kubenswrapper[4786]: I1209 10:20:29.352281 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-p2plv_66100669-25e6-457a-a856-d7f6ee39b124/registry-server/0.log" Dec 09 10:20:29 crc kubenswrapper[4786]: I1209 10:20:29.683185 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-5b67cfc8fb-pwfxf_2bc2193d-47f3-470a-a773-db2124fc8351/kube-rbac-proxy/0.log" Dec 09 10:20:29 crc kubenswrapper[4786]: I1209 10:20:29.704089 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-5b67cfc8fb-pwfxf_2bc2193d-47f3-470a-a773-db2124fc8351/manager/0.log" Dec 09 10:20:29 crc kubenswrapper[4786]: I1209 10:20:29.921385 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-867d87977b-4mg9x_cbea13a0-662c-4a51-9cfa-a0904713fc0f/kube-rbac-proxy/0.log" Dec 09 10:20:29 crc kubenswrapper[4786]: I1209 10:20:29.971858 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-867d87977b-4mg9x_cbea13a0-662c-4a51-9cfa-a0904713fc0f/manager/0.log" Dec 09 10:20:29 crc kubenswrapper[4786]: I1209 10:20:29.992392 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-5f97d8c699-744dg_2d179ee0-ed61-44f8-80e8-622ee7ed3876/operator/0.log" Dec 09 
10:20:30 crc kubenswrapper[4786]: I1209 10:20:30.213325 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-8f6687c44-lfcn5_c0263a18-de54-4c70-9ef7-508d86abed06/kube-rbac-proxy/0.log" Dec 09 10:20:30 crc kubenswrapper[4786]: I1209 10:20:30.287910 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-8f6687c44-lfcn5_c0263a18-de54-4c70-9ef7-508d86abed06/manager/0.log" Dec 09 10:20:30 crc kubenswrapper[4786]: I1209 10:20:30.449149 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-9969bcdf-xb28j_672e5a98-1fd6-4667-9a55-6a84ea13d77c/manager/0.log" Dec 09 10:20:30 crc kubenswrapper[4786]: I1209 10:20:30.457650 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-695797c565-h57cz_2db3dcee-6f5b-487e-b425-ec7be9530815/kube-rbac-proxy/0.log" Dec 09 10:20:30 crc kubenswrapper[4786]: I1209 10:20:30.588987 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-bb86466d8-m6pzg_7229805c-3f98-437c-a3fe-b4031a2b7fa6/kube-rbac-proxy/0.log" Dec 09 10:20:30 crc kubenswrapper[4786]: I1209 10:20:30.678225 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-695797c565-h57cz_2db3dcee-6f5b-487e-b425-ec7be9530815/manager/0.log" Dec 09 10:20:30 crc kubenswrapper[4786]: I1209 10:20:30.681452 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-bb86466d8-m6pzg_7229805c-3f98-437c-a3fe-b4031a2b7fa6/manager/0.log" Dec 09 10:20:30 crc kubenswrapper[4786]: I1209 10:20:30.734805 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5d7f5df9d6-kwmgc_c12be72a-ac87-4e8f-a061-b68b3f5cb115/kube-rbac-proxy/0.log" Dec 09 10:20:30 crc kubenswrapper[4786]: I1209 10:20:30.864133 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5d7f5df9d6-kwmgc_c12be72a-ac87-4e8f-a061-b68b3f5cb115/manager/0.log" Dec 09 10:20:48 crc kubenswrapper[4786]: I1209 10:20:48.372704 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-wnjtx_667ac238-96a3-4f57-b308-d4d5693d40f2/control-plane-machine-set-operator/0.log" Dec 09 10:20:48 crc kubenswrapper[4786]: I1209 10:20:48.545399 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-hp2jj_8101c95a-1629-4b71-b12e-0fa374c9b09a/kube-rbac-proxy/0.log" Dec 09 10:20:48 crc kubenswrapper[4786]: I1209 10:20:48.585462 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-hp2jj_8101c95a-1629-4b71-b12e-0fa374c9b09a/machine-api-operator/0.log" Dec 09 10:21:01 crc kubenswrapper[4786]: I1209 10:21:01.835226 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-mhk9l_e590bb55-a521-4368-b048-ebc34e6dc46c/cert-manager-controller/0.log" Dec 09 10:21:02 crc kubenswrapper[4786]: I1209 10:21:02.015510 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-g22j7_4a583e5a-0f3b-496b-89d5-fe79f697b730/cert-manager-cainjector/0.log" Dec 09 10:21:02 crc kubenswrapper[4786]: I1209 10:21:02.085741 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-57g7f_5b2baefe-3aa8-48ec-b66a-173a0eb33c22/cert-manager-webhook/0.log" Dec 09 10:21:14 crc kubenswrapper[4786]: I1209 10:21:14.865036 4786 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-z52fp_21a68ddc-31de-4083-ac88-bdf6ffd0afa7/nmstate-console-plugin/0.log" Dec 09 10:21:15 crc kubenswrapper[4786]: I1209 10:21:15.156986 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-cgcz2_d6bacb3d-0915-4228-979a-ea9b6d283ff7/nmstate-handler/0.log" Dec 09 10:21:15 crc kubenswrapper[4786]: I1209 10:21:15.163876 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-679bk_3d5f00bd-8538-4255-8012-736caf10840a/kube-rbac-proxy/0.log" Dec 09 10:21:15 crc kubenswrapper[4786]: I1209 10:21:15.216099 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-679bk_3d5f00bd-8538-4255-8012-736caf10840a/nmstate-metrics/0.log" Dec 09 10:21:15 crc kubenswrapper[4786]: I1209 10:21:15.439898 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-fvtvt_79114f82-3f7e-40ea-b197-051c986d3070/nmstate-operator/0.log" Dec 09 10:21:15 crc kubenswrapper[4786]: I1209 10:21:15.471562 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-gszq4_cd6cbdfd-eed1-4b1e-9ec3-f0c83ea2811e/nmstate-webhook/0.log" Dec 09 10:21:30 crc kubenswrapper[4786]: I1209 10:21:30.684353 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-dtbr4_2ba38124-9926-44fd-b5c5-2adb47fd814a/kube-rbac-proxy/0.log" Dec 09 10:21:30 crc kubenswrapper[4786]: I1209 10:21:30.826246 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-dtbr4_2ba38124-9926-44fd-b5c5-2adb47fd814a/controller/0.log" Dec 09 10:21:30 crc kubenswrapper[4786]: I1209 10:21:30.947525 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-hq5kf_6e123ec9-00ea-466d-b5f6-79cad587a2cc/cp-frr-files/0.log" Dec 09 10:21:31 crc kubenswrapper[4786]: I1209 10:21:31.166246 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hq5kf_6e123ec9-00ea-466d-b5f6-79cad587a2cc/cp-reloader/0.log" Dec 09 10:21:31 crc kubenswrapper[4786]: I1209 10:21:31.170132 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hq5kf_6e123ec9-00ea-466d-b5f6-79cad587a2cc/cp-frr-files/0.log" Dec 09 10:21:31 crc kubenswrapper[4786]: I1209 10:21:31.220768 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hq5kf_6e123ec9-00ea-466d-b5f6-79cad587a2cc/cp-metrics/0.log" Dec 09 10:21:31 crc kubenswrapper[4786]: I1209 10:21:31.262725 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hq5kf_6e123ec9-00ea-466d-b5f6-79cad587a2cc/cp-reloader/0.log" Dec 09 10:21:31 crc kubenswrapper[4786]: I1209 10:21:31.435796 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hq5kf_6e123ec9-00ea-466d-b5f6-79cad587a2cc/cp-frr-files/0.log" Dec 09 10:21:31 crc kubenswrapper[4786]: I1209 10:21:31.461901 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hq5kf_6e123ec9-00ea-466d-b5f6-79cad587a2cc/cp-reloader/0.log" Dec 09 10:21:31 crc kubenswrapper[4786]: I1209 10:21:31.576610 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hq5kf_6e123ec9-00ea-466d-b5f6-79cad587a2cc/cp-metrics/0.log" Dec 09 10:21:31 crc kubenswrapper[4786]: I1209 10:21:31.579058 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hq5kf_6e123ec9-00ea-466d-b5f6-79cad587a2cc/cp-metrics/0.log" Dec 09 10:21:31 crc kubenswrapper[4786]: I1209 10:21:31.689554 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-hq5kf_6e123ec9-00ea-466d-b5f6-79cad587a2cc/cp-frr-files/0.log" Dec 09 10:21:31 crc kubenswrapper[4786]: I1209 10:21:31.749046 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hq5kf_6e123ec9-00ea-466d-b5f6-79cad587a2cc/cp-reloader/0.log" Dec 09 10:21:31 crc kubenswrapper[4786]: I1209 10:21:31.845009 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hq5kf_6e123ec9-00ea-466d-b5f6-79cad587a2cc/controller/0.log" Dec 09 10:21:31 crc kubenswrapper[4786]: I1209 10:21:31.879502 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hq5kf_6e123ec9-00ea-466d-b5f6-79cad587a2cc/cp-metrics/0.log" Dec 09 10:21:31 crc kubenswrapper[4786]: I1209 10:21:31.985089 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hq5kf_6e123ec9-00ea-466d-b5f6-79cad587a2cc/frr-metrics/0.log" Dec 09 10:21:32 crc kubenswrapper[4786]: I1209 10:21:32.103831 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hq5kf_6e123ec9-00ea-466d-b5f6-79cad587a2cc/kube-rbac-proxy/0.log" Dec 09 10:21:32 crc kubenswrapper[4786]: I1209 10:21:32.139193 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hq5kf_6e123ec9-00ea-466d-b5f6-79cad587a2cc/kube-rbac-proxy-frr/0.log" Dec 09 10:21:32 crc kubenswrapper[4786]: I1209 10:21:32.275906 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hq5kf_6e123ec9-00ea-466d-b5f6-79cad587a2cc/reloader/0.log" Dec 09 10:21:32 crc kubenswrapper[4786]: I1209 10:21:32.400455 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-kcwjn_e7177936-18a2-4469-bf7b-cd9db745d93f/frr-k8s-webhook-server/0.log" Dec 09 10:21:32 crc kubenswrapper[4786]: I1209 10:21:32.773244 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7bd64dc485-knxdp_2415f03a-5796-4063-aa38-791dc0a76fec/manager/0.log" Dec 09 10:21:33 crc kubenswrapper[4786]: I1209 10:21:33.090349 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t5d4k"] Dec 09 10:21:33 crc kubenswrapper[4786]: E1209 10:21:33.090996 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deb14f3e-f3e8-429f-adea-800d6d8d93f5" containerName="extract-content" Dec 09 10:21:33 crc kubenswrapper[4786]: I1209 10:21:33.091020 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="deb14f3e-f3e8-429f-adea-800d6d8d93f5" containerName="extract-content" Dec 09 10:21:33 crc kubenswrapper[4786]: E1209 10:21:33.091051 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deb14f3e-f3e8-429f-adea-800d6d8d93f5" containerName="extract-utilities" Dec 09 10:21:33 crc kubenswrapper[4786]: I1209 10:21:33.091058 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="deb14f3e-f3e8-429f-adea-800d6d8d93f5" containerName="extract-utilities" Dec 09 10:21:33 crc kubenswrapper[4786]: E1209 10:21:33.091084 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deb14f3e-f3e8-429f-adea-800d6d8d93f5" containerName="registry-server" Dec 09 10:21:33 crc kubenswrapper[4786]: I1209 10:21:33.091090 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="deb14f3e-f3e8-429f-adea-800d6d8d93f5" containerName="registry-server" Dec 09 10:21:33 crc kubenswrapper[4786]: I1209 10:21:33.091298 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="deb14f3e-f3e8-429f-adea-800d6d8d93f5" containerName="registry-server" Dec 09 10:21:33 crc kubenswrapper[4786]: I1209 10:21:33.093332 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t5d4k" Dec 09 10:21:33 crc kubenswrapper[4786]: I1209 10:21:33.111037 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t5d4k"] Dec 09 10:21:33 crc kubenswrapper[4786]: I1209 10:21:33.125792 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63aec87d-0cb4-4adf-925b-56b2b7b08d93-utilities\") pod \"redhat-operators-t5d4k\" (UID: \"63aec87d-0cb4-4adf-925b-56b2b7b08d93\") " pod="openshift-marketplace/redhat-operators-t5d4k" Dec 09 10:21:33 crc kubenswrapper[4786]: I1209 10:21:33.126087 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63aec87d-0cb4-4adf-925b-56b2b7b08d93-catalog-content\") pod \"redhat-operators-t5d4k\" (UID: \"63aec87d-0cb4-4adf-925b-56b2b7b08d93\") " pod="openshift-marketplace/redhat-operators-t5d4k" Dec 09 10:21:33 crc kubenswrapper[4786]: I1209 10:21:33.126134 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqzgp\" (UniqueName: \"kubernetes.io/projected/63aec87d-0cb4-4adf-925b-56b2b7b08d93-kube-api-access-hqzgp\") pod \"redhat-operators-t5d4k\" (UID: \"63aec87d-0cb4-4adf-925b-56b2b7b08d93\") " pod="openshift-marketplace/redhat-operators-t5d4k" Dec 09 10:21:33 crc kubenswrapper[4786]: I1209 10:21:33.135385 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-fpvm7_8a0660c9-5ef5-4ed7-a304-3690e32fb830/kube-rbac-proxy/0.log" Dec 09 10:21:33 crc kubenswrapper[4786]: I1209 10:21:33.166971 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5786b6d7bd-6ntsj_b90cfb1a-dd9c-4877-81a6-c5fd3eb60c21/webhook-server/0.log" Dec 09 10:21:33 crc kubenswrapper[4786]: I1209 
10:21:33.227963 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63aec87d-0cb4-4adf-925b-56b2b7b08d93-catalog-content\") pod \"redhat-operators-t5d4k\" (UID: \"63aec87d-0cb4-4adf-925b-56b2b7b08d93\") " pod="openshift-marketplace/redhat-operators-t5d4k" Dec 09 10:21:33 crc kubenswrapper[4786]: I1209 10:21:33.228351 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqzgp\" (UniqueName: \"kubernetes.io/projected/63aec87d-0cb4-4adf-925b-56b2b7b08d93-kube-api-access-hqzgp\") pod \"redhat-operators-t5d4k\" (UID: \"63aec87d-0cb4-4adf-925b-56b2b7b08d93\") " pod="openshift-marketplace/redhat-operators-t5d4k" Dec 09 10:21:33 crc kubenswrapper[4786]: I1209 10:21:33.228404 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63aec87d-0cb4-4adf-925b-56b2b7b08d93-utilities\") pod \"redhat-operators-t5d4k\" (UID: \"63aec87d-0cb4-4adf-925b-56b2b7b08d93\") " pod="openshift-marketplace/redhat-operators-t5d4k" Dec 09 10:21:33 crc kubenswrapper[4786]: I1209 10:21:33.228799 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63aec87d-0cb4-4adf-925b-56b2b7b08d93-catalog-content\") pod \"redhat-operators-t5d4k\" (UID: \"63aec87d-0cb4-4adf-925b-56b2b7b08d93\") " pod="openshift-marketplace/redhat-operators-t5d4k" Dec 09 10:21:33 crc kubenswrapper[4786]: I1209 10:21:33.229607 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63aec87d-0cb4-4adf-925b-56b2b7b08d93-utilities\") pod \"redhat-operators-t5d4k\" (UID: \"63aec87d-0cb4-4adf-925b-56b2b7b08d93\") " pod="openshift-marketplace/redhat-operators-t5d4k" Dec 09 10:21:33 crc kubenswrapper[4786]: I1209 10:21:33.254153 4786 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-hqzgp\" (UniqueName: \"kubernetes.io/projected/63aec87d-0cb4-4adf-925b-56b2b7b08d93-kube-api-access-hqzgp\") pod \"redhat-operators-t5d4k\" (UID: \"63aec87d-0cb4-4adf-925b-56b2b7b08d93\") " pod="openshift-marketplace/redhat-operators-t5d4k" Dec 09 10:21:33 crc kubenswrapper[4786]: I1209 10:21:33.431481 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t5d4k" Dec 09 10:21:33 crc kubenswrapper[4786]: I1209 10:21:33.785054 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hq5kf_6e123ec9-00ea-466d-b5f6-79cad587a2cc/frr/0.log" Dec 09 10:21:33 crc kubenswrapper[4786]: I1209 10:21:33.877778 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-fpvm7_8a0660c9-5ef5-4ed7-a304-3690e32fb830/speaker/0.log" Dec 09 10:21:33 crc kubenswrapper[4786]: I1209 10:21:33.940287 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t5d4k"] Dec 09 10:21:34 crc kubenswrapper[4786]: I1209 10:21:34.479232 4786 generic.go:334] "Generic (PLEG): container finished" podID="63aec87d-0cb4-4adf-925b-56b2b7b08d93" containerID="8755fa66685ed6ec5256346eb66600cc6450d19a727aca86855e7641d46ea06a" exitCode=0 Dec 09 10:21:34 crc kubenswrapper[4786]: I1209 10:21:34.479333 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5d4k" event={"ID":"63aec87d-0cb4-4adf-925b-56b2b7b08d93","Type":"ContainerDied","Data":"8755fa66685ed6ec5256346eb66600cc6450d19a727aca86855e7641d46ea06a"} Dec 09 10:21:34 crc kubenswrapper[4786]: I1209 10:21:34.479571 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5d4k" event={"ID":"63aec87d-0cb4-4adf-925b-56b2b7b08d93","Type":"ContainerStarted","Data":"9230222cc2c818f82ae0d18ea67c669b752aec7c7316c9813365b39c08fa8b7c"} Dec 09 10:21:35 crc kubenswrapper[4786]: I1209 
10:21:35.495278 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5d4k" event={"ID":"63aec87d-0cb4-4adf-925b-56b2b7b08d93","Type":"ContainerStarted","Data":"50f3613e6fc8b25369bedb8abfb035ab517a55fb8b12642b8fe5539917cea568"} Dec 09 10:21:37 crc kubenswrapper[4786]: I1209 10:21:37.517210 4786 generic.go:334] "Generic (PLEG): container finished" podID="63aec87d-0cb4-4adf-925b-56b2b7b08d93" containerID="50f3613e6fc8b25369bedb8abfb035ab517a55fb8b12642b8fe5539917cea568" exitCode=0 Dec 09 10:21:37 crc kubenswrapper[4786]: I1209 10:21:37.517328 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5d4k" event={"ID":"63aec87d-0cb4-4adf-925b-56b2b7b08d93","Type":"ContainerDied","Data":"50f3613e6fc8b25369bedb8abfb035ab517a55fb8b12642b8fe5539917cea568"} Dec 09 10:21:39 crc kubenswrapper[4786]: I1209 10:21:39.540679 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5d4k" event={"ID":"63aec87d-0cb4-4adf-925b-56b2b7b08d93","Type":"ContainerStarted","Data":"be4ff53de5dde67f2b22994fbb95198f2b6dff475fa167cdf0c1e1d57391c8ab"} Dec 09 10:21:41 crc kubenswrapper[4786]: I1209 10:21:41.600947 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t5d4k" podStartSLOduration=4.336967875 podStartE2EDuration="8.600925755s" podCreationTimestamp="2025-12-09 10:21:33 +0000 UTC" firstStartedPulling="2025-12-09 10:21:34.481318523 +0000 UTC m=+5860.364939749" lastFinishedPulling="2025-12-09 10:21:38.745276403 +0000 UTC m=+5864.628897629" observedRunningTime="2025-12-09 10:21:41.585835846 +0000 UTC m=+5867.469457072" watchObservedRunningTime="2025-12-09 10:21:41.600925755 +0000 UTC m=+5867.484546981" Dec 09 10:21:43 crc kubenswrapper[4786]: I1209 10:21:43.432340 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t5d4k" Dec 09 10:21:43 
crc kubenswrapper[4786]: I1209 10:21:43.432700 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t5d4k" Dec 09 10:21:44 crc kubenswrapper[4786]: I1209 10:21:44.492630 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t5d4k" podUID="63aec87d-0cb4-4adf-925b-56b2b7b08d93" containerName="registry-server" probeResult="failure" output=< Dec 09 10:21:44 crc kubenswrapper[4786]: timeout: failed to connect service ":50051" within 1s Dec 09 10:21:44 crc kubenswrapper[4786]: > Dec 09 10:21:48 crc kubenswrapper[4786]: I1209 10:21:48.718627 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh7gq6_551ed98a-59ee-48a6-aec6-e02f7889d395/util/0.log" Dec 09 10:21:48 crc kubenswrapper[4786]: I1209 10:21:48.924719 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh7gq6_551ed98a-59ee-48a6-aec6-e02f7889d395/pull/0.log" Dec 09 10:21:48 crc kubenswrapper[4786]: I1209 10:21:48.961104 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh7gq6_551ed98a-59ee-48a6-aec6-e02f7889d395/util/0.log" Dec 09 10:21:48 crc kubenswrapper[4786]: I1209 10:21:48.967927 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh7gq6_551ed98a-59ee-48a6-aec6-e02f7889d395/pull/0.log" Dec 09 10:21:49 crc kubenswrapper[4786]: I1209 10:21:49.174651 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh7gq6_551ed98a-59ee-48a6-aec6-e02f7889d395/extract/0.log" Dec 09 10:21:49 crc kubenswrapper[4786]: I1209 10:21:49.186530 4786 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh7gq6_551ed98a-59ee-48a6-aec6-e02f7889d395/pull/0.log" Dec 09 10:21:49 crc kubenswrapper[4786]: I1209 10:21:49.230767 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh7gq6_551ed98a-59ee-48a6-aec6-e02f7889d395/util/0.log" Dec 09 10:21:49 crc kubenswrapper[4786]: I1209 10:21:49.360534 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv_6ecba051-bc5a-42a9-b4de-bf033c4f5491/util/0.log" Dec 09 10:21:49 crc kubenswrapper[4786]: I1209 10:21:49.555288 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv_6ecba051-bc5a-42a9-b4de-bf033c4f5491/util/0.log" Dec 09 10:21:49 crc kubenswrapper[4786]: I1209 10:21:49.592952 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv_6ecba051-bc5a-42a9-b4de-bf033c4f5491/pull/0.log" Dec 09 10:21:49 crc kubenswrapper[4786]: I1209 10:21:49.610767 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv_6ecba051-bc5a-42a9-b4de-bf033c4f5491/pull/0.log" Dec 09 10:21:49 crc kubenswrapper[4786]: I1209 10:21:49.766126 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv_6ecba051-bc5a-42a9-b4de-bf033c4f5491/util/0.log" Dec 09 10:21:49 crc kubenswrapper[4786]: I1209 10:21:49.805977 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv_6ecba051-bc5a-42a9-b4de-bf033c4f5491/pull/0.log" 
Dec 09 10:21:49 crc kubenswrapper[4786]: I1209 10:21:49.812812 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv_6ecba051-bc5a-42a9-b4de-bf033c4f5491/extract/0.log" Dec 09 10:21:50 crc kubenswrapper[4786]: I1209 10:21:50.021199 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83x54px_de7189e4-4dca-44a2-95d6-520828fc914f/util/0.log" Dec 09 10:21:50 crc kubenswrapper[4786]: I1209 10:21:50.236814 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83x54px_de7189e4-4dca-44a2-95d6-520828fc914f/util/0.log" Dec 09 10:21:50 crc kubenswrapper[4786]: I1209 10:21:50.249115 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83x54px_de7189e4-4dca-44a2-95d6-520828fc914f/pull/0.log" Dec 09 10:21:50 crc kubenswrapper[4786]: I1209 10:21:50.259929 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83x54px_de7189e4-4dca-44a2-95d6-520828fc914f/pull/0.log" Dec 09 10:21:50 crc kubenswrapper[4786]: I1209 10:21:50.461392 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83x54px_de7189e4-4dca-44a2-95d6-520828fc914f/pull/0.log" Dec 09 10:21:50 crc kubenswrapper[4786]: I1209 10:21:50.466620 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83x54px_de7189e4-4dca-44a2-95d6-520828fc914f/util/0.log" Dec 09 10:21:50 crc kubenswrapper[4786]: I1209 10:21:50.505077 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83x54px_de7189e4-4dca-44a2-95d6-520828fc914f/extract/0.log" Dec 09 10:21:50 crc kubenswrapper[4786]: I1209 10:21:50.649382 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-49cfg_eebcd979-a1cb-4994-ba4a-cadfaeb57401/extract-utilities/0.log" Dec 09 10:21:50 crc kubenswrapper[4786]: I1209 10:21:50.843495 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-49cfg_eebcd979-a1cb-4994-ba4a-cadfaeb57401/extract-utilities/0.log" Dec 09 10:21:50 crc kubenswrapper[4786]: I1209 10:21:50.874347 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-49cfg_eebcd979-a1cb-4994-ba4a-cadfaeb57401/extract-content/0.log" Dec 09 10:21:50 crc kubenswrapper[4786]: I1209 10:21:50.886263 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-49cfg_eebcd979-a1cb-4994-ba4a-cadfaeb57401/extract-content/0.log" Dec 09 10:21:51 crc kubenswrapper[4786]: I1209 10:21:51.088252 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-49cfg_eebcd979-a1cb-4994-ba4a-cadfaeb57401/extract-content/0.log" Dec 09 10:21:51 crc kubenswrapper[4786]: I1209 10:21:51.097770 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-49cfg_eebcd979-a1cb-4994-ba4a-cadfaeb57401/extract-utilities/0.log" Dec 09 10:21:51 crc kubenswrapper[4786]: I1209 10:21:51.330805 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5gct7_b2fe83fb-45d7-471c-a4d1-f723dd6f2bb9/extract-utilities/0.log" Dec 09 10:21:51 crc kubenswrapper[4786]: I1209 10:21:51.618700 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-5gct7_b2fe83fb-45d7-471c-a4d1-f723dd6f2bb9/extract-content/0.log" Dec 09 10:21:51 crc kubenswrapper[4786]: I1209 10:21:51.666014 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5gct7_b2fe83fb-45d7-471c-a4d1-f723dd6f2bb9/extract-content/0.log" Dec 09 10:21:51 crc kubenswrapper[4786]: I1209 10:21:51.694019 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5gct7_b2fe83fb-45d7-471c-a4d1-f723dd6f2bb9/extract-utilities/0.log" Dec 09 10:21:51 crc kubenswrapper[4786]: I1209 10:21:51.841473 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5gct7_b2fe83fb-45d7-471c-a4d1-f723dd6f2bb9/extract-utilities/0.log" Dec 09 10:21:51 crc kubenswrapper[4786]: I1209 10:21:51.930006 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5gct7_b2fe83fb-45d7-471c-a4d1-f723dd6f2bb9/extract-content/0.log" Dec 09 10:21:51 crc kubenswrapper[4786]: I1209 10:21:51.950372 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-49cfg_eebcd979-a1cb-4994-ba4a-cadfaeb57401/registry-server/0.log" Dec 09 10:21:52 crc kubenswrapper[4786]: I1209 10:21:52.229216 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-q75q9_a62ae8ec-1904-4074-9c5d-76d6bde47df8/marketplace-operator/0.log" Dec 09 10:21:52 crc kubenswrapper[4786]: I1209 10:21:52.398628 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5lt4q_d0a09b57-1b41-45ff-9d41-aa0c5d1b4beb/extract-utilities/0.log" Dec 09 10:21:52 crc kubenswrapper[4786]: I1209 10:21:52.459178 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-5gct7_b2fe83fb-45d7-471c-a4d1-f723dd6f2bb9/registry-server/0.log" Dec 09 10:21:52 crc kubenswrapper[4786]: I1209 10:21:52.682725 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5lt4q_d0a09b57-1b41-45ff-9d41-aa0c5d1b4beb/extract-utilities/0.log" Dec 09 10:21:52 crc kubenswrapper[4786]: I1209 10:21:52.706003 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5lt4q_d0a09b57-1b41-45ff-9d41-aa0c5d1b4beb/extract-content/0.log" Dec 09 10:21:52 crc kubenswrapper[4786]: I1209 10:21:52.737208 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5lt4q_d0a09b57-1b41-45ff-9d41-aa0c5d1b4beb/extract-content/0.log" Dec 09 10:21:52 crc kubenswrapper[4786]: I1209 10:21:52.931629 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5lt4q_d0a09b57-1b41-45ff-9d41-aa0c5d1b4beb/extract-content/0.log" Dec 09 10:21:52 crc kubenswrapper[4786]: I1209 10:21:52.978049 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5lt4q_d0a09b57-1b41-45ff-9d41-aa0c5d1b4beb/extract-utilities/0.log" Dec 09 10:21:53 crc kubenswrapper[4786]: I1209 10:21:53.147890 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-s4hdj_011c88f1-1d58-4ad3-8835-29020b9f4e8d/extract-utilities/0.log" Dec 09 10:21:53 crc kubenswrapper[4786]: I1209 10:21:53.245445 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5lt4q_d0a09b57-1b41-45ff-9d41-aa0c5d1b4beb/registry-server/0.log" Dec 09 10:21:53 crc kubenswrapper[4786]: I1209 10:21:53.356107 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-s4hdj_011c88f1-1d58-4ad3-8835-29020b9f4e8d/extract-content/0.log" Dec 09 10:21:53 crc kubenswrapper[4786]: I1209 10:21:53.360709 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-s4hdj_011c88f1-1d58-4ad3-8835-29020b9f4e8d/extract-utilities/0.log" Dec 09 10:21:53 crc kubenswrapper[4786]: I1209 10:21:53.389575 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-s4hdj_011c88f1-1d58-4ad3-8835-29020b9f4e8d/extract-content/0.log" Dec 09 10:21:53 crc kubenswrapper[4786]: I1209 10:21:53.488173 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t5d4k" Dec 09 10:21:53 crc kubenswrapper[4786]: I1209 10:21:53.553767 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t5d4k" Dec 09 10:21:53 crc kubenswrapper[4786]: I1209 10:21:53.562782 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-s4hdj_011c88f1-1d58-4ad3-8835-29020b9f4e8d/extract-utilities/0.log" Dec 09 10:21:53 crc kubenswrapper[4786]: I1209 10:21:53.625759 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-s4hdj_011c88f1-1d58-4ad3-8835-29020b9f4e8d/extract-content/0.log" Dec 09 10:21:53 crc kubenswrapper[4786]: I1209 10:21:53.648688 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-t5d4k_63aec87d-0cb4-4adf-925b-56b2b7b08d93/extract-utilities/0.log" Dec 09 10:21:53 crc kubenswrapper[4786]: I1209 10:21:53.729141 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t5d4k"] Dec 09 10:21:53 crc kubenswrapper[4786]: I1209 10:21:53.873518 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-t5d4k_63aec87d-0cb4-4adf-925b-56b2b7b08d93/extract-utilities/0.log" Dec 09 10:21:53 crc kubenswrapper[4786]: I1209 10:21:53.875182 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-t5d4k_63aec87d-0cb4-4adf-925b-56b2b7b08d93/extract-content/0.log" Dec 09 10:21:53 crc kubenswrapper[4786]: I1209 10:21:53.959047 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-t5d4k_63aec87d-0cb4-4adf-925b-56b2b7b08d93/extract-content/0.log" Dec 09 10:21:54 crc kubenswrapper[4786]: I1209 10:21:54.212855 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-t5d4k_63aec87d-0cb4-4adf-925b-56b2b7b08d93/extract-content/0.log" Dec 09 10:21:54 crc kubenswrapper[4786]: I1209 10:21:54.231438 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-t5d4k_63aec87d-0cb4-4adf-925b-56b2b7b08d93/extract-utilities/0.log" Dec 09 10:21:54 crc kubenswrapper[4786]: I1209 10:21:54.245779 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-t5d4k_63aec87d-0cb4-4adf-925b-56b2b7b08d93/registry-server/0.log" Dec 09 10:21:54 crc kubenswrapper[4786]: I1209 10:21:54.380383 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-s4hdj_011c88f1-1d58-4ad3-8835-29020b9f4e8d/registry-server/0.log" Dec 09 10:21:54 crc kubenswrapper[4786]: I1209 10:21:54.714163 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t5d4k" podUID="63aec87d-0cb4-4adf-925b-56b2b7b08d93" containerName="registry-server" containerID="cri-o://be4ff53de5dde67f2b22994fbb95198f2b6dff475fa167cdf0c1e1d57391c8ab" gracePeriod=2 Dec 09 10:21:55 crc kubenswrapper[4786]: I1209 10:21:55.242680 4786 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t5d4k" Dec 09 10:21:55 crc kubenswrapper[4786]: I1209 10:21:55.263798 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63aec87d-0cb4-4adf-925b-56b2b7b08d93-utilities\") pod \"63aec87d-0cb4-4adf-925b-56b2b7b08d93\" (UID: \"63aec87d-0cb4-4adf-925b-56b2b7b08d93\") " Dec 09 10:21:55 crc kubenswrapper[4786]: I1209 10:21:55.263873 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63aec87d-0cb4-4adf-925b-56b2b7b08d93-catalog-content\") pod \"63aec87d-0cb4-4adf-925b-56b2b7b08d93\" (UID: \"63aec87d-0cb4-4adf-925b-56b2b7b08d93\") " Dec 09 10:21:55 crc kubenswrapper[4786]: I1209 10:21:55.264556 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63aec87d-0cb4-4adf-925b-56b2b7b08d93-utilities" (OuterVolumeSpecName: "utilities") pod "63aec87d-0cb4-4adf-925b-56b2b7b08d93" (UID: "63aec87d-0cb4-4adf-925b-56b2b7b08d93"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 10:21:55 crc kubenswrapper[4786]: I1209 10:21:55.365660 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hqzgp\" (UniqueName: \"kubernetes.io/projected/63aec87d-0cb4-4adf-925b-56b2b7b08d93-kube-api-access-hqzgp\") pod \"63aec87d-0cb4-4adf-925b-56b2b7b08d93\" (UID: \"63aec87d-0cb4-4adf-925b-56b2b7b08d93\") " Dec 09 10:21:55 crc kubenswrapper[4786]: I1209 10:21:55.366558 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63aec87d-0cb4-4adf-925b-56b2b7b08d93-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 10:21:55 crc kubenswrapper[4786]: I1209 10:21:55.375947 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63aec87d-0cb4-4adf-925b-56b2b7b08d93-kube-api-access-hqzgp" (OuterVolumeSpecName: "kube-api-access-hqzgp") pod "63aec87d-0cb4-4adf-925b-56b2b7b08d93" (UID: "63aec87d-0cb4-4adf-925b-56b2b7b08d93"). InnerVolumeSpecName "kube-api-access-hqzgp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 10:21:55 crc kubenswrapper[4786]: I1209 10:21:55.382525 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63aec87d-0cb4-4adf-925b-56b2b7b08d93-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "63aec87d-0cb4-4adf-925b-56b2b7b08d93" (UID: "63aec87d-0cb4-4adf-925b-56b2b7b08d93"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 10:21:55 crc kubenswrapper[4786]: I1209 10:21:55.468474 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63aec87d-0cb4-4adf-925b-56b2b7b08d93-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 10:21:55 crc kubenswrapper[4786]: I1209 10:21:55.468507 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hqzgp\" (UniqueName: \"kubernetes.io/projected/63aec87d-0cb4-4adf-925b-56b2b7b08d93-kube-api-access-hqzgp\") on node \"crc\" DevicePath \"\"" Dec 09 10:21:55 crc kubenswrapper[4786]: I1209 10:21:55.725211 4786 generic.go:334] "Generic (PLEG): container finished" podID="63aec87d-0cb4-4adf-925b-56b2b7b08d93" containerID="be4ff53de5dde67f2b22994fbb95198f2b6dff475fa167cdf0c1e1d57391c8ab" exitCode=0 Dec 09 10:21:55 crc kubenswrapper[4786]: I1209 10:21:55.725263 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5d4k" event={"ID":"63aec87d-0cb4-4adf-925b-56b2b7b08d93","Type":"ContainerDied","Data":"be4ff53de5dde67f2b22994fbb95198f2b6dff475fa167cdf0c1e1d57391c8ab"} Dec 09 10:21:55 crc kubenswrapper[4786]: I1209 10:21:55.725331 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t5d4k" Dec 09 10:21:55 crc kubenswrapper[4786]: I1209 10:21:55.725343 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t5d4k" event={"ID":"63aec87d-0cb4-4adf-925b-56b2b7b08d93","Type":"ContainerDied","Data":"9230222cc2c818f82ae0d18ea67c669b752aec7c7316c9813365b39c08fa8b7c"} Dec 09 10:21:55 crc kubenswrapper[4786]: I1209 10:21:55.725366 4786 scope.go:117] "RemoveContainer" containerID="be4ff53de5dde67f2b22994fbb95198f2b6dff475fa167cdf0c1e1d57391c8ab" Dec 09 10:21:55 crc kubenswrapper[4786]: I1209 10:21:55.758729 4786 scope.go:117] "RemoveContainer" containerID="50f3613e6fc8b25369bedb8abfb035ab517a55fb8b12642b8fe5539917cea568" Dec 09 10:21:55 crc kubenswrapper[4786]: I1209 10:21:55.770501 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t5d4k"] Dec 09 10:21:55 crc kubenswrapper[4786]: I1209 10:21:55.780145 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t5d4k"] Dec 09 10:21:55 crc kubenswrapper[4786]: I1209 10:21:55.806276 4786 scope.go:117] "RemoveContainer" containerID="8755fa66685ed6ec5256346eb66600cc6450d19a727aca86855e7641d46ea06a" Dec 09 10:21:55 crc kubenswrapper[4786]: I1209 10:21:55.862667 4786 scope.go:117] "RemoveContainer" containerID="be4ff53de5dde67f2b22994fbb95198f2b6dff475fa167cdf0c1e1d57391c8ab" Dec 09 10:21:55 crc kubenswrapper[4786]: E1209 10:21:55.863340 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be4ff53de5dde67f2b22994fbb95198f2b6dff475fa167cdf0c1e1d57391c8ab\": container with ID starting with be4ff53de5dde67f2b22994fbb95198f2b6dff475fa167cdf0c1e1d57391c8ab not found: ID does not exist" containerID="be4ff53de5dde67f2b22994fbb95198f2b6dff475fa167cdf0c1e1d57391c8ab" Dec 09 10:21:55 crc kubenswrapper[4786]: I1209 10:21:55.863405 4786 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be4ff53de5dde67f2b22994fbb95198f2b6dff475fa167cdf0c1e1d57391c8ab"} err="failed to get container status \"be4ff53de5dde67f2b22994fbb95198f2b6dff475fa167cdf0c1e1d57391c8ab\": rpc error: code = NotFound desc = could not find container \"be4ff53de5dde67f2b22994fbb95198f2b6dff475fa167cdf0c1e1d57391c8ab\": container with ID starting with be4ff53de5dde67f2b22994fbb95198f2b6dff475fa167cdf0c1e1d57391c8ab not found: ID does not exist" Dec 09 10:21:55 crc kubenswrapper[4786]: I1209 10:21:55.863461 4786 scope.go:117] "RemoveContainer" containerID="50f3613e6fc8b25369bedb8abfb035ab517a55fb8b12642b8fe5539917cea568" Dec 09 10:21:55 crc kubenswrapper[4786]: E1209 10:21:55.863903 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50f3613e6fc8b25369bedb8abfb035ab517a55fb8b12642b8fe5539917cea568\": container with ID starting with 50f3613e6fc8b25369bedb8abfb035ab517a55fb8b12642b8fe5539917cea568 not found: ID does not exist" containerID="50f3613e6fc8b25369bedb8abfb035ab517a55fb8b12642b8fe5539917cea568" Dec 09 10:21:55 crc kubenswrapper[4786]: I1209 10:21:55.863934 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50f3613e6fc8b25369bedb8abfb035ab517a55fb8b12642b8fe5539917cea568"} err="failed to get container status \"50f3613e6fc8b25369bedb8abfb035ab517a55fb8b12642b8fe5539917cea568\": rpc error: code = NotFound desc = could not find container \"50f3613e6fc8b25369bedb8abfb035ab517a55fb8b12642b8fe5539917cea568\": container with ID starting with 50f3613e6fc8b25369bedb8abfb035ab517a55fb8b12642b8fe5539917cea568 not found: ID does not exist" Dec 09 10:21:55 crc kubenswrapper[4786]: I1209 10:21:55.863952 4786 scope.go:117] "RemoveContainer" containerID="8755fa66685ed6ec5256346eb66600cc6450d19a727aca86855e7641d46ea06a" Dec 09 10:21:55 crc kubenswrapper[4786]: E1209 
10:21:55.864397 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8755fa66685ed6ec5256346eb66600cc6450d19a727aca86855e7641d46ea06a\": container with ID starting with 8755fa66685ed6ec5256346eb66600cc6450d19a727aca86855e7641d46ea06a not found: ID does not exist" containerID="8755fa66685ed6ec5256346eb66600cc6450d19a727aca86855e7641d46ea06a" Dec 09 10:21:55 crc kubenswrapper[4786]: I1209 10:21:55.864509 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8755fa66685ed6ec5256346eb66600cc6450d19a727aca86855e7641d46ea06a"} err="failed to get container status \"8755fa66685ed6ec5256346eb66600cc6450d19a727aca86855e7641d46ea06a\": rpc error: code = NotFound desc = could not find container \"8755fa66685ed6ec5256346eb66600cc6450d19a727aca86855e7641d46ea06a\": container with ID starting with 8755fa66685ed6ec5256346eb66600cc6450d19a727aca86855e7641d46ea06a not found: ID does not exist" Dec 09 10:21:57 crc kubenswrapper[4786]: I1209 10:21:57.200723 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63aec87d-0cb4-4adf-925b-56b2b7b08d93" path="/var/lib/kubelet/pods/63aec87d-0cb4-4adf-925b-56b2b7b08d93/volumes" Dec 09 10:22:06 crc kubenswrapper[4786]: I1209 10:22:06.478656 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-hwspd_5642938b-acf3-4128-83bb-ef2beeb1d85c/prometheus-operator/0.log" Dec 09 10:22:06 crc kubenswrapper[4786]: I1209 10:22:06.718443 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-64dd87d88d-glsks_68535e7a-c972-4054-8849-58dedcf84cd0/prometheus-operator-admission-webhook/0.log" Dec 09 10:22:06 crc kubenswrapper[4786]: I1209 10:22:06.805935 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-64dd87d88d-j4dpg_2c7d26aa-45ef-471d-bb48-671366e5928a/prometheus-operator-admission-webhook/0.log" Dec 09 10:22:06 crc kubenswrapper[4786]: I1209 10:22:06.983190 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-d8bb48f5d-kgqqn_1d1ef0df-f7b0-4499-b5c3-f0952d78f097/operator/0.log" Dec 09 10:22:07 crc kubenswrapper[4786]: I1209 10:22:07.024167 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5446b9c989-2dt2t_19919157-d502-47f5-9ea6-27f27a0b6742/perses-operator/0.log" Dec 09 10:22:31 crc kubenswrapper[4786]: E1209 10:22:31.526497 4786 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.129.56.245:41710->38.129.56.245:34075: write tcp 38.129.56.245:41710->38.129.56.245:34075: write: broken pipe Dec 09 10:22:54 crc kubenswrapper[4786]: I1209 10:22:54.988762 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 10:22:54 crc kubenswrapper[4786]: I1209 10:22:54.989328 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 10:23:24 crc kubenswrapper[4786]: I1209 10:23:24.988720 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" start-of-body= Dec 09 10:23:24 crc kubenswrapper[4786]: I1209 10:23:24.989322 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 10:23:54 crc kubenswrapper[4786]: I1209 10:23:54.988584 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 10:23:54 crc kubenswrapper[4786]: I1209 10:23:54.989137 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 10:23:54 crc kubenswrapper[4786]: I1209 10:23:54.989189 4786 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" Dec 09 10:23:54 crc kubenswrapper[4786]: I1209 10:23:54.989671 4786 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"20ecfe54f5ccfb14285d4fec242fde22ae60896664f5a23bdfa24a734964b062"} pod="openshift-machine-config-operator/machine-config-daemon-86k5n" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 10:23:54 crc kubenswrapper[4786]: I1209 10:23:54.989716 4786 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" containerID="cri-o://20ecfe54f5ccfb14285d4fec242fde22ae60896664f5a23bdfa24a734964b062" gracePeriod=600 Dec 09 10:23:55 crc kubenswrapper[4786]: I1209 10:23:55.311006 4786 generic.go:334] "Generic (PLEG): container finished" podID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerID="20ecfe54f5ccfb14285d4fec242fde22ae60896664f5a23bdfa24a734964b062" exitCode=0 Dec 09 10:23:55 crc kubenswrapper[4786]: I1209 10:23:55.311188 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" event={"ID":"60c502d4-7f9e-4d39-a197-fa70dc4a56d1","Type":"ContainerDied","Data":"20ecfe54f5ccfb14285d4fec242fde22ae60896664f5a23bdfa24a734964b062"} Dec 09 10:23:55 crc kubenswrapper[4786]: I1209 10:23:55.311594 4786 scope.go:117] "RemoveContainer" containerID="f0db13fa3b4c8b38759dcf670bd52efbefab22fe875583a9fcc3f80bfcbf0a33" Dec 09 10:23:55 crc kubenswrapper[4786]: E1209 10:23:55.616293 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:23:56 crc kubenswrapper[4786]: I1209 10:23:56.326918 4786 scope.go:117] "RemoveContainer" containerID="20ecfe54f5ccfb14285d4fec242fde22ae60896664f5a23bdfa24a734964b062" Dec 09 10:23:56 crc kubenswrapper[4786]: E1209 10:23:56.327653 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:24:09 crc kubenswrapper[4786]: I1209 10:24:09.188400 4786 scope.go:117] "RemoveContainer" containerID="20ecfe54f5ccfb14285d4fec242fde22ae60896664f5a23bdfa24a734964b062" Dec 09 10:24:09 crc kubenswrapper[4786]: E1209 10:24:09.189443 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:24:19 crc kubenswrapper[4786]: I1209 10:24:19.590715 4786 generic.go:334] "Generic (PLEG): container finished" podID="acd85c6c-5868-4f57-9e9b-f7e9ba510a33" containerID="b3745b711546d6882be9a28795d832ad9277b4697f96beed123d263bd43e6e2a" exitCode=0 Dec 09 10:24:19 crc kubenswrapper[4786]: I1209 10:24:19.590794 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-8j4df/must-gather-nb6v5" event={"ID":"acd85c6c-5868-4f57-9e9b-f7e9ba510a33","Type":"ContainerDied","Data":"b3745b711546d6882be9a28795d832ad9277b4697f96beed123d263bd43e6e2a"} Dec 09 10:24:19 crc kubenswrapper[4786]: I1209 10:24:19.592295 4786 scope.go:117] "RemoveContainer" containerID="b3745b711546d6882be9a28795d832ad9277b4697f96beed123d263bd43e6e2a" Dec 09 10:24:20 crc kubenswrapper[4786]: I1209 10:24:20.612133 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-8j4df_must-gather-nb6v5_acd85c6c-5868-4f57-9e9b-f7e9ba510a33/gather/0.log" Dec 09 10:24:24 crc kubenswrapper[4786]: I1209 10:24:24.188499 4786 scope.go:117] "RemoveContainer" 
containerID="20ecfe54f5ccfb14285d4fec242fde22ae60896664f5a23bdfa24a734964b062" Dec 09 10:24:24 crc kubenswrapper[4786]: E1209 10:24:24.189801 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:24:29 crc kubenswrapper[4786]: I1209 10:24:29.091239 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-8j4df/must-gather-nb6v5"] Dec 09 10:24:29 crc kubenswrapper[4786]: I1209 10:24:29.092317 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-8j4df/must-gather-nb6v5" podUID="acd85c6c-5868-4f57-9e9b-f7e9ba510a33" containerName="copy" containerID="cri-o://082aa10e09f2612a6803cb1327e7918868337a309e4c00cc0923a9dd72423aa2" gracePeriod=2 Dec 09 10:24:29 crc kubenswrapper[4786]: I1209 10:24:29.103408 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-8j4df/must-gather-nb6v5"] Dec 09 10:24:29 crc kubenswrapper[4786]: I1209 10:24:29.570871 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-8j4df_must-gather-nb6v5_acd85c6c-5868-4f57-9e9b-f7e9ba510a33/copy/0.log" Dec 09 10:24:29 crc kubenswrapper[4786]: I1209 10:24:29.571622 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8j4df/must-gather-nb6v5" Dec 09 10:24:29 crc kubenswrapper[4786]: I1209 10:24:29.601556 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnntk\" (UniqueName: \"kubernetes.io/projected/acd85c6c-5868-4f57-9e9b-f7e9ba510a33-kube-api-access-pnntk\") pod \"acd85c6c-5868-4f57-9e9b-f7e9ba510a33\" (UID: \"acd85c6c-5868-4f57-9e9b-f7e9ba510a33\") " Dec 09 10:24:29 crc kubenswrapper[4786]: I1209 10:24:29.601835 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/acd85c6c-5868-4f57-9e9b-f7e9ba510a33-must-gather-output\") pod \"acd85c6c-5868-4f57-9e9b-f7e9ba510a33\" (UID: \"acd85c6c-5868-4f57-9e9b-f7e9ba510a33\") " Dec 09 10:24:29 crc kubenswrapper[4786]: I1209 10:24:29.608609 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/acd85c6c-5868-4f57-9e9b-f7e9ba510a33-kube-api-access-pnntk" (OuterVolumeSpecName: "kube-api-access-pnntk") pod "acd85c6c-5868-4f57-9e9b-f7e9ba510a33" (UID: "acd85c6c-5868-4f57-9e9b-f7e9ba510a33"). InnerVolumeSpecName "kube-api-access-pnntk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 10:24:29 crc kubenswrapper[4786]: I1209 10:24:29.703988 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnntk\" (UniqueName: \"kubernetes.io/projected/acd85c6c-5868-4f57-9e9b-f7e9ba510a33-kube-api-access-pnntk\") on node \"crc\" DevicePath \"\"" Dec 09 10:24:29 crc kubenswrapper[4786]: I1209 10:24:29.721289 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-8j4df_must-gather-nb6v5_acd85c6c-5868-4f57-9e9b-f7e9ba510a33/copy/0.log" Dec 09 10:24:29 crc kubenswrapper[4786]: I1209 10:24:29.721819 4786 generic.go:334] "Generic (PLEG): container finished" podID="acd85c6c-5868-4f57-9e9b-f7e9ba510a33" containerID="082aa10e09f2612a6803cb1327e7918868337a309e4c00cc0923a9dd72423aa2" exitCode=143 Dec 09 10:24:29 crc kubenswrapper[4786]: I1209 10:24:29.721958 4786 scope.go:117] "RemoveContainer" containerID="082aa10e09f2612a6803cb1327e7918868337a309e4c00cc0923a9dd72423aa2" Dec 09 10:24:29 crc kubenswrapper[4786]: I1209 10:24:29.721935 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-8j4df/must-gather-nb6v5" Dec 09 10:24:29 crc kubenswrapper[4786]: I1209 10:24:29.746609 4786 scope.go:117] "RemoveContainer" containerID="b3745b711546d6882be9a28795d832ad9277b4697f96beed123d263bd43e6e2a" Dec 09 10:24:29 crc kubenswrapper[4786]: I1209 10:24:29.806679 4786 scope.go:117] "RemoveContainer" containerID="082aa10e09f2612a6803cb1327e7918868337a309e4c00cc0923a9dd72423aa2" Dec 09 10:24:29 crc kubenswrapper[4786]: E1209 10:24:29.808360 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"082aa10e09f2612a6803cb1327e7918868337a309e4c00cc0923a9dd72423aa2\": container with ID starting with 082aa10e09f2612a6803cb1327e7918868337a309e4c00cc0923a9dd72423aa2 not found: ID does not exist" containerID="082aa10e09f2612a6803cb1327e7918868337a309e4c00cc0923a9dd72423aa2" Dec 09 10:24:29 crc kubenswrapper[4786]: I1209 10:24:29.808462 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"082aa10e09f2612a6803cb1327e7918868337a309e4c00cc0923a9dd72423aa2"} err="failed to get container status \"082aa10e09f2612a6803cb1327e7918868337a309e4c00cc0923a9dd72423aa2\": rpc error: code = NotFound desc = could not find container \"082aa10e09f2612a6803cb1327e7918868337a309e4c00cc0923a9dd72423aa2\": container with ID starting with 082aa10e09f2612a6803cb1327e7918868337a309e4c00cc0923a9dd72423aa2 not found: ID does not exist" Dec 09 10:24:29 crc kubenswrapper[4786]: I1209 10:24:29.808500 4786 scope.go:117] "RemoveContainer" containerID="b3745b711546d6882be9a28795d832ad9277b4697f96beed123d263bd43e6e2a" Dec 09 10:24:29 crc kubenswrapper[4786]: E1209 10:24:29.808828 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3745b711546d6882be9a28795d832ad9277b4697f96beed123d263bd43e6e2a\": container with ID starting with 
b3745b711546d6882be9a28795d832ad9277b4697f96beed123d263bd43e6e2a not found: ID does not exist" containerID="b3745b711546d6882be9a28795d832ad9277b4697f96beed123d263bd43e6e2a" Dec 09 10:24:29 crc kubenswrapper[4786]: I1209 10:24:29.808851 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3745b711546d6882be9a28795d832ad9277b4697f96beed123d263bd43e6e2a"} err="failed to get container status \"b3745b711546d6882be9a28795d832ad9277b4697f96beed123d263bd43e6e2a\": rpc error: code = NotFound desc = could not find container \"b3745b711546d6882be9a28795d832ad9277b4697f96beed123d263bd43e6e2a\": container with ID starting with b3745b711546d6882be9a28795d832ad9277b4697f96beed123d263bd43e6e2a not found: ID does not exist" Dec 09 10:24:29 crc kubenswrapper[4786]: I1209 10:24:29.827821 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/acd85c6c-5868-4f57-9e9b-f7e9ba510a33-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "acd85c6c-5868-4f57-9e9b-f7e9ba510a33" (UID: "acd85c6c-5868-4f57-9e9b-f7e9ba510a33"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 10:24:29 crc kubenswrapper[4786]: I1209 10:24:29.914576 4786 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/acd85c6c-5868-4f57-9e9b-f7e9ba510a33-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 09 10:24:31 crc kubenswrapper[4786]: I1209 10:24:31.199431 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acd85c6c-5868-4f57-9e9b-f7e9ba510a33" path="/var/lib/kubelet/pods/acd85c6c-5868-4f57-9e9b-f7e9ba510a33/volumes" Dec 09 10:24:36 crc kubenswrapper[4786]: I1209 10:24:36.189019 4786 scope.go:117] "RemoveContainer" containerID="20ecfe54f5ccfb14285d4fec242fde22ae60896664f5a23bdfa24a734964b062" Dec 09 10:24:36 crc kubenswrapper[4786]: E1209 10:24:36.190069 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:24:51 crc kubenswrapper[4786]: I1209 10:24:51.188231 4786 scope.go:117] "RemoveContainer" containerID="20ecfe54f5ccfb14285d4fec242fde22ae60896664f5a23bdfa24a734964b062" Dec 09 10:24:51 crc kubenswrapper[4786]: E1209 10:24:51.188999 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:24:51 crc kubenswrapper[4786]: I1209 10:24:51.553569 4786 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bv7xk"] Dec 09 10:24:51 crc kubenswrapper[4786]: E1209 10:24:51.554620 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63aec87d-0cb4-4adf-925b-56b2b7b08d93" containerName="registry-server" Dec 09 10:24:51 crc kubenswrapper[4786]: I1209 10:24:51.554717 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="63aec87d-0cb4-4adf-925b-56b2b7b08d93" containerName="registry-server" Dec 09 10:24:51 crc kubenswrapper[4786]: E1209 10:24:51.554796 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acd85c6c-5868-4f57-9e9b-f7e9ba510a33" containerName="gather" Dec 09 10:24:51 crc kubenswrapper[4786]: I1209 10:24:51.554859 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="acd85c6c-5868-4f57-9e9b-f7e9ba510a33" containerName="gather" Dec 09 10:24:51 crc kubenswrapper[4786]: E1209 10:24:51.554976 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63aec87d-0cb4-4adf-925b-56b2b7b08d93" containerName="extract-content" Dec 09 10:24:51 crc kubenswrapper[4786]: I1209 10:24:51.555044 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="63aec87d-0cb4-4adf-925b-56b2b7b08d93" containerName="extract-content" Dec 09 10:24:51 crc kubenswrapper[4786]: E1209 10:24:51.555116 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acd85c6c-5868-4f57-9e9b-f7e9ba510a33" containerName="copy" Dec 09 10:24:51 crc kubenswrapper[4786]: I1209 10:24:51.555173 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="acd85c6c-5868-4f57-9e9b-f7e9ba510a33" containerName="copy" Dec 09 10:24:51 crc kubenswrapper[4786]: E1209 10:24:51.555244 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63aec87d-0cb4-4adf-925b-56b2b7b08d93" containerName="extract-utilities" Dec 09 10:24:51 crc kubenswrapper[4786]: I1209 10:24:51.555310 4786 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="63aec87d-0cb4-4adf-925b-56b2b7b08d93" containerName="extract-utilities" Dec 09 10:24:51 crc kubenswrapper[4786]: I1209 10:24:51.555663 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="acd85c6c-5868-4f57-9e9b-f7e9ba510a33" containerName="gather" Dec 09 10:24:51 crc kubenswrapper[4786]: I1209 10:24:51.555760 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="acd85c6c-5868-4f57-9e9b-f7e9ba510a33" containerName="copy" Dec 09 10:24:51 crc kubenswrapper[4786]: I1209 10:24:51.555834 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="63aec87d-0cb4-4adf-925b-56b2b7b08d93" containerName="registry-server" Dec 09 10:24:51 crc kubenswrapper[4786]: I1209 10:24:51.557611 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bv7xk" Dec 09 10:24:51 crc kubenswrapper[4786]: I1209 10:24:51.563683 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bv7xk"] Dec 09 10:24:51 crc kubenswrapper[4786]: I1209 10:24:51.712392 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9q9s\" (UniqueName: \"kubernetes.io/projected/ddb5fcdf-c8c7-4bff-a48a-d982c4a9f5ae-kube-api-access-j9q9s\") pod \"community-operators-bv7xk\" (UID: \"ddb5fcdf-c8c7-4bff-a48a-d982c4a9f5ae\") " pod="openshift-marketplace/community-operators-bv7xk" Dec 09 10:24:51 crc kubenswrapper[4786]: I1209 10:24:51.712457 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddb5fcdf-c8c7-4bff-a48a-d982c4a9f5ae-catalog-content\") pod \"community-operators-bv7xk\" (UID: \"ddb5fcdf-c8c7-4bff-a48a-d982c4a9f5ae\") " pod="openshift-marketplace/community-operators-bv7xk" Dec 09 10:24:51 crc kubenswrapper[4786]: I1209 10:24:51.712633 4786 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddb5fcdf-c8c7-4bff-a48a-d982c4a9f5ae-utilities\") pod \"community-operators-bv7xk\" (UID: \"ddb5fcdf-c8c7-4bff-a48a-d982c4a9f5ae\") " pod="openshift-marketplace/community-operators-bv7xk" Dec 09 10:24:51 crc kubenswrapper[4786]: I1209 10:24:51.814875 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9q9s\" (UniqueName: \"kubernetes.io/projected/ddb5fcdf-c8c7-4bff-a48a-d982c4a9f5ae-kube-api-access-j9q9s\") pod \"community-operators-bv7xk\" (UID: \"ddb5fcdf-c8c7-4bff-a48a-d982c4a9f5ae\") " pod="openshift-marketplace/community-operators-bv7xk" Dec 09 10:24:51 crc kubenswrapper[4786]: I1209 10:24:51.814914 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddb5fcdf-c8c7-4bff-a48a-d982c4a9f5ae-catalog-content\") pod \"community-operators-bv7xk\" (UID: \"ddb5fcdf-c8c7-4bff-a48a-d982c4a9f5ae\") " pod="openshift-marketplace/community-operators-bv7xk" Dec 09 10:24:51 crc kubenswrapper[4786]: I1209 10:24:51.814983 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddb5fcdf-c8c7-4bff-a48a-d982c4a9f5ae-utilities\") pod \"community-operators-bv7xk\" (UID: \"ddb5fcdf-c8c7-4bff-a48a-d982c4a9f5ae\") " pod="openshift-marketplace/community-operators-bv7xk" Dec 09 10:24:51 crc kubenswrapper[4786]: I1209 10:24:51.815480 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddb5fcdf-c8c7-4bff-a48a-d982c4a9f5ae-utilities\") pod \"community-operators-bv7xk\" (UID: \"ddb5fcdf-c8c7-4bff-a48a-d982c4a9f5ae\") " pod="openshift-marketplace/community-operators-bv7xk" Dec 09 10:24:51 crc kubenswrapper[4786]: I1209 10:24:51.815497 4786 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddb5fcdf-c8c7-4bff-a48a-d982c4a9f5ae-catalog-content\") pod \"community-operators-bv7xk\" (UID: \"ddb5fcdf-c8c7-4bff-a48a-d982c4a9f5ae\") " pod="openshift-marketplace/community-operators-bv7xk" Dec 09 10:24:51 crc kubenswrapper[4786]: I1209 10:24:51.842281 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9q9s\" (UniqueName: \"kubernetes.io/projected/ddb5fcdf-c8c7-4bff-a48a-d982c4a9f5ae-kube-api-access-j9q9s\") pod \"community-operators-bv7xk\" (UID: \"ddb5fcdf-c8c7-4bff-a48a-d982c4a9f5ae\") " pod="openshift-marketplace/community-operators-bv7xk" Dec 09 10:24:51 crc kubenswrapper[4786]: I1209 10:24:51.909822 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bv7xk" Dec 09 10:24:52 crc kubenswrapper[4786]: I1209 10:24:52.532897 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bv7xk"] Dec 09 10:24:52 crc kubenswrapper[4786]: I1209 10:24:52.949459 4786 generic.go:334] "Generic (PLEG): container finished" podID="ddb5fcdf-c8c7-4bff-a48a-d982c4a9f5ae" containerID="575d73ba2b095453a0ff9ad23f94928a1387d41c9874eaf31774f322b0300730" exitCode=0 Dec 09 10:24:52 crc kubenswrapper[4786]: I1209 10:24:52.949751 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bv7xk" event={"ID":"ddb5fcdf-c8c7-4bff-a48a-d982c4a9f5ae","Type":"ContainerDied","Data":"575d73ba2b095453a0ff9ad23f94928a1387d41c9874eaf31774f322b0300730"} Dec 09 10:24:52 crc kubenswrapper[4786]: I1209 10:24:52.949838 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bv7xk" event={"ID":"ddb5fcdf-c8c7-4bff-a48a-d982c4a9f5ae","Type":"ContainerStarted","Data":"3f5df07e2ad2a04ee469ee6be96ba81e9323f497215365f1c079c7ca62a8fe32"} Dec 09 10:24:52 crc kubenswrapper[4786]: I1209 
10:24:52.970199 4786 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 10:24:53 crc kubenswrapper[4786]: I1209 10:24:53.983006 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bv7xk" event={"ID":"ddb5fcdf-c8c7-4bff-a48a-d982c4a9f5ae","Type":"ContainerStarted","Data":"2e6ab483dfebab27a0859692598c84f5238bec0395fc8fed31da11f435186c8c"} Dec 09 10:24:54 crc kubenswrapper[4786]: I1209 10:24:54.996685 4786 generic.go:334] "Generic (PLEG): container finished" podID="ddb5fcdf-c8c7-4bff-a48a-d982c4a9f5ae" containerID="2e6ab483dfebab27a0859692598c84f5238bec0395fc8fed31da11f435186c8c" exitCode=0 Dec 09 10:24:54 crc kubenswrapper[4786]: I1209 10:24:54.996785 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bv7xk" event={"ID":"ddb5fcdf-c8c7-4bff-a48a-d982c4a9f5ae","Type":"ContainerDied","Data":"2e6ab483dfebab27a0859692598c84f5238bec0395fc8fed31da11f435186c8c"} Dec 09 10:24:56 crc kubenswrapper[4786]: I1209 10:24:56.011128 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bv7xk" event={"ID":"ddb5fcdf-c8c7-4bff-a48a-d982c4a9f5ae","Type":"ContainerStarted","Data":"1b46fa403e92dd0fdd3f15b7692a329b1a07f5aa969ba01d0eb292e1245941cb"} Dec 09 10:24:56 crc kubenswrapper[4786]: I1209 10:24:56.037079 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bv7xk" podStartSLOduration=2.590922925 podStartE2EDuration="5.037059819s" podCreationTimestamp="2025-12-09 10:24:51 +0000 UTC" firstStartedPulling="2025-12-09 10:24:52.969868256 +0000 UTC m=+6058.853489492" lastFinishedPulling="2025-12-09 10:24:55.41600516 +0000 UTC m=+6061.299626386" observedRunningTime="2025-12-09 10:24:56.030749624 +0000 UTC m=+6061.914370860" watchObservedRunningTime="2025-12-09 10:24:56.037059819 +0000 UTC m=+6061.920681035" Dec 09 10:25:01 crc 
kubenswrapper[4786]: I1209 10:25:01.910953 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bv7xk" Dec 09 10:25:01 crc kubenswrapper[4786]: I1209 10:25:01.912607 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bv7xk" Dec 09 10:25:01 crc kubenswrapper[4786]: I1209 10:25:01.960118 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bv7xk" Dec 09 10:25:02 crc kubenswrapper[4786]: I1209 10:25:02.149032 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bv7xk" Dec 09 10:25:02 crc kubenswrapper[4786]: I1209 10:25:02.229461 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bv7xk"] Dec 09 10:25:04 crc kubenswrapper[4786]: I1209 10:25:04.087677 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bv7xk" podUID="ddb5fcdf-c8c7-4bff-a48a-d982c4a9f5ae" containerName="registry-server" containerID="cri-o://1b46fa403e92dd0fdd3f15b7692a329b1a07f5aa969ba01d0eb292e1245941cb" gracePeriod=2 Dec 09 10:25:05 crc kubenswrapper[4786]: I1209 10:25:05.099825 4786 generic.go:334] "Generic (PLEG): container finished" podID="ddb5fcdf-c8c7-4bff-a48a-d982c4a9f5ae" containerID="1b46fa403e92dd0fdd3f15b7692a329b1a07f5aa969ba01d0eb292e1245941cb" exitCode=0 Dec 09 10:25:05 crc kubenswrapper[4786]: I1209 10:25:05.100079 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bv7xk" event={"ID":"ddb5fcdf-c8c7-4bff-a48a-d982c4a9f5ae","Type":"ContainerDied","Data":"1b46fa403e92dd0fdd3f15b7692a329b1a07f5aa969ba01d0eb292e1245941cb"} Dec 09 10:25:05 crc kubenswrapper[4786]: I1209 10:25:05.100163 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-bv7xk" event={"ID":"ddb5fcdf-c8c7-4bff-a48a-d982c4a9f5ae","Type":"ContainerDied","Data":"3f5df07e2ad2a04ee469ee6be96ba81e9323f497215365f1c079c7ca62a8fe32"} Dec 09 10:25:05 crc kubenswrapper[4786]: I1209 10:25:05.100177 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f5df07e2ad2a04ee469ee6be96ba81e9323f497215365f1c079c7ca62a8fe32" Dec 09 10:25:05 crc kubenswrapper[4786]: I1209 10:25:05.130179 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bv7xk" Dec 09 10:25:05 crc kubenswrapper[4786]: I1209 10:25:05.319775 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddb5fcdf-c8c7-4bff-a48a-d982c4a9f5ae-utilities\") pod \"ddb5fcdf-c8c7-4bff-a48a-d982c4a9f5ae\" (UID: \"ddb5fcdf-c8c7-4bff-a48a-d982c4a9f5ae\") " Dec 09 10:25:05 crc kubenswrapper[4786]: I1209 10:25:05.321020 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddb5fcdf-c8c7-4bff-a48a-d982c4a9f5ae-catalog-content\") pod \"ddb5fcdf-c8c7-4bff-a48a-d982c4a9f5ae\" (UID: \"ddb5fcdf-c8c7-4bff-a48a-d982c4a9f5ae\") " Dec 09 10:25:05 crc kubenswrapper[4786]: I1209 10:25:05.321297 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddb5fcdf-c8c7-4bff-a48a-d982c4a9f5ae-utilities" (OuterVolumeSpecName: "utilities") pod "ddb5fcdf-c8c7-4bff-a48a-d982c4a9f5ae" (UID: "ddb5fcdf-c8c7-4bff-a48a-d982c4a9f5ae"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 10:25:05 crc kubenswrapper[4786]: I1209 10:25:05.328079 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9q9s\" (UniqueName: \"kubernetes.io/projected/ddb5fcdf-c8c7-4bff-a48a-d982c4a9f5ae-kube-api-access-j9q9s\") pod \"ddb5fcdf-c8c7-4bff-a48a-d982c4a9f5ae\" (UID: \"ddb5fcdf-c8c7-4bff-a48a-d982c4a9f5ae\") " Dec 09 10:25:05 crc kubenswrapper[4786]: I1209 10:25:05.329617 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ddb5fcdf-c8c7-4bff-a48a-d982c4a9f5ae-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 10:25:05 crc kubenswrapper[4786]: I1209 10:25:05.334975 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ddb5fcdf-c8c7-4bff-a48a-d982c4a9f5ae-kube-api-access-j9q9s" (OuterVolumeSpecName: "kube-api-access-j9q9s") pod "ddb5fcdf-c8c7-4bff-a48a-d982c4a9f5ae" (UID: "ddb5fcdf-c8c7-4bff-a48a-d982c4a9f5ae"). InnerVolumeSpecName "kube-api-access-j9q9s". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 10:25:05 crc kubenswrapper[4786]: I1209 10:25:05.408209 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ddb5fcdf-c8c7-4bff-a48a-d982c4a9f5ae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ddb5fcdf-c8c7-4bff-a48a-d982c4a9f5ae" (UID: "ddb5fcdf-c8c7-4bff-a48a-d982c4a9f5ae"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 10:25:05 crc kubenswrapper[4786]: I1209 10:25:05.431379 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9q9s\" (UniqueName: \"kubernetes.io/projected/ddb5fcdf-c8c7-4bff-a48a-d982c4a9f5ae-kube-api-access-j9q9s\") on node \"crc\" DevicePath \"\"" Dec 09 10:25:05 crc kubenswrapper[4786]: I1209 10:25:05.431453 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ddb5fcdf-c8c7-4bff-a48a-d982c4a9f5ae-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 10:25:06 crc kubenswrapper[4786]: I1209 10:25:06.108983 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bv7xk" Dec 09 10:25:06 crc kubenswrapper[4786]: I1209 10:25:06.161572 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bv7xk"] Dec 09 10:25:06 crc kubenswrapper[4786]: I1209 10:25:06.173516 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bv7xk"] Dec 09 10:25:06 crc kubenswrapper[4786]: I1209 10:25:06.189148 4786 scope.go:117] "RemoveContainer" containerID="20ecfe54f5ccfb14285d4fec242fde22ae60896664f5a23bdfa24a734964b062" Dec 09 10:25:06 crc kubenswrapper[4786]: E1209 10:25:06.189498 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:25:06 crc kubenswrapper[4786]: I1209 10:25:06.726183 4786 scope.go:117] "RemoveContainer" 
containerID="37ccffca5b57091e2ed5b1f0ad2fb0e8e9fd52e59410564951692ea008ee0aa1" Dec 09 10:25:06 crc kubenswrapper[4786]: I1209 10:25:06.753661 4786 scope.go:117] "RemoveContainer" containerID="a7a1d72c20d3d85cdfdb92e575f8e40b1631b34c163e4da4b707948f36e961e0" Dec 09 10:25:07 crc kubenswrapper[4786]: I1209 10:25:07.210483 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddb5fcdf-c8c7-4bff-a48a-d982c4a9f5ae" path="/var/lib/kubelet/pods/ddb5fcdf-c8c7-4bff-a48a-d982c4a9f5ae/volumes" Dec 09 10:25:19 crc kubenswrapper[4786]: I1209 10:25:19.188885 4786 scope.go:117] "RemoveContainer" containerID="20ecfe54f5ccfb14285d4fec242fde22ae60896664f5a23bdfa24a734964b062" Dec 09 10:25:19 crc kubenswrapper[4786]: E1209 10:25:19.190107 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:25:34 crc kubenswrapper[4786]: I1209 10:25:34.188530 4786 scope.go:117] "RemoveContainer" containerID="20ecfe54f5ccfb14285d4fec242fde22ae60896664f5a23bdfa24a734964b062" Dec 09 10:25:34 crc kubenswrapper[4786]: E1209 10:25:34.190523 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:25:49 crc kubenswrapper[4786]: I1209 10:25:49.188726 4786 scope.go:117] "RemoveContainer" 
containerID="20ecfe54f5ccfb14285d4fec242fde22ae60896664f5a23bdfa24a734964b062" Dec 09 10:25:49 crc kubenswrapper[4786]: E1209 10:25:49.189760 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:26:02 crc kubenswrapper[4786]: I1209 10:26:02.188983 4786 scope.go:117] "RemoveContainer" containerID="20ecfe54f5ccfb14285d4fec242fde22ae60896664f5a23bdfa24a734964b062" Dec 09 10:26:02 crc kubenswrapper[4786]: E1209 10:26:02.189925 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:26:17 crc kubenswrapper[4786]: I1209 10:26:17.188366 4786 scope.go:117] "RemoveContainer" containerID="20ecfe54f5ccfb14285d4fec242fde22ae60896664f5a23bdfa24a734964b062" Dec 09 10:26:17 crc kubenswrapper[4786]: E1209 10:26:17.189142 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:26:32 crc kubenswrapper[4786]: I1209 10:26:32.189294 4786 scope.go:117] 
"RemoveContainer" containerID="20ecfe54f5ccfb14285d4fec242fde22ae60896664f5a23bdfa24a734964b062" Dec 09 10:26:32 crc kubenswrapper[4786]: E1209 10:26:32.190142 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:26:46 crc kubenswrapper[4786]: I1209 10:26:46.189098 4786 scope.go:117] "RemoveContainer" containerID="20ecfe54f5ccfb14285d4fec242fde22ae60896664f5a23bdfa24a734964b062" Dec 09 10:26:46 crc kubenswrapper[4786]: E1209 10:26:46.189869 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:27:00 crc kubenswrapper[4786]: I1209 10:27:00.188371 4786 scope.go:117] "RemoveContainer" containerID="20ecfe54f5ccfb14285d4fec242fde22ae60896664f5a23bdfa24a734964b062" Dec 09 10:27:00 crc kubenswrapper[4786]: E1209 10:27:00.189640 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:27:15 crc kubenswrapper[4786]: I1209 10:27:15.196577 
4786 scope.go:117] "RemoveContainer" containerID="20ecfe54f5ccfb14285d4fec242fde22ae60896664f5a23bdfa24a734964b062" Dec 09 10:27:15 crc kubenswrapper[4786]: E1209 10:27:15.197682 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:27:29 crc kubenswrapper[4786]: I1209 10:27:29.188656 4786 scope.go:117] "RemoveContainer" containerID="20ecfe54f5ccfb14285d4fec242fde22ae60896664f5a23bdfa24a734964b062" Dec 09 10:27:29 crc kubenswrapper[4786]: E1209 10:27:29.189769 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:27:43 crc kubenswrapper[4786]: I1209 10:27:43.188877 4786 scope.go:117] "RemoveContainer" containerID="20ecfe54f5ccfb14285d4fec242fde22ae60896664f5a23bdfa24a734964b062" Dec 09 10:27:43 crc kubenswrapper[4786]: E1209 10:27:43.189850 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:27:52 crc kubenswrapper[4786]: I1209 
10:27:52.386180 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bqjfr/must-gather-llsjs"] Dec 09 10:27:52 crc kubenswrapper[4786]: E1209 10:27:52.387568 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddb5fcdf-c8c7-4bff-a48a-d982c4a9f5ae" containerName="extract-utilities" Dec 09 10:27:52 crc kubenswrapper[4786]: I1209 10:27:52.387597 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddb5fcdf-c8c7-4bff-a48a-d982c4a9f5ae" containerName="extract-utilities" Dec 09 10:27:52 crc kubenswrapper[4786]: E1209 10:27:52.387661 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddb5fcdf-c8c7-4bff-a48a-d982c4a9f5ae" containerName="extract-content" Dec 09 10:27:52 crc kubenswrapper[4786]: I1209 10:27:52.387674 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddb5fcdf-c8c7-4bff-a48a-d982c4a9f5ae" containerName="extract-content" Dec 09 10:27:52 crc kubenswrapper[4786]: E1209 10:27:52.387711 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ddb5fcdf-c8c7-4bff-a48a-d982c4a9f5ae" containerName="registry-server" Dec 09 10:27:52 crc kubenswrapper[4786]: I1209 10:27:52.387722 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="ddb5fcdf-c8c7-4bff-a48a-d982c4a9f5ae" containerName="registry-server" Dec 09 10:27:52 crc kubenswrapper[4786]: I1209 10:27:52.387976 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="ddb5fcdf-c8c7-4bff-a48a-d982c4a9f5ae" containerName="registry-server" Dec 09 10:27:52 crc kubenswrapper[4786]: I1209 10:27:52.389457 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bqjfr/must-gather-llsjs" Dec 09 10:27:52 crc kubenswrapper[4786]: I1209 10:27:52.400801 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-bqjfr"/"openshift-service-ca.crt" Dec 09 10:27:52 crc kubenswrapper[4786]: I1209 10:27:52.412814 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-bqjfr"/"kube-root-ca.crt" Dec 09 10:27:52 crc kubenswrapper[4786]: I1209 10:27:52.416752 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-bqjfr/must-gather-llsjs"] Dec 09 10:27:52 crc kubenswrapper[4786]: I1209 10:27:52.541908 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9v7c\" (UniqueName: \"kubernetes.io/projected/3013a14f-cbf4-44a7-9b6c-59999fcb053d-kube-api-access-w9v7c\") pod \"must-gather-llsjs\" (UID: \"3013a14f-cbf4-44a7-9b6c-59999fcb053d\") " pod="openshift-must-gather-bqjfr/must-gather-llsjs" Dec 09 10:27:52 crc kubenswrapper[4786]: I1209 10:27:52.541999 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3013a14f-cbf4-44a7-9b6c-59999fcb053d-must-gather-output\") pod \"must-gather-llsjs\" (UID: \"3013a14f-cbf4-44a7-9b6c-59999fcb053d\") " pod="openshift-must-gather-bqjfr/must-gather-llsjs" Dec 09 10:27:52 crc kubenswrapper[4786]: I1209 10:27:52.644230 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9v7c\" (UniqueName: \"kubernetes.io/projected/3013a14f-cbf4-44a7-9b6c-59999fcb053d-kube-api-access-w9v7c\") pod \"must-gather-llsjs\" (UID: \"3013a14f-cbf4-44a7-9b6c-59999fcb053d\") " pod="openshift-must-gather-bqjfr/must-gather-llsjs" Dec 09 10:27:52 crc kubenswrapper[4786]: I1209 10:27:52.644321 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3013a14f-cbf4-44a7-9b6c-59999fcb053d-must-gather-output\") pod \"must-gather-llsjs\" (UID: \"3013a14f-cbf4-44a7-9b6c-59999fcb053d\") " pod="openshift-must-gather-bqjfr/must-gather-llsjs" Dec 09 10:27:52 crc kubenswrapper[4786]: I1209 10:27:52.644729 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3013a14f-cbf4-44a7-9b6c-59999fcb053d-must-gather-output\") pod \"must-gather-llsjs\" (UID: \"3013a14f-cbf4-44a7-9b6c-59999fcb053d\") " pod="openshift-must-gather-bqjfr/must-gather-llsjs" Dec 09 10:27:52 crc kubenswrapper[4786]: I1209 10:27:52.677157 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9v7c\" (UniqueName: \"kubernetes.io/projected/3013a14f-cbf4-44a7-9b6c-59999fcb053d-kube-api-access-w9v7c\") pod \"must-gather-llsjs\" (UID: \"3013a14f-cbf4-44a7-9b6c-59999fcb053d\") " pod="openshift-must-gather-bqjfr/must-gather-llsjs" Dec 09 10:27:52 crc kubenswrapper[4786]: I1209 10:27:52.710535 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bqjfr/must-gather-llsjs" Dec 09 10:27:53 crc kubenswrapper[4786]: I1209 10:27:53.178839 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-bqjfr/must-gather-llsjs"] Dec 09 10:27:54 crc kubenswrapper[4786]: I1209 10:27:54.000751 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bqjfr/must-gather-llsjs" event={"ID":"3013a14f-cbf4-44a7-9b6c-59999fcb053d","Type":"ContainerStarted","Data":"813202d7532bb903417efc7442d33f2f85e9406bfc0fc1a1219633863d516ca4"} Dec 09 10:27:54 crc kubenswrapper[4786]: I1209 10:27:54.000810 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bqjfr/must-gather-llsjs" event={"ID":"3013a14f-cbf4-44a7-9b6c-59999fcb053d","Type":"ContainerStarted","Data":"dd1b5ec9d0cd9a350e71121137e467e98457892d934113880922337bd5f0d34f"} Dec 09 10:27:54 crc kubenswrapper[4786]: I1209 10:27:54.000825 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bqjfr/must-gather-llsjs" event={"ID":"3013a14f-cbf4-44a7-9b6c-59999fcb053d","Type":"ContainerStarted","Data":"aee59f7b0d0e12670785157fc5ea8d9a67b7871274fc47949823f64c5c82c54d"} Dec 09 10:27:54 crc kubenswrapper[4786]: I1209 10:27:54.018531 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-bqjfr/must-gather-llsjs" podStartSLOduration=2.018514423 podStartE2EDuration="2.018514423s" podCreationTimestamp="2025-12-09 10:27:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 10:27:54.014000962 +0000 UTC m=+6239.897622198" watchObservedRunningTime="2025-12-09 10:27:54.018514423 +0000 UTC m=+6239.902135639" Dec 09 10:27:55 crc kubenswrapper[4786]: I1209 10:27:55.199232 4786 scope.go:117] "RemoveContainer" containerID="20ecfe54f5ccfb14285d4fec242fde22ae60896664f5a23bdfa24a734964b062" Dec 09 10:27:55 crc 
kubenswrapper[4786]: E1209 10:27:55.202058 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:27:57 crc kubenswrapper[4786]: I1209 10:27:57.740377 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bqjfr/crc-debug-cwt42"] Dec 09 10:27:57 crc kubenswrapper[4786]: I1209 10:27:57.742480 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bqjfr/crc-debug-cwt42" Dec 09 10:27:57 crc kubenswrapper[4786]: I1209 10:27:57.748361 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-bqjfr"/"default-dockercfg-65c6q" Dec 09 10:27:57 crc kubenswrapper[4786]: I1209 10:27:57.891807 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4071fc1f-b12e-47bf-a986-8dc5c4ea83fc-host\") pod \"crc-debug-cwt42\" (UID: \"4071fc1f-b12e-47bf-a986-8dc5c4ea83fc\") " pod="openshift-must-gather-bqjfr/crc-debug-cwt42" Dec 09 10:27:57 crc kubenswrapper[4786]: I1209 10:27:57.892056 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k76jd\" (UniqueName: \"kubernetes.io/projected/4071fc1f-b12e-47bf-a986-8dc5c4ea83fc-kube-api-access-k76jd\") pod \"crc-debug-cwt42\" (UID: \"4071fc1f-b12e-47bf-a986-8dc5c4ea83fc\") " pod="openshift-must-gather-bqjfr/crc-debug-cwt42" Dec 09 10:27:57 crc kubenswrapper[4786]: I1209 10:27:57.997813 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/4071fc1f-b12e-47bf-a986-8dc5c4ea83fc-host\") pod \"crc-debug-cwt42\" (UID: \"4071fc1f-b12e-47bf-a986-8dc5c4ea83fc\") " pod="openshift-must-gather-bqjfr/crc-debug-cwt42" Dec 09 10:27:57 crc kubenswrapper[4786]: I1209 10:27:57.997905 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k76jd\" (UniqueName: \"kubernetes.io/projected/4071fc1f-b12e-47bf-a986-8dc5c4ea83fc-kube-api-access-k76jd\") pod \"crc-debug-cwt42\" (UID: \"4071fc1f-b12e-47bf-a986-8dc5c4ea83fc\") " pod="openshift-must-gather-bqjfr/crc-debug-cwt42" Dec 09 10:27:57 crc kubenswrapper[4786]: I1209 10:27:57.997989 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4071fc1f-b12e-47bf-a986-8dc5c4ea83fc-host\") pod \"crc-debug-cwt42\" (UID: \"4071fc1f-b12e-47bf-a986-8dc5c4ea83fc\") " pod="openshift-must-gather-bqjfr/crc-debug-cwt42" Dec 09 10:27:58 crc kubenswrapper[4786]: I1209 10:27:58.020126 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k76jd\" (UniqueName: \"kubernetes.io/projected/4071fc1f-b12e-47bf-a986-8dc5c4ea83fc-kube-api-access-k76jd\") pod \"crc-debug-cwt42\" (UID: \"4071fc1f-b12e-47bf-a986-8dc5c4ea83fc\") " pod="openshift-must-gather-bqjfr/crc-debug-cwt42" Dec 09 10:27:58 crc kubenswrapper[4786]: I1209 10:27:58.063239 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bqjfr/crc-debug-cwt42" Dec 09 10:27:58 crc kubenswrapper[4786]: W1209 10:27:58.103300 4786 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4071fc1f_b12e_47bf_a986_8dc5c4ea83fc.slice/crio-da92bf6408eb0131eca5cb04e70086d6421cb528bdc7f16c3dd814fd4cea0fb0 WatchSource:0}: Error finding container da92bf6408eb0131eca5cb04e70086d6421cb528bdc7f16c3dd814fd4cea0fb0: Status 404 returned error can't find the container with id da92bf6408eb0131eca5cb04e70086d6421cb528bdc7f16c3dd814fd4cea0fb0 Dec 09 10:27:59 crc kubenswrapper[4786]: I1209 10:27:59.063377 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bqjfr/crc-debug-cwt42" event={"ID":"4071fc1f-b12e-47bf-a986-8dc5c4ea83fc","Type":"ContainerStarted","Data":"cb6d7bfe82b6f0f65d1f238d4107dc7a56b8423d9f3287a1a3f608d38df694c8"} Dec 09 10:27:59 crc kubenswrapper[4786]: I1209 10:27:59.063999 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bqjfr/crc-debug-cwt42" event={"ID":"4071fc1f-b12e-47bf-a986-8dc5c4ea83fc","Type":"ContainerStarted","Data":"da92bf6408eb0131eca5cb04e70086d6421cb528bdc7f16c3dd814fd4cea0fb0"} Dec 09 10:27:59 crc kubenswrapper[4786]: I1209 10:27:59.089943 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-bqjfr/crc-debug-cwt42" podStartSLOduration=2.089915838 podStartE2EDuration="2.089915838s" podCreationTimestamp="2025-12-09 10:27:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-09 10:27:59.080215141 +0000 UTC m=+6244.963836387" watchObservedRunningTime="2025-12-09 10:27:59.089915838 +0000 UTC m=+6244.973537084" Dec 09 10:28:08 crc kubenswrapper[4786]: I1209 10:28:08.188637 4786 scope.go:117] "RemoveContainer" 
containerID="20ecfe54f5ccfb14285d4fec242fde22ae60896664f5a23bdfa24a734964b062" Dec 09 10:28:08 crc kubenswrapper[4786]: E1209 10:28:08.189530 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:28:22 crc kubenswrapper[4786]: I1209 10:28:22.188980 4786 scope.go:117] "RemoveContainer" containerID="20ecfe54f5ccfb14285d4fec242fde22ae60896664f5a23bdfa24a734964b062" Dec 09 10:28:22 crc kubenswrapper[4786]: E1209 10:28:22.189681 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:28:35 crc kubenswrapper[4786]: I1209 10:28:35.196025 4786 scope.go:117] "RemoveContainer" containerID="20ecfe54f5ccfb14285d4fec242fde22ae60896664f5a23bdfa24a734964b062" Dec 09 10:28:35 crc kubenswrapper[4786]: E1209 10:28:35.196691 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:28:39 crc kubenswrapper[4786]: I1209 10:28:39.522709 4786 generic.go:334] 
"Generic (PLEG): container finished" podID="4071fc1f-b12e-47bf-a986-8dc5c4ea83fc" containerID="cb6d7bfe82b6f0f65d1f238d4107dc7a56b8423d9f3287a1a3f608d38df694c8" exitCode=0 Dec 09 10:28:39 crc kubenswrapper[4786]: I1209 10:28:39.523334 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bqjfr/crc-debug-cwt42" event={"ID":"4071fc1f-b12e-47bf-a986-8dc5c4ea83fc","Type":"ContainerDied","Data":"cb6d7bfe82b6f0f65d1f238d4107dc7a56b8423d9f3287a1a3f608d38df694c8"} Dec 09 10:28:40 crc kubenswrapper[4786]: I1209 10:28:40.663718 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bqjfr/crc-debug-cwt42" Dec 09 10:28:40 crc kubenswrapper[4786]: I1209 10:28:40.698048 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bqjfr/crc-debug-cwt42"] Dec 09 10:28:40 crc kubenswrapper[4786]: I1209 10:28:40.707486 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bqjfr/crc-debug-cwt42"] Dec 09 10:28:40 crc kubenswrapper[4786]: I1209 10:28:40.775468 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k76jd\" (UniqueName: \"kubernetes.io/projected/4071fc1f-b12e-47bf-a986-8dc5c4ea83fc-kube-api-access-k76jd\") pod \"4071fc1f-b12e-47bf-a986-8dc5c4ea83fc\" (UID: \"4071fc1f-b12e-47bf-a986-8dc5c4ea83fc\") " Dec 09 10:28:40 crc kubenswrapper[4786]: I1209 10:28:40.775558 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4071fc1f-b12e-47bf-a986-8dc5c4ea83fc-host\") pod \"4071fc1f-b12e-47bf-a986-8dc5c4ea83fc\" (UID: \"4071fc1f-b12e-47bf-a986-8dc5c4ea83fc\") " Dec 09 10:28:40 crc kubenswrapper[4786]: I1209 10:28:40.776037 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4071fc1f-b12e-47bf-a986-8dc5c4ea83fc-host" (OuterVolumeSpecName: "host") pod 
"4071fc1f-b12e-47bf-a986-8dc5c4ea83fc" (UID: "4071fc1f-b12e-47bf-a986-8dc5c4ea83fc"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 10:28:40 crc kubenswrapper[4786]: I1209 10:28:40.780803 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4071fc1f-b12e-47bf-a986-8dc5c4ea83fc-kube-api-access-k76jd" (OuterVolumeSpecName: "kube-api-access-k76jd") pod "4071fc1f-b12e-47bf-a986-8dc5c4ea83fc" (UID: "4071fc1f-b12e-47bf-a986-8dc5c4ea83fc"). InnerVolumeSpecName "kube-api-access-k76jd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 10:28:40 crc kubenswrapper[4786]: I1209 10:28:40.878374 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k76jd\" (UniqueName: \"kubernetes.io/projected/4071fc1f-b12e-47bf-a986-8dc5c4ea83fc-kube-api-access-k76jd\") on node \"crc\" DevicePath \"\"" Dec 09 10:28:40 crc kubenswrapper[4786]: I1209 10:28:40.878416 4786 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4071fc1f-b12e-47bf-a986-8dc5c4ea83fc-host\") on node \"crc\" DevicePath \"\"" Dec 09 10:28:41 crc kubenswrapper[4786]: I1209 10:28:41.199247 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4071fc1f-b12e-47bf-a986-8dc5c4ea83fc" path="/var/lib/kubelet/pods/4071fc1f-b12e-47bf-a986-8dc5c4ea83fc/volumes" Dec 09 10:28:41 crc kubenswrapper[4786]: I1209 10:28:41.544128 4786 scope.go:117] "RemoveContainer" containerID="cb6d7bfe82b6f0f65d1f238d4107dc7a56b8423d9f3287a1a3f608d38df694c8" Dec 09 10:28:41 crc kubenswrapper[4786]: I1209 10:28:41.544447 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bqjfr/crc-debug-cwt42" Dec 09 10:28:41 crc kubenswrapper[4786]: I1209 10:28:41.892201 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bqjfr/crc-debug-qj5qn"] Dec 09 10:28:41 crc kubenswrapper[4786]: E1209 10:28:41.892703 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4071fc1f-b12e-47bf-a986-8dc5c4ea83fc" containerName="container-00" Dec 09 10:28:41 crc kubenswrapper[4786]: I1209 10:28:41.892716 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="4071fc1f-b12e-47bf-a986-8dc5c4ea83fc" containerName="container-00" Dec 09 10:28:41 crc kubenswrapper[4786]: I1209 10:28:41.892965 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="4071fc1f-b12e-47bf-a986-8dc5c4ea83fc" containerName="container-00" Dec 09 10:28:41 crc kubenswrapper[4786]: I1209 10:28:41.893813 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bqjfr/crc-debug-qj5qn" Dec 09 10:28:41 crc kubenswrapper[4786]: I1209 10:28:41.895762 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-bqjfr"/"default-dockercfg-65c6q" Dec 09 10:28:42 crc kubenswrapper[4786]: I1209 10:28:42.220235 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lc4l\" (UniqueName: \"kubernetes.io/projected/638e15dd-dc52-4293-ac6a-0a33ac088f6b-kube-api-access-7lc4l\") pod \"crc-debug-qj5qn\" (UID: \"638e15dd-dc52-4293-ac6a-0a33ac088f6b\") " pod="openshift-must-gather-bqjfr/crc-debug-qj5qn" Dec 09 10:28:42 crc kubenswrapper[4786]: I1209 10:28:42.220402 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/638e15dd-dc52-4293-ac6a-0a33ac088f6b-host\") pod \"crc-debug-qj5qn\" (UID: \"638e15dd-dc52-4293-ac6a-0a33ac088f6b\") " 
pod="openshift-must-gather-bqjfr/crc-debug-qj5qn" Dec 09 10:28:42 crc kubenswrapper[4786]: I1209 10:28:42.322872 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/638e15dd-dc52-4293-ac6a-0a33ac088f6b-host\") pod \"crc-debug-qj5qn\" (UID: \"638e15dd-dc52-4293-ac6a-0a33ac088f6b\") " pod="openshift-must-gather-bqjfr/crc-debug-qj5qn" Dec 09 10:28:42 crc kubenswrapper[4786]: I1209 10:28:42.323024 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lc4l\" (UniqueName: \"kubernetes.io/projected/638e15dd-dc52-4293-ac6a-0a33ac088f6b-kube-api-access-7lc4l\") pod \"crc-debug-qj5qn\" (UID: \"638e15dd-dc52-4293-ac6a-0a33ac088f6b\") " pod="openshift-must-gather-bqjfr/crc-debug-qj5qn" Dec 09 10:28:42 crc kubenswrapper[4786]: I1209 10:28:42.323705 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/638e15dd-dc52-4293-ac6a-0a33ac088f6b-host\") pod \"crc-debug-qj5qn\" (UID: \"638e15dd-dc52-4293-ac6a-0a33ac088f6b\") " pod="openshift-must-gather-bqjfr/crc-debug-qj5qn" Dec 09 10:28:42 crc kubenswrapper[4786]: I1209 10:28:42.345615 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lc4l\" (UniqueName: \"kubernetes.io/projected/638e15dd-dc52-4293-ac6a-0a33ac088f6b-kube-api-access-7lc4l\") pod \"crc-debug-qj5qn\" (UID: \"638e15dd-dc52-4293-ac6a-0a33ac088f6b\") " pod="openshift-must-gather-bqjfr/crc-debug-qj5qn" Dec 09 10:28:42 crc kubenswrapper[4786]: I1209 10:28:42.516971 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bqjfr/crc-debug-qj5qn" Dec 09 10:28:43 crc kubenswrapper[4786]: I1209 10:28:43.576519 4786 generic.go:334] "Generic (PLEG): container finished" podID="638e15dd-dc52-4293-ac6a-0a33ac088f6b" containerID="324a66be055f5eb9b2af818f90e0337073632933c63cf09f0b814149b113cec9" exitCode=0 Dec 09 10:28:43 crc kubenswrapper[4786]: I1209 10:28:43.576572 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bqjfr/crc-debug-qj5qn" event={"ID":"638e15dd-dc52-4293-ac6a-0a33ac088f6b","Type":"ContainerDied","Data":"324a66be055f5eb9b2af818f90e0337073632933c63cf09f0b814149b113cec9"} Dec 09 10:28:43 crc kubenswrapper[4786]: I1209 10:28:43.576971 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bqjfr/crc-debug-qj5qn" event={"ID":"638e15dd-dc52-4293-ac6a-0a33ac088f6b","Type":"ContainerStarted","Data":"4b3db0f5b1952fe9a9242d58c32355a7ab267214e76cdf28ddc6e4f46bf3faeb"} Dec 09 10:28:44 crc kubenswrapper[4786]: I1209 10:28:44.779140 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bqjfr/crc-debug-qj5qn" Dec 09 10:28:44 crc kubenswrapper[4786]: I1209 10:28:44.878004 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/638e15dd-dc52-4293-ac6a-0a33ac088f6b-host\") pod \"638e15dd-dc52-4293-ac6a-0a33ac088f6b\" (UID: \"638e15dd-dc52-4293-ac6a-0a33ac088f6b\") " Dec 09 10:28:44 crc kubenswrapper[4786]: I1209 10:28:44.878214 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lc4l\" (UniqueName: \"kubernetes.io/projected/638e15dd-dc52-4293-ac6a-0a33ac088f6b-kube-api-access-7lc4l\") pod \"638e15dd-dc52-4293-ac6a-0a33ac088f6b\" (UID: \"638e15dd-dc52-4293-ac6a-0a33ac088f6b\") " Dec 09 10:28:44 crc kubenswrapper[4786]: I1209 10:28:44.878501 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/638e15dd-dc52-4293-ac6a-0a33ac088f6b-host" (OuterVolumeSpecName: "host") pod "638e15dd-dc52-4293-ac6a-0a33ac088f6b" (UID: "638e15dd-dc52-4293-ac6a-0a33ac088f6b"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 10:28:44 crc kubenswrapper[4786]: I1209 10:28:44.878998 4786 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/638e15dd-dc52-4293-ac6a-0a33ac088f6b-host\") on node \"crc\" DevicePath \"\"" Dec 09 10:28:44 crc kubenswrapper[4786]: I1209 10:28:44.884833 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/638e15dd-dc52-4293-ac6a-0a33ac088f6b-kube-api-access-7lc4l" (OuterVolumeSpecName: "kube-api-access-7lc4l") pod "638e15dd-dc52-4293-ac6a-0a33ac088f6b" (UID: "638e15dd-dc52-4293-ac6a-0a33ac088f6b"). InnerVolumeSpecName "kube-api-access-7lc4l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 10:28:45 crc kubenswrapper[4786]: I1209 10:28:45.003276 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lc4l\" (UniqueName: \"kubernetes.io/projected/638e15dd-dc52-4293-ac6a-0a33ac088f6b-kube-api-access-7lc4l\") on node \"crc\" DevicePath \"\"" Dec 09 10:28:45 crc kubenswrapper[4786]: I1209 10:28:45.605514 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bqjfr/crc-debug-qj5qn" event={"ID":"638e15dd-dc52-4293-ac6a-0a33ac088f6b","Type":"ContainerDied","Data":"4b3db0f5b1952fe9a9242d58c32355a7ab267214e76cdf28ddc6e4f46bf3faeb"} Dec 09 10:28:45 crc kubenswrapper[4786]: I1209 10:28:45.605830 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b3db0f5b1952fe9a9242d58c32355a7ab267214e76cdf28ddc6e4f46bf3faeb" Dec 09 10:28:45 crc kubenswrapper[4786]: I1209 10:28:45.605884 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bqjfr/crc-debug-qj5qn" Dec 09 10:28:45 crc kubenswrapper[4786]: I1209 10:28:45.854589 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bqjfr/crc-debug-qj5qn"] Dec 09 10:28:45 crc kubenswrapper[4786]: I1209 10:28:45.864311 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bqjfr/crc-debug-qj5qn"] Dec 09 10:28:47 crc kubenswrapper[4786]: I1209 10:28:47.152808 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-bqjfr/crc-debug-z2lxl"] Dec 09 10:28:47 crc kubenswrapper[4786]: E1209 10:28:47.153595 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="638e15dd-dc52-4293-ac6a-0a33ac088f6b" containerName="container-00" Dec 09 10:28:47 crc kubenswrapper[4786]: I1209 10:28:47.153609 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="638e15dd-dc52-4293-ac6a-0a33ac088f6b" containerName="container-00" Dec 09 10:28:47 crc 
kubenswrapper[4786]: I1209 10:28:47.153836 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="638e15dd-dc52-4293-ac6a-0a33ac088f6b" containerName="container-00" Dec 09 10:28:47 crc kubenswrapper[4786]: I1209 10:28:47.154642 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bqjfr/crc-debug-z2lxl" Dec 09 10:28:47 crc kubenswrapper[4786]: I1209 10:28:47.157334 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-bqjfr"/"default-dockercfg-65c6q" Dec 09 10:28:47 crc kubenswrapper[4786]: I1209 10:28:47.200971 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="638e15dd-dc52-4293-ac6a-0a33ac088f6b" path="/var/lib/kubelet/pods/638e15dd-dc52-4293-ac6a-0a33ac088f6b/volumes" Dec 09 10:28:47 crc kubenswrapper[4786]: I1209 10:28:47.265955 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qlkc\" (UniqueName: \"kubernetes.io/projected/13160a7c-0ab9-4fef-8cb1-40282bd16aa7-kube-api-access-9qlkc\") pod \"crc-debug-z2lxl\" (UID: \"13160a7c-0ab9-4fef-8cb1-40282bd16aa7\") " pod="openshift-must-gather-bqjfr/crc-debug-z2lxl" Dec 09 10:28:47 crc kubenswrapper[4786]: I1209 10:28:47.266155 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/13160a7c-0ab9-4fef-8cb1-40282bd16aa7-host\") pod \"crc-debug-z2lxl\" (UID: \"13160a7c-0ab9-4fef-8cb1-40282bd16aa7\") " pod="openshift-must-gather-bqjfr/crc-debug-z2lxl" Dec 09 10:28:47 crc kubenswrapper[4786]: I1209 10:28:47.454211 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qlkc\" (UniqueName: \"kubernetes.io/projected/13160a7c-0ab9-4fef-8cb1-40282bd16aa7-kube-api-access-9qlkc\") pod \"crc-debug-z2lxl\" (UID: \"13160a7c-0ab9-4fef-8cb1-40282bd16aa7\") " pod="openshift-must-gather-bqjfr/crc-debug-z2lxl" Dec 
09 10:28:47 crc kubenswrapper[4786]: I1209 10:28:47.454337 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/13160a7c-0ab9-4fef-8cb1-40282bd16aa7-host\") pod \"crc-debug-z2lxl\" (UID: \"13160a7c-0ab9-4fef-8cb1-40282bd16aa7\") " pod="openshift-must-gather-bqjfr/crc-debug-z2lxl" Dec 09 10:28:47 crc kubenswrapper[4786]: I1209 10:28:47.454853 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/13160a7c-0ab9-4fef-8cb1-40282bd16aa7-host\") pod \"crc-debug-z2lxl\" (UID: \"13160a7c-0ab9-4fef-8cb1-40282bd16aa7\") " pod="openshift-must-gather-bqjfr/crc-debug-z2lxl" Dec 09 10:28:47 crc kubenswrapper[4786]: I1209 10:28:47.475400 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qlkc\" (UniqueName: \"kubernetes.io/projected/13160a7c-0ab9-4fef-8cb1-40282bd16aa7-kube-api-access-9qlkc\") pod \"crc-debug-z2lxl\" (UID: \"13160a7c-0ab9-4fef-8cb1-40282bd16aa7\") " pod="openshift-must-gather-bqjfr/crc-debug-z2lxl" Dec 09 10:28:47 crc kubenswrapper[4786]: I1209 10:28:47.773735 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bqjfr/crc-debug-z2lxl" Dec 09 10:28:48 crc kubenswrapper[4786]: I1209 10:28:48.635334 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bqjfr/crc-debug-z2lxl" event={"ID":"13160a7c-0ab9-4fef-8cb1-40282bd16aa7","Type":"ContainerStarted","Data":"d3497286ce2fd1234b8bc5f626aada04c11dc8aa3fa9c0128e6573cf8fefd727"} Dec 09 10:28:49 crc kubenswrapper[4786]: I1209 10:28:49.193448 4786 scope.go:117] "RemoveContainer" containerID="20ecfe54f5ccfb14285d4fec242fde22ae60896664f5a23bdfa24a734964b062" Dec 09 10:28:49 crc kubenswrapper[4786]: E1209 10:28:49.193717 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:28:49 crc kubenswrapper[4786]: I1209 10:28:49.647882 4786 generic.go:334] "Generic (PLEG): container finished" podID="13160a7c-0ab9-4fef-8cb1-40282bd16aa7" containerID="7fb80adebf88934de2b14608a3bc2e1bda4719673f73cb300ea714031a6372cd" exitCode=0 Dec 09 10:28:49 crc kubenswrapper[4786]: I1209 10:28:49.647960 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bqjfr/crc-debug-z2lxl" event={"ID":"13160a7c-0ab9-4fef-8cb1-40282bd16aa7","Type":"ContainerDied","Data":"7fb80adebf88934de2b14608a3bc2e1bda4719673f73cb300ea714031a6372cd"} Dec 09 10:28:49 crc kubenswrapper[4786]: I1209 10:28:49.694692 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bqjfr/crc-debug-z2lxl"] Dec 09 10:28:49 crc kubenswrapper[4786]: I1209 10:28:49.706647 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bqjfr/crc-debug-z2lxl"] Dec 09 10:28:50 crc 
kubenswrapper[4786]: I1209 10:28:50.767075 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bqjfr/crc-debug-z2lxl" Dec 09 10:28:50 crc kubenswrapper[4786]: I1209 10:28:50.798176 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qlkc\" (UniqueName: \"kubernetes.io/projected/13160a7c-0ab9-4fef-8cb1-40282bd16aa7-kube-api-access-9qlkc\") pod \"13160a7c-0ab9-4fef-8cb1-40282bd16aa7\" (UID: \"13160a7c-0ab9-4fef-8cb1-40282bd16aa7\") " Dec 09 10:28:50 crc kubenswrapper[4786]: I1209 10:28:50.798275 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/13160a7c-0ab9-4fef-8cb1-40282bd16aa7-host\") pod \"13160a7c-0ab9-4fef-8cb1-40282bd16aa7\" (UID: \"13160a7c-0ab9-4fef-8cb1-40282bd16aa7\") " Dec 09 10:28:50 crc kubenswrapper[4786]: I1209 10:28:50.798363 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13160a7c-0ab9-4fef-8cb1-40282bd16aa7-host" (OuterVolumeSpecName: "host") pod "13160a7c-0ab9-4fef-8cb1-40282bd16aa7" (UID: "13160a7c-0ab9-4fef-8cb1-40282bd16aa7"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 09 10:28:50 crc kubenswrapper[4786]: I1209 10:28:50.798946 4786 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/13160a7c-0ab9-4fef-8cb1-40282bd16aa7-host\") on node \"crc\" DevicePath \"\"" Dec 09 10:28:50 crc kubenswrapper[4786]: I1209 10:28:50.804243 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13160a7c-0ab9-4fef-8cb1-40282bd16aa7-kube-api-access-9qlkc" (OuterVolumeSpecName: "kube-api-access-9qlkc") pod "13160a7c-0ab9-4fef-8cb1-40282bd16aa7" (UID: "13160a7c-0ab9-4fef-8cb1-40282bd16aa7"). InnerVolumeSpecName "kube-api-access-9qlkc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 10:28:50 crc kubenswrapper[4786]: I1209 10:28:50.901207 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qlkc\" (UniqueName: \"kubernetes.io/projected/13160a7c-0ab9-4fef-8cb1-40282bd16aa7-kube-api-access-9qlkc\") on node \"crc\" DevicePath \"\"" Dec 09 10:28:51 crc kubenswrapper[4786]: I1209 10:28:51.199595 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13160a7c-0ab9-4fef-8cb1-40282bd16aa7" path="/var/lib/kubelet/pods/13160a7c-0ab9-4fef-8cb1-40282bd16aa7/volumes" Dec 09 10:28:51 crc kubenswrapper[4786]: I1209 10:28:51.674305 4786 scope.go:117] "RemoveContainer" containerID="7fb80adebf88934de2b14608a3bc2e1bda4719673f73cb300ea714031a6372cd" Dec 09 10:28:51 crc kubenswrapper[4786]: I1209 10:28:51.674558 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bqjfr/crc-debug-z2lxl" Dec 09 10:28:52 crc kubenswrapper[4786]: I1209 10:28:52.239806 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7cgvz"] Dec 09 10:28:52 crc kubenswrapper[4786]: E1209 10:28:52.240754 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13160a7c-0ab9-4fef-8cb1-40282bd16aa7" containerName="container-00" Dec 09 10:28:52 crc kubenswrapper[4786]: I1209 10:28:52.240777 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="13160a7c-0ab9-4fef-8cb1-40282bd16aa7" containerName="container-00" Dec 09 10:28:52 crc kubenswrapper[4786]: I1209 10:28:52.241090 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="13160a7c-0ab9-4fef-8cb1-40282bd16aa7" containerName="container-00" Dec 09 10:28:52 crc kubenswrapper[4786]: I1209 10:28:52.243256 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7cgvz" Dec 09 10:28:52 crc kubenswrapper[4786]: I1209 10:28:52.255314 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7cgvz"] Dec 09 10:28:52 crc kubenswrapper[4786]: I1209 10:28:52.332250 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcmf9\" (UniqueName: \"kubernetes.io/projected/23890e90-7f55-4892-8239-276ad50e4907-kube-api-access-wcmf9\") pod \"redhat-marketplace-7cgvz\" (UID: \"23890e90-7f55-4892-8239-276ad50e4907\") " pod="openshift-marketplace/redhat-marketplace-7cgvz" Dec 09 10:28:52 crc kubenswrapper[4786]: I1209 10:28:52.332633 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23890e90-7f55-4892-8239-276ad50e4907-catalog-content\") pod \"redhat-marketplace-7cgvz\" (UID: \"23890e90-7f55-4892-8239-276ad50e4907\") " pod="openshift-marketplace/redhat-marketplace-7cgvz" Dec 09 10:28:52 crc kubenswrapper[4786]: I1209 10:28:52.332958 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23890e90-7f55-4892-8239-276ad50e4907-utilities\") pod \"redhat-marketplace-7cgvz\" (UID: \"23890e90-7f55-4892-8239-276ad50e4907\") " pod="openshift-marketplace/redhat-marketplace-7cgvz" Dec 09 10:28:52 crc kubenswrapper[4786]: I1209 10:28:52.435966 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23890e90-7f55-4892-8239-276ad50e4907-utilities\") pod \"redhat-marketplace-7cgvz\" (UID: \"23890e90-7f55-4892-8239-276ad50e4907\") " pod="openshift-marketplace/redhat-marketplace-7cgvz" Dec 09 10:28:52 crc kubenswrapper[4786]: I1209 10:28:52.436134 4786 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-wcmf9\" (UniqueName: \"kubernetes.io/projected/23890e90-7f55-4892-8239-276ad50e4907-kube-api-access-wcmf9\") pod \"redhat-marketplace-7cgvz\" (UID: \"23890e90-7f55-4892-8239-276ad50e4907\") " pod="openshift-marketplace/redhat-marketplace-7cgvz" Dec 09 10:28:52 crc kubenswrapper[4786]: I1209 10:28:52.436212 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23890e90-7f55-4892-8239-276ad50e4907-catalog-content\") pod \"redhat-marketplace-7cgvz\" (UID: \"23890e90-7f55-4892-8239-276ad50e4907\") " pod="openshift-marketplace/redhat-marketplace-7cgvz" Dec 09 10:28:52 crc kubenswrapper[4786]: I1209 10:28:52.437023 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23890e90-7f55-4892-8239-276ad50e4907-catalog-content\") pod \"redhat-marketplace-7cgvz\" (UID: \"23890e90-7f55-4892-8239-276ad50e4907\") " pod="openshift-marketplace/redhat-marketplace-7cgvz" Dec 09 10:28:52 crc kubenswrapper[4786]: I1209 10:28:52.437320 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23890e90-7f55-4892-8239-276ad50e4907-utilities\") pod \"redhat-marketplace-7cgvz\" (UID: \"23890e90-7f55-4892-8239-276ad50e4907\") " pod="openshift-marketplace/redhat-marketplace-7cgvz" Dec 09 10:28:52 crc kubenswrapper[4786]: I1209 10:28:52.467545 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcmf9\" (UniqueName: \"kubernetes.io/projected/23890e90-7f55-4892-8239-276ad50e4907-kube-api-access-wcmf9\") pod \"redhat-marketplace-7cgvz\" (UID: \"23890e90-7f55-4892-8239-276ad50e4907\") " pod="openshift-marketplace/redhat-marketplace-7cgvz" Dec 09 10:28:52 crc kubenswrapper[4786]: I1209 10:28:52.695761 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7cgvz" Dec 09 10:28:53 crc kubenswrapper[4786]: I1209 10:28:53.325267 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7cgvz"] Dec 09 10:28:53 crc kubenswrapper[4786]: I1209 10:28:53.749593 4786 generic.go:334] "Generic (PLEG): container finished" podID="23890e90-7f55-4892-8239-276ad50e4907" containerID="2d3b061d6ea85291190e44a0593ace442aa85592e17569ef78acd12aeeb98d33" exitCode=0 Dec 09 10:28:53 crc kubenswrapper[4786]: I1209 10:28:53.749702 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7cgvz" event={"ID":"23890e90-7f55-4892-8239-276ad50e4907","Type":"ContainerDied","Data":"2d3b061d6ea85291190e44a0593ace442aa85592e17569ef78acd12aeeb98d33"} Dec 09 10:28:53 crc kubenswrapper[4786]: I1209 10:28:53.750732 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7cgvz" event={"ID":"23890e90-7f55-4892-8239-276ad50e4907","Type":"ContainerStarted","Data":"729d1da0a4d3a4bd4593a2f4f41542061bf02e5a973a3493e11c83c7ec9adc6c"} Dec 09 10:28:54 crc kubenswrapper[4786]: I1209 10:28:54.762503 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7cgvz" event={"ID":"23890e90-7f55-4892-8239-276ad50e4907","Type":"ContainerStarted","Data":"ba22fde1beefffb5f1846c807f3182d811809ae018204b17f1483eb3415033c7"} Dec 09 10:28:55 crc kubenswrapper[4786]: I1209 10:28:55.774693 4786 generic.go:334] "Generic (PLEG): container finished" podID="23890e90-7f55-4892-8239-276ad50e4907" containerID="ba22fde1beefffb5f1846c807f3182d811809ae018204b17f1483eb3415033c7" exitCode=0 Dec 09 10:28:55 crc kubenswrapper[4786]: I1209 10:28:55.774743 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7cgvz" 
event={"ID":"23890e90-7f55-4892-8239-276ad50e4907","Type":"ContainerDied","Data":"ba22fde1beefffb5f1846c807f3182d811809ae018204b17f1483eb3415033c7"} Dec 09 10:28:56 crc kubenswrapper[4786]: I1209 10:28:56.787854 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7cgvz" event={"ID":"23890e90-7f55-4892-8239-276ad50e4907","Type":"ContainerStarted","Data":"277e0b1319830d20a5dd697aee05b565237af9c1e99a858b80315759765e1427"} Dec 09 10:28:56 crc kubenswrapper[4786]: I1209 10:28:56.815649 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7cgvz" podStartSLOduration=2.371457312 podStartE2EDuration="4.815630496s" podCreationTimestamp="2025-12-09 10:28:52 +0000 UTC" firstStartedPulling="2025-12-09 10:28:53.751476669 +0000 UTC m=+6299.635097905" lastFinishedPulling="2025-12-09 10:28:56.195649863 +0000 UTC m=+6302.079271089" observedRunningTime="2025-12-09 10:28:56.809161698 +0000 UTC m=+6302.692782924" watchObservedRunningTime="2025-12-09 10:28:56.815630496 +0000 UTC m=+6302.699251712" Dec 09 10:29:01 crc kubenswrapper[4786]: I1209 10:29:01.188442 4786 scope.go:117] "RemoveContainer" containerID="20ecfe54f5ccfb14285d4fec242fde22ae60896664f5a23bdfa24a734964b062" Dec 09 10:29:01 crc kubenswrapper[4786]: I1209 10:29:01.842568 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" event={"ID":"60c502d4-7f9e-4d39-a197-fa70dc4a56d1","Type":"ContainerStarted","Data":"e017fad71b8ce5488d1f14545c2c965b6f7d76312d52fe6cfdbc755f86bd6aa2"} Dec 09 10:29:02 crc kubenswrapper[4786]: I1209 10:29:02.697535 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7cgvz" Dec 09 10:29:02 crc kubenswrapper[4786]: I1209 10:29:02.698047 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7cgvz" 
Dec 09 10:29:02 crc kubenswrapper[4786]: I1209 10:29:02.743930 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7cgvz" Dec 09 10:29:02 crc kubenswrapper[4786]: I1209 10:29:02.913758 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7cgvz" Dec 09 10:29:02 crc kubenswrapper[4786]: I1209 10:29:02.986670 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7cgvz"] Dec 09 10:29:04 crc kubenswrapper[4786]: I1209 10:29:04.877026 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7cgvz" podUID="23890e90-7f55-4892-8239-276ad50e4907" containerName="registry-server" containerID="cri-o://277e0b1319830d20a5dd697aee05b565237af9c1e99a858b80315759765e1427" gracePeriod=2 Dec 09 10:29:05 crc kubenswrapper[4786]: I1209 10:29:05.408313 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7cgvz" Dec 09 10:29:05 crc kubenswrapper[4786]: I1209 10:29:05.613418 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23890e90-7f55-4892-8239-276ad50e4907-catalog-content\") pod \"23890e90-7f55-4892-8239-276ad50e4907\" (UID: \"23890e90-7f55-4892-8239-276ad50e4907\") " Dec 09 10:29:05 crc kubenswrapper[4786]: I1209 10:29:05.613600 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23890e90-7f55-4892-8239-276ad50e4907-utilities\") pod \"23890e90-7f55-4892-8239-276ad50e4907\" (UID: \"23890e90-7f55-4892-8239-276ad50e4907\") " Dec 09 10:29:05 crc kubenswrapper[4786]: I1209 10:29:05.613675 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcmf9\" (UniqueName: \"kubernetes.io/projected/23890e90-7f55-4892-8239-276ad50e4907-kube-api-access-wcmf9\") pod \"23890e90-7f55-4892-8239-276ad50e4907\" (UID: \"23890e90-7f55-4892-8239-276ad50e4907\") " Dec 09 10:29:05 crc kubenswrapper[4786]: I1209 10:29:05.614594 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23890e90-7f55-4892-8239-276ad50e4907-utilities" (OuterVolumeSpecName: "utilities") pod "23890e90-7f55-4892-8239-276ad50e4907" (UID: "23890e90-7f55-4892-8239-276ad50e4907"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 10:29:05 crc kubenswrapper[4786]: I1209 10:29:05.628670 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23890e90-7f55-4892-8239-276ad50e4907-kube-api-access-wcmf9" (OuterVolumeSpecName: "kube-api-access-wcmf9") pod "23890e90-7f55-4892-8239-276ad50e4907" (UID: "23890e90-7f55-4892-8239-276ad50e4907"). InnerVolumeSpecName "kube-api-access-wcmf9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 10:29:05 crc kubenswrapper[4786]: I1209 10:29:05.635458 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23890e90-7f55-4892-8239-276ad50e4907-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "23890e90-7f55-4892-8239-276ad50e4907" (UID: "23890e90-7f55-4892-8239-276ad50e4907"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 10:29:05 crc kubenswrapper[4786]: I1209 10:29:05.715551 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23890e90-7f55-4892-8239-276ad50e4907-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 10:29:05 crc kubenswrapper[4786]: I1209 10:29:05.715589 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcmf9\" (UniqueName: \"kubernetes.io/projected/23890e90-7f55-4892-8239-276ad50e4907-kube-api-access-wcmf9\") on node \"crc\" DevicePath \"\"" Dec 09 10:29:05 crc kubenswrapper[4786]: I1209 10:29:05.715600 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23890e90-7f55-4892-8239-276ad50e4907-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 10:29:05 crc kubenswrapper[4786]: I1209 10:29:05.887581 4786 generic.go:334] "Generic (PLEG): container finished" podID="23890e90-7f55-4892-8239-276ad50e4907" containerID="277e0b1319830d20a5dd697aee05b565237af9c1e99a858b80315759765e1427" exitCode=0 Dec 09 10:29:05 crc kubenswrapper[4786]: I1209 10:29:05.887671 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7cgvz" event={"ID":"23890e90-7f55-4892-8239-276ad50e4907","Type":"ContainerDied","Data":"277e0b1319830d20a5dd697aee05b565237af9c1e99a858b80315759765e1427"} Dec 09 10:29:05 crc kubenswrapper[4786]: I1209 10:29:05.887727 4786 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-7cgvz" event={"ID":"23890e90-7f55-4892-8239-276ad50e4907","Type":"ContainerDied","Data":"729d1da0a4d3a4bd4593a2f4f41542061bf02e5a973a3493e11c83c7ec9adc6c"} Dec 09 10:29:05 crc kubenswrapper[4786]: I1209 10:29:05.887751 4786 scope.go:117] "RemoveContainer" containerID="277e0b1319830d20a5dd697aee05b565237af9c1e99a858b80315759765e1427" Dec 09 10:29:05 crc kubenswrapper[4786]: I1209 10:29:05.887674 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7cgvz" Dec 09 10:29:05 crc kubenswrapper[4786]: I1209 10:29:05.916833 4786 scope.go:117] "RemoveContainer" containerID="ba22fde1beefffb5f1846c807f3182d811809ae018204b17f1483eb3415033c7" Dec 09 10:29:05 crc kubenswrapper[4786]: I1209 10:29:05.926669 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7cgvz"] Dec 09 10:29:05 crc kubenswrapper[4786]: I1209 10:29:05.935886 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7cgvz"] Dec 09 10:29:05 crc kubenswrapper[4786]: I1209 10:29:05.960243 4786 scope.go:117] "RemoveContainer" containerID="2d3b061d6ea85291190e44a0593ace442aa85592e17569ef78acd12aeeb98d33" Dec 09 10:29:06 crc kubenswrapper[4786]: I1209 10:29:06.012483 4786 scope.go:117] "RemoveContainer" containerID="277e0b1319830d20a5dd697aee05b565237af9c1e99a858b80315759765e1427" Dec 09 10:29:06 crc kubenswrapper[4786]: E1209 10:29:06.012974 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"277e0b1319830d20a5dd697aee05b565237af9c1e99a858b80315759765e1427\": container with ID starting with 277e0b1319830d20a5dd697aee05b565237af9c1e99a858b80315759765e1427 not found: ID does not exist" containerID="277e0b1319830d20a5dd697aee05b565237af9c1e99a858b80315759765e1427" Dec 09 10:29:06 crc kubenswrapper[4786]: I1209 10:29:06.013084 4786 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"277e0b1319830d20a5dd697aee05b565237af9c1e99a858b80315759765e1427"} err="failed to get container status \"277e0b1319830d20a5dd697aee05b565237af9c1e99a858b80315759765e1427\": rpc error: code = NotFound desc = could not find container \"277e0b1319830d20a5dd697aee05b565237af9c1e99a858b80315759765e1427\": container with ID starting with 277e0b1319830d20a5dd697aee05b565237af9c1e99a858b80315759765e1427 not found: ID does not exist" Dec 09 10:29:06 crc kubenswrapper[4786]: I1209 10:29:06.013181 4786 scope.go:117] "RemoveContainer" containerID="ba22fde1beefffb5f1846c807f3182d811809ae018204b17f1483eb3415033c7" Dec 09 10:29:06 crc kubenswrapper[4786]: E1209 10:29:06.014076 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba22fde1beefffb5f1846c807f3182d811809ae018204b17f1483eb3415033c7\": container with ID starting with ba22fde1beefffb5f1846c807f3182d811809ae018204b17f1483eb3415033c7 not found: ID does not exist" containerID="ba22fde1beefffb5f1846c807f3182d811809ae018204b17f1483eb3415033c7" Dec 09 10:29:06 crc kubenswrapper[4786]: I1209 10:29:06.014117 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba22fde1beefffb5f1846c807f3182d811809ae018204b17f1483eb3415033c7"} err="failed to get container status \"ba22fde1beefffb5f1846c807f3182d811809ae018204b17f1483eb3415033c7\": rpc error: code = NotFound desc = could not find container \"ba22fde1beefffb5f1846c807f3182d811809ae018204b17f1483eb3415033c7\": container with ID starting with ba22fde1beefffb5f1846c807f3182d811809ae018204b17f1483eb3415033c7 not found: ID does not exist" Dec 09 10:29:06 crc kubenswrapper[4786]: I1209 10:29:06.014149 4786 scope.go:117] "RemoveContainer" containerID="2d3b061d6ea85291190e44a0593ace442aa85592e17569ef78acd12aeeb98d33" Dec 09 10:29:06 crc kubenswrapper[4786]: E1209 
10:29:06.014549 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d3b061d6ea85291190e44a0593ace442aa85592e17569ef78acd12aeeb98d33\": container with ID starting with 2d3b061d6ea85291190e44a0593ace442aa85592e17569ef78acd12aeeb98d33 not found: ID does not exist" containerID="2d3b061d6ea85291190e44a0593ace442aa85592e17569ef78acd12aeeb98d33" Dec 09 10:29:06 crc kubenswrapper[4786]: I1209 10:29:06.014631 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d3b061d6ea85291190e44a0593ace442aa85592e17569ef78acd12aeeb98d33"} err="failed to get container status \"2d3b061d6ea85291190e44a0593ace442aa85592e17569ef78acd12aeeb98d33\": rpc error: code = NotFound desc = could not find container \"2d3b061d6ea85291190e44a0593ace442aa85592e17569ef78acd12aeeb98d33\": container with ID starting with 2d3b061d6ea85291190e44a0593ace442aa85592e17569ef78acd12aeeb98d33 not found: ID does not exist" Dec 09 10:29:07 crc kubenswrapper[4786]: I1209 10:29:07.201861 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23890e90-7f55-4892-8239-276ad50e4907" path="/var/lib/kubelet/pods/23890e90-7f55-4892-8239-276ad50e4907/volumes" Dec 09 10:29:30 crc kubenswrapper[4786]: I1209 10:29:30.438217 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-69744ddc66-fp6bq_be7cb0a0-e8ca-4bb0-8be6-831d56ec8a53/barbican-api/0.log" Dec 09 10:29:30 crc kubenswrapper[4786]: I1209 10:29:30.585758 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-fc48d579d-jv5m2_59cf76dd-ccdd-4aff-b6ae-a86c532b922c/barbican-keystone-listener/0.log" Dec 09 10:29:30 crc kubenswrapper[4786]: I1209 10:29:30.625318 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-69744ddc66-fp6bq_be7cb0a0-e8ca-4bb0-8be6-831d56ec8a53/barbican-api-log/0.log" Dec 09 10:29:30 crc 
kubenswrapper[4786]: I1209 10:29:30.804940 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-fc48d579d-jv5m2_59cf76dd-ccdd-4aff-b6ae-a86c532b922c/barbican-keystone-listener-log/0.log" Dec 09 10:29:30 crc kubenswrapper[4786]: I1209 10:29:30.857537 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-76847f447c-24chp_515247e9-4278-40fa-b971-adb499dc3ce0/barbican-worker/0.log" Dec 09 10:29:30 crc kubenswrapper[4786]: I1209 10:29:30.963624 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-76847f447c-24chp_515247e9-4278-40fa-b971-adb499dc3ce0/barbican-worker-log/0.log" Dec 09 10:29:31 crc kubenswrapper[4786]: I1209 10:29:31.054604 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-twhbz_ebb0da1f-f03a-4091-9057-2d250dd6bc07/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 10:29:31 crc kubenswrapper[4786]: I1209 10:29:31.283361 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_cffa1372-a308-4145-a2ab-8e320fc5d296/ceilometer-central-agent/0.log" Dec 09 10:29:31 crc kubenswrapper[4786]: I1209 10:29:31.372994 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_cffa1372-a308-4145-a2ab-8e320fc5d296/proxy-httpd/0.log" Dec 09 10:29:31 crc kubenswrapper[4786]: I1209 10:29:31.374892 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_cffa1372-a308-4145-a2ab-8e320fc5d296/ceilometer-notification-agent/0.log" Dec 09 10:29:31 crc kubenswrapper[4786]: I1209 10:29:31.452025 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_cffa1372-a308-4145-a2ab-8e320fc5d296/sg-core/0.log" Dec 09 10:29:31 crc kubenswrapper[4786]: I1209 10:29:31.625782 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-api-0_bb8b460d-7b22-4853-b592-ea61d203e5c1/cinder-api-log/0.log" Dec 09 10:29:31 crc kubenswrapper[4786]: I1209 10:29:31.937935 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_e2b22ca8-e985-404e-af49-d7328d2d3017/probe/0.log" Dec 09 10:29:32 crc kubenswrapper[4786]: I1209 10:29:32.255728 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_0afc7d13-3b0f-4919-ab29-4d328c815a8a/cinder-scheduler/0.log" Dec 09 10:29:32 crc kubenswrapper[4786]: I1209 10:29:32.315142 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_e2b22ca8-e985-404e-af49-d7328d2d3017/cinder-backup/0.log" Dec 09 10:29:32 crc kubenswrapper[4786]: I1209 10:29:32.336877 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_0afc7d13-3b0f-4919-ab29-4d328c815a8a/probe/0.log" Dec 09 10:29:32 crc kubenswrapper[4786]: I1209 10:29:32.426222 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_bb8b460d-7b22-4853-b592-ea61d203e5c1/cinder-api/0.log" Dec 09 10:29:32 crc kubenswrapper[4786]: I1209 10:29:32.609392 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_fc7497d0-e84b-4a17-8d33-b63bf384eee8/probe/0.log" Dec 09 10:29:32 crc kubenswrapper[4786]: I1209 10:29:32.741765 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-0_fc7497d0-e84b-4a17-8d33-b63bf384eee8/cinder-volume/0.log" Dec 09 10:29:32 crc kubenswrapper[4786]: I1209 10:29:32.897100 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df/probe/0.log" Dec 09 10:29:32 crc kubenswrapper[4786]: I1209 10:29:32.937830 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-tx29k_e2ad540b-313c-4600-bf54-c14c9a6a2969/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 10:29:33 crc kubenswrapper[4786]: I1209 10:29:33.007248 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-nfs-2-0_24cd21a8-4fa5-4a83-ab5c-2fdc0fea37df/cinder-volume/0.log" Dec 09 10:29:33 crc kubenswrapper[4786]: I1209 10:29:33.168123 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-cks8r_e084b124-3f74-48a9-a0e4-6c9bea0d7875/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 10:29:33 crc kubenswrapper[4786]: I1209 10:29:33.279888 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69b5dcdcbf-2dgn9_973019e1-5fa9-49b7-b291-fdd553108517/init/0.log" Dec 09 10:29:33 crc kubenswrapper[4786]: I1209 10:29:33.452700 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69b5dcdcbf-2dgn9_973019e1-5fa9-49b7-b291-fdd553108517/init/0.log" Dec 09 10:29:33 crc kubenswrapper[4786]: I1209 10:29:33.602713 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-kn7qm_c31240e0-f612-4759-b933-3c2d89a10da3/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 10:29:33 crc kubenswrapper[4786]: I1209 10:29:33.665380 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-69b5dcdcbf-2dgn9_973019e1-5fa9-49b7-b291-fdd553108517/dnsmasq-dns/0.log" Dec 09 10:29:33 crc kubenswrapper[4786]: I1209 10:29:33.795108 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_c137f18e-fc1e-42ac-a96b-c990c55664f7/glance-log/0.log" Dec 09 10:29:33 crc kubenswrapper[4786]: I1209 10:29:33.805868 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_glance-default-external-api-0_c137f18e-fc1e-42ac-a96b-c990c55664f7/glance-httpd/0.log" Dec 09 10:29:34 crc kubenswrapper[4786]: I1209 10:29:34.130938 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_c7759b2f-fa10-4c57-845a-773289198d2e/glance-httpd/0.log" Dec 09 10:29:34 crc kubenswrapper[4786]: I1209 10:29:34.163716 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_c7759b2f-fa10-4c57-845a-773289198d2e/glance-log/0.log" Dec 09 10:29:34 crc kubenswrapper[4786]: I1209 10:29:34.351457 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-655866bfb6-4l6wv_52a5cc59-1e73-4e04-ba05-80f0c364b351/horizon/0.log" Dec 09 10:29:34 crc kubenswrapper[4786]: I1209 10:29:34.489260 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-qxpl2_a6d7c4cb-f4bc-49ef-b169-e662bd85bb4e/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 10:29:34 crc kubenswrapper[4786]: I1209 10:29:34.706652 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-bnlh7_d67c7c4e-faa8-427e-953b-829c4e277994/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 10:29:34 crc kubenswrapper[4786]: I1209 10:29:34.975791 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29421241-g22g6_16354109-b784-40f5-b196-1f1972f99264/keystone-cron/0.log" Dec 09 10:29:35 crc kubenswrapper[4786]: I1209 10:29:35.245348 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-655866bfb6-4l6wv_52a5cc59-1e73-4e04-ba05-80f0c364b351/horizon-log/0.log" Dec 09 10:29:35 crc kubenswrapper[4786]: I1209 10:29:35.332911 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_kube-state-metrics-0_e176f4e3-203b-4784-be15-d5e306723d08/kube-state-metrics/0.log" Dec 09 10:29:35 crc kubenswrapper[4786]: I1209 10:29:35.377721 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-hslht_e0e78e74-699b-442e-a5bd-6c598b2e0fb4/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 10:29:35 crc kubenswrapper[4786]: I1209 10:29:35.503912 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-bc97f477f-xr7xf_078ae2f6-b658-48a8-b4c2-cff5f3847bd3/keystone-api/0.log" Dec 09 10:29:35 crc kubenswrapper[4786]: I1209 10:29:35.892248 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-cb6wn_17753443-b80a-43e1-9256-b7c0f392dad5/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 10:29:35 crc kubenswrapper[4786]: I1209 10:29:35.950286 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5cb696f46f-55kzl_d41935c7-99f8-4d52-b0f4-691563bea9ee/neutron-api/0.log" Dec 09 10:29:35 crc kubenswrapper[4786]: I1209 10:29:35.994587 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5cb696f46f-55kzl_d41935c7-99f8-4d52-b0f4-691563bea9ee/neutron-httpd/0.log" Dec 09 10:29:36 crc kubenswrapper[4786]: I1209 10:29:36.763142 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_93a4b0d1-a277-448d-bab3-03c7131f23bf/nova-cell0-conductor-conductor/0.log" Dec 09 10:29:37 crc kubenswrapper[4786]: I1209 10:29:37.033862 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_8e4f1885-8b7d-4c02-9a83-22eb4aa3c7f4/nova-cell1-conductor-conductor/0.log" Dec 09 10:29:37 crc kubenswrapper[4786]: I1209 10:29:37.403815 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_nova-cell1-novncproxy-0_349ccf59-f627-4673-84e7-215fc9d15e27/nova-cell1-novncproxy-novncproxy/0.log" Dec 09 10:29:37 crc kubenswrapper[4786]: I1209 10:29:37.646342 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-lnqtd_f1d1af6a-883d-4d29-8d4d-b477e99c2df5/nova-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 10:29:37 crc kubenswrapper[4786]: I1209 10:29:37.665218 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_052f7fa7-4f28-421a-a1a4-d262f5d8c2de/nova-api-log/0.log" Dec 09 10:29:38 crc kubenswrapper[4786]: I1209 10:29:38.039607 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_8a5c44e4-7244-46f8-983b-de6cd923bd74/nova-metadata-log/0.log" Dec 09 10:29:38 crc kubenswrapper[4786]: I1209 10:29:38.358444 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_052f7fa7-4f28-421a-a1a4-d262f5d8c2de/nova-api-api/0.log" Dec 09 10:29:38 crc kubenswrapper[4786]: I1209 10:29:38.574251 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_8066cc20-76cd-4a47-a662-fb77cd5cbe3b/mysql-bootstrap/0.log" Dec 09 10:29:38 crc kubenswrapper[4786]: I1209 10:29:38.742935 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_8066cc20-76cd-4a47-a662-fb77cd5cbe3b/mysql-bootstrap/0.log" Dec 09 10:29:38 crc kubenswrapper[4786]: I1209 10:29:38.742954 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_3c7e4255-6f20-4911-b8d9-862fb7b801da/nova-scheduler-scheduler/0.log" Dec 09 10:29:38 crc kubenswrapper[4786]: I1209 10:29:38.841066 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_8066cc20-76cd-4a47-a662-fb77cd5cbe3b/galera/0.log" Dec 09 10:29:38 crc kubenswrapper[4786]: I1209 10:29:38.984939 4786 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_openstack-galera-0_673b3525-c496-4268-b9f9-c37f5175efdc/mysql-bootstrap/0.log" Dec 09 10:29:39 crc kubenswrapper[4786]: I1209 10:29:39.380012 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_673b3525-c496-4268-b9f9-c37f5175efdc/mysql-bootstrap/0.log" Dec 09 10:29:39 crc kubenswrapper[4786]: I1209 10:29:39.386832 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_673b3525-c496-4268-b9f9-c37f5175efdc/galera/0.log" Dec 09 10:29:39 crc kubenswrapper[4786]: I1209 10:29:39.548029 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_2423b332-8b9b-4a26-996b-582194eca3b7/openstackclient/0.log" Dec 09 10:29:39 crc kubenswrapper[4786]: I1209 10:29:39.662627 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-s6vc4_283c4e6d-aae7-4c99-97dd-9da311e7efd3/openstack-network-exporter/0.log" Dec 09 10:29:39 crc kubenswrapper[4786]: I1209 10:29:39.892211 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-lqv95_df80e8f4-2c7e-44a0-b2f4-3c0b652c6fb0/ovsdb-server-init/0.log" Dec 09 10:29:40 crc kubenswrapper[4786]: I1209 10:29:40.074421 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-lqv95_df80e8f4-2c7e-44a0-b2f4-3c0b652c6fb0/ovsdb-server-init/0.log" Dec 09 10:29:40 crc kubenswrapper[4786]: I1209 10:29:40.126498 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-lqv95_df80e8f4-2c7e-44a0-b2f4-3c0b652c6fb0/ovsdb-server/0.log" Dec 09 10:29:40 crc kubenswrapper[4786]: I1209 10:29:40.329363 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-vv7k4_7cad500b-e392-4774-a524-02587da67379/ovn-controller/0.log" Dec 09 10:29:40 crc kubenswrapper[4786]: I1209 10:29:40.574612 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-lqv95_df80e8f4-2c7e-44a0-b2f4-3c0b652c6fb0/ovs-vswitchd/0.log" Dec 09 10:29:40 crc kubenswrapper[4786]: I1209 10:29:40.640487 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-jhml4_38e0ab2d-0650-44b5-bc00-adeb40608783/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 10:29:40 crc kubenswrapper[4786]: I1209 10:29:40.861724 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_8a5c44e4-7244-46f8-983b-de6cd923bd74/nova-metadata-metadata/0.log" Dec 09 10:29:40 crc kubenswrapper[4786]: I1209 10:29:40.884995 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_f79a7868-1d59-4d6f-ac51-528c634a9b4f/openstack-network-exporter/0.log" Dec 09 10:29:40 crc kubenswrapper[4786]: I1209 10:29:40.889124 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_f79a7868-1d59-4d6f-ac51-528c634a9b4f/ovn-northd/0.log" Dec 09 10:29:41 crc kubenswrapper[4786]: I1209 10:29:41.097285 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_89d6bade-9172-4b73-8879-9f23d0834b93/ovsdbserver-nb/0.log" Dec 09 10:29:41 crc kubenswrapper[4786]: I1209 10:29:41.126079 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_89d6bade-9172-4b73-8879-9f23d0834b93/openstack-network-exporter/0.log" Dec 09 10:29:41 crc kubenswrapper[4786]: I1209 10:29:41.360795 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_dd1f4f6b-6ed9-46bc-bf88-bd17fcc6069f/ovsdbserver-sb/0.log" Dec 09 10:29:41 crc kubenswrapper[4786]: I1209 10:29:41.463718 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_dd1f4f6b-6ed9-46bc-bf88-bd17fcc6069f/openstack-network-exporter/0.log" Dec 09 10:29:41 crc kubenswrapper[4786]: I1209 10:29:41.726720 4786 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_placement-7ccfbc9bd6-76jl9_2cec6ba1-ef2c-4cf9-881b-fdc57687c17c/placement-api/0.log" Dec 09 10:29:41 crc kubenswrapper[4786]: I1209 10:29:41.742169 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0fec9fc6-5660-4127-95a1-63f6abee883e/init-config-reloader/0.log" Dec 09 10:29:41 crc kubenswrapper[4786]: I1209 10:29:41.918760 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-7ccfbc9bd6-76jl9_2cec6ba1-ef2c-4cf9-881b-fdc57687c17c/placement-log/0.log" Dec 09 10:29:42 crc kubenswrapper[4786]: I1209 10:29:42.045401 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0fec9fc6-5660-4127-95a1-63f6abee883e/init-config-reloader/0.log" Dec 09 10:29:42 crc kubenswrapper[4786]: I1209 10:29:42.096397 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0fec9fc6-5660-4127-95a1-63f6abee883e/prometheus/0.log" Dec 09 10:29:42 crc kubenswrapper[4786]: I1209 10:29:42.112990 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0fec9fc6-5660-4127-95a1-63f6abee883e/config-reloader/0.log" Dec 09 10:29:42 crc kubenswrapper[4786]: I1209 10:29:42.202147 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_0fec9fc6-5660-4127-95a1-63f6abee883e/thanos-sidecar/0.log" Dec 09 10:29:42 crc kubenswrapper[4786]: I1209 10:29:42.328736 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f44944dd-abf7-402f-a3d4-93e17d0a760b/setup-container/0.log" Dec 09 10:29:42 crc kubenswrapper[4786]: I1209 10:29:42.503496 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f44944dd-abf7-402f-a3d4-93e17d0a760b/rabbitmq/0.log" Dec 09 10:29:42 crc kubenswrapper[4786]: I1209 10:29:42.529391 4786 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f44944dd-abf7-402f-a3d4-93e17d0a760b/setup-container/0.log" Dec 09 10:29:42 crc kubenswrapper[4786]: I1209 10:29:42.652026 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_6c5ae8f0-bfa8-4fe2-81c3-289021674179/setup-container/0.log" Dec 09 10:29:42 crc kubenswrapper[4786]: I1209 10:29:42.834210 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_6c5ae8f0-bfa8-4fe2-81c3-289021674179/setup-container/0.log" Dec 09 10:29:42 crc kubenswrapper[4786]: I1209 10:29:42.915478 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_72675e09-5efb-4dc9-bc17-25b93ecf7537/setup-container/0.log" Dec 09 10:29:42 crc kubenswrapper[4786]: I1209 10:29:42.943439 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_6c5ae8f0-bfa8-4fe2-81c3-289021674179/rabbitmq/0.log" Dec 09 10:29:43 crc kubenswrapper[4786]: I1209 10:29:43.150801 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_72675e09-5efb-4dc9-bc17-25b93ecf7537/setup-container/0.log" Dec 09 10:29:43 crc kubenswrapper[4786]: I1209 10:29:43.185487 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_72675e09-5efb-4dc9-bc17-25b93ecf7537/rabbitmq/0.log" Dec 09 10:29:43 crc kubenswrapper[4786]: I1209 10:29:43.288538 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-th8j9_e5211467-fc31-4051-8e46-6b59e77d217b/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 10:29:43 crc kubenswrapper[4786]: I1209 10:29:43.538089 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-hmb7n_eb43b8bf-02ae-4d5d-82f6-3262125035f1/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 10:29:43 crc kubenswrapper[4786]: I1209 10:29:43.647320 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-7gpqm_c88eeae2-e339-4f55-a0ee-c5fa8e611253/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 10:29:43 crc kubenswrapper[4786]: I1209 10:29:43.849896 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-pjlq8_a4285d16-6fd5-45d2-b13c-1ed58d8ca8f1/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 10:29:43 crc kubenswrapper[4786]: I1209 10:29:43.965298 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-swwrk_a1e77ab0-8d5d-421a-97df-bde0fa1abdfe/ssh-known-hosts-edpm-deployment/0.log" Dec 09 10:29:44 crc kubenswrapper[4786]: I1209 10:29:44.300841 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-ph8nr_47aa56da-70ed-4ee4-a83c-35c8116a0ec3/swift-ring-rebalance/0.log" Dec 09 10:29:44 crc kubenswrapper[4786]: I1209 10:29:44.340610 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-55bfbb9895-lchg7_025e29a5-c1a7-46fe-a47d-4b3248fd6320/proxy-server/0.log" Dec 09 10:29:44 crc kubenswrapper[4786]: I1209 10:29:44.371920 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-55bfbb9895-lchg7_025e29a5-c1a7-46fe-a47d-4b3248fd6320/proxy-httpd/0.log" Dec 09 10:29:44 crc kubenswrapper[4786]: I1209 10:29:44.591000 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ed655e06-206a-407f-8651-56e042d74cd1/account-auditor/0.log" Dec 09 10:29:44 crc kubenswrapper[4786]: I1209 10:29:44.675756 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_ed655e06-206a-407f-8651-56e042d74cd1/account-reaper/0.log" Dec 09 10:29:44 crc kubenswrapper[4786]: I1209 10:29:44.790322 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ed655e06-206a-407f-8651-56e042d74cd1/account-replicator/0.log" Dec 09 10:29:44 crc kubenswrapper[4786]: I1209 10:29:44.859393 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ed655e06-206a-407f-8651-56e042d74cd1/container-auditor/0.log" Dec 09 10:29:44 crc kubenswrapper[4786]: I1209 10:29:44.888784 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ed655e06-206a-407f-8651-56e042d74cd1/account-server/0.log" Dec 09 10:29:44 crc kubenswrapper[4786]: I1209 10:29:44.981093 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ed655e06-206a-407f-8651-56e042d74cd1/container-replicator/0.log" Dec 09 10:29:45 crc kubenswrapper[4786]: I1209 10:29:45.023118 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ed655e06-206a-407f-8651-56e042d74cd1/container-server/0.log" Dec 09 10:29:45 crc kubenswrapper[4786]: I1209 10:29:45.086250 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ed655e06-206a-407f-8651-56e042d74cd1/container-updater/0.log" Dec 09 10:29:45 crc kubenswrapper[4786]: I1209 10:29:45.170525 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ed655e06-206a-407f-8651-56e042d74cd1/object-auditor/0.log" Dec 09 10:29:45 crc kubenswrapper[4786]: I1209 10:29:45.271941 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ed655e06-206a-407f-8651-56e042d74cd1/object-expirer/0.log" Dec 09 10:29:45 crc kubenswrapper[4786]: I1209 10:29:45.306566 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_ed655e06-206a-407f-8651-56e042d74cd1/object-replicator/0.log" Dec 09 10:29:45 crc kubenswrapper[4786]: I1209 10:29:45.323843 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ed655e06-206a-407f-8651-56e042d74cd1/object-server/0.log" Dec 09 10:29:45 crc kubenswrapper[4786]: I1209 10:29:45.387603 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ed655e06-206a-407f-8651-56e042d74cd1/object-updater/0.log" Dec 09 10:29:45 crc kubenswrapper[4786]: I1209 10:29:45.502363 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ed655e06-206a-407f-8651-56e042d74cd1/rsync/0.log" Dec 09 10:29:45 crc kubenswrapper[4786]: I1209 10:29:45.549973 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_ed655e06-206a-407f-8651-56e042d74cd1/swift-recon-cron/0.log" Dec 09 10:29:45 crc kubenswrapper[4786]: I1209 10:29:45.650882 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-s9qhv_b1717b33-b022-49ed-94fc-2160247ac3bd/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 10:29:45 crc kubenswrapper[4786]: I1209 10:29:45.800927 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_0112bf44-5116-4b72-a860-4fc091e5dc27/tempest-tests-tempest-tests-runner/0.log" Dec 09 10:29:45 crc kubenswrapper[4786]: I1209 10:29:45.923734 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_0ffa3268-7c4f-4069-bc33-50db10708dce/test-operator-logs-container/0.log" Dec 09 10:29:46 crc kubenswrapper[4786]: I1209 10:29:46.067278 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-kg2wg_32d7a244-874b-45f3-844a-402e668af86d/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 09 10:29:46 crc kubenswrapper[4786]: I1209 10:29:46.932652 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-applier-0_6c233b45-5e1c-4c8c-a3ba-d71a89838114/watcher-applier/0.log" Dec 09 10:29:47 crc kubenswrapper[4786]: I1209 10:29:47.758061 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_0d0cae5a-5b07-4046-9640-9734ea4e44c4/watcher-api-log/0.log" Dec 09 10:29:48 crc kubenswrapper[4786]: I1209 10:29:48.174111 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_77b48dc5-f201-422e-9983-368555119d75/memcached/0.log" Dec 09 10:29:50 crc kubenswrapper[4786]: I1209 10:29:50.442970 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-decision-engine-0_61ffd12d-d64c-461b-a3c1-271d523a8de6/watcher-decision-engine/0.log" Dec 09 10:29:51 crc kubenswrapper[4786]: I1209 10:29:51.579848 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_0d0cae5a-5b07-4046-9640-9734ea4e44c4/watcher-api/0.log" Dec 09 10:30:00 crc kubenswrapper[4786]: I1209 10:30:00.162641 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421270-vjqgq"] Dec 09 10:30:00 crc kubenswrapper[4786]: E1209 10:30:00.163739 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23890e90-7f55-4892-8239-276ad50e4907" containerName="extract-utilities" Dec 09 10:30:00 crc kubenswrapper[4786]: I1209 10:30:00.163754 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="23890e90-7f55-4892-8239-276ad50e4907" containerName="extract-utilities" Dec 09 10:30:00 crc kubenswrapper[4786]: E1209 10:30:00.163777 4786 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="23890e90-7f55-4892-8239-276ad50e4907" containerName="extract-content" Dec 09 10:30:00 crc kubenswrapper[4786]: I1209 10:30:00.163784 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="23890e90-7f55-4892-8239-276ad50e4907" containerName="extract-content" Dec 09 10:30:00 crc kubenswrapper[4786]: E1209 10:30:00.163808 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23890e90-7f55-4892-8239-276ad50e4907" containerName="registry-server" Dec 09 10:30:00 crc kubenswrapper[4786]: I1209 10:30:00.163818 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="23890e90-7f55-4892-8239-276ad50e4907" containerName="registry-server" Dec 09 10:30:00 crc kubenswrapper[4786]: I1209 10:30:00.164026 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="23890e90-7f55-4892-8239-276ad50e4907" containerName="registry-server" Dec 09 10:30:00 crc kubenswrapper[4786]: I1209 10:30:00.164845 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421270-vjqgq" Dec 09 10:30:00 crc kubenswrapper[4786]: I1209 10:30:00.168251 4786 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 09 10:30:00 crc kubenswrapper[4786]: I1209 10:30:00.168509 4786 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 09 10:30:00 crc kubenswrapper[4786]: I1209 10:30:00.177602 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421270-vjqgq"] Dec 09 10:30:00 crc kubenswrapper[4786]: I1209 10:30:00.347147 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x66d8\" (UniqueName: \"kubernetes.io/projected/2e49c3b2-af8f-48a2-a90a-ab8237894593-kube-api-access-x66d8\") pod 
\"collect-profiles-29421270-vjqgq\" (UID: \"2e49c3b2-af8f-48a2-a90a-ab8237894593\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421270-vjqgq" Dec 09 10:30:00 crc kubenswrapper[4786]: I1209 10:30:00.347276 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e49c3b2-af8f-48a2-a90a-ab8237894593-config-volume\") pod \"collect-profiles-29421270-vjqgq\" (UID: \"2e49c3b2-af8f-48a2-a90a-ab8237894593\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421270-vjqgq" Dec 09 10:30:00 crc kubenswrapper[4786]: I1209 10:30:00.347376 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e49c3b2-af8f-48a2-a90a-ab8237894593-secret-volume\") pod \"collect-profiles-29421270-vjqgq\" (UID: \"2e49c3b2-af8f-48a2-a90a-ab8237894593\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421270-vjqgq" Dec 09 10:30:00 crc kubenswrapper[4786]: I1209 10:30:00.449878 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e49c3b2-af8f-48a2-a90a-ab8237894593-config-volume\") pod \"collect-profiles-29421270-vjqgq\" (UID: \"2e49c3b2-af8f-48a2-a90a-ab8237894593\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421270-vjqgq" Dec 09 10:30:00 crc kubenswrapper[4786]: I1209 10:30:00.450182 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e49c3b2-af8f-48a2-a90a-ab8237894593-secret-volume\") pod \"collect-profiles-29421270-vjqgq\" (UID: \"2e49c3b2-af8f-48a2-a90a-ab8237894593\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421270-vjqgq" Dec 09 10:30:00 crc kubenswrapper[4786]: I1209 10:30:00.450378 4786 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-x66d8\" (UniqueName: \"kubernetes.io/projected/2e49c3b2-af8f-48a2-a90a-ab8237894593-kube-api-access-x66d8\") pod \"collect-profiles-29421270-vjqgq\" (UID: \"2e49c3b2-af8f-48a2-a90a-ab8237894593\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421270-vjqgq" Dec 09 10:30:00 crc kubenswrapper[4786]: I1209 10:30:00.450989 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e49c3b2-af8f-48a2-a90a-ab8237894593-config-volume\") pod \"collect-profiles-29421270-vjqgq\" (UID: \"2e49c3b2-af8f-48a2-a90a-ab8237894593\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421270-vjqgq" Dec 09 10:30:00 crc kubenswrapper[4786]: I1209 10:30:00.456585 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e49c3b2-af8f-48a2-a90a-ab8237894593-secret-volume\") pod \"collect-profiles-29421270-vjqgq\" (UID: \"2e49c3b2-af8f-48a2-a90a-ab8237894593\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421270-vjqgq" Dec 09 10:30:00 crc kubenswrapper[4786]: I1209 10:30:00.468168 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x66d8\" (UniqueName: \"kubernetes.io/projected/2e49c3b2-af8f-48a2-a90a-ab8237894593-kube-api-access-x66d8\") pod \"collect-profiles-29421270-vjqgq\" (UID: \"2e49c3b2-af8f-48a2-a90a-ab8237894593\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29421270-vjqgq" Dec 09 10:30:00 crc kubenswrapper[4786]: I1209 10:30:00.494387 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421270-vjqgq" Dec 09 10:30:01 crc kubenswrapper[4786]: I1209 10:30:01.012252 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421270-vjqgq"] Dec 09 10:30:01 crc kubenswrapper[4786]: I1209 10:30:01.565206 4786 generic.go:334] "Generic (PLEG): container finished" podID="2e49c3b2-af8f-48a2-a90a-ab8237894593" containerID="8131f9909bad2f93cc3b492e3ae145f2621cc3e60ad26545c8b6f1ea797ffa03" exitCode=0 Dec 09 10:30:01 crc kubenswrapper[4786]: I1209 10:30:01.565259 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421270-vjqgq" event={"ID":"2e49c3b2-af8f-48a2-a90a-ab8237894593","Type":"ContainerDied","Data":"8131f9909bad2f93cc3b492e3ae145f2621cc3e60ad26545c8b6f1ea797ffa03"} Dec 09 10:30:01 crc kubenswrapper[4786]: I1209 10:30:01.565556 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421270-vjqgq" event={"ID":"2e49c3b2-af8f-48a2-a90a-ab8237894593","Type":"ContainerStarted","Data":"1e5629b2fcdd03c116a180e12d0094d1204fb3169a3d44dcf32a433360632d75"} Dec 09 10:30:03 crc kubenswrapper[4786]: I1209 10:30:03.032231 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421270-vjqgq" Dec 09 10:30:03 crc kubenswrapper[4786]: I1209 10:30:03.206784 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x66d8\" (UniqueName: \"kubernetes.io/projected/2e49c3b2-af8f-48a2-a90a-ab8237894593-kube-api-access-x66d8\") pod \"2e49c3b2-af8f-48a2-a90a-ab8237894593\" (UID: \"2e49c3b2-af8f-48a2-a90a-ab8237894593\") " Dec 09 10:30:03 crc kubenswrapper[4786]: I1209 10:30:03.207070 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e49c3b2-af8f-48a2-a90a-ab8237894593-secret-volume\") pod \"2e49c3b2-af8f-48a2-a90a-ab8237894593\" (UID: \"2e49c3b2-af8f-48a2-a90a-ab8237894593\") " Dec 09 10:30:03 crc kubenswrapper[4786]: I1209 10:30:03.207098 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e49c3b2-af8f-48a2-a90a-ab8237894593-config-volume\") pod \"2e49c3b2-af8f-48a2-a90a-ab8237894593\" (UID: \"2e49c3b2-af8f-48a2-a90a-ab8237894593\") " Dec 09 10:30:03 crc kubenswrapper[4786]: I1209 10:30:03.207967 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e49c3b2-af8f-48a2-a90a-ab8237894593-config-volume" (OuterVolumeSpecName: "config-volume") pod "2e49c3b2-af8f-48a2-a90a-ab8237894593" (UID: "2e49c3b2-af8f-48a2-a90a-ab8237894593"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 09 10:30:03 crc kubenswrapper[4786]: I1209 10:30:03.212856 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e49c3b2-af8f-48a2-a90a-ab8237894593-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2e49c3b2-af8f-48a2-a90a-ab8237894593" (UID: "2e49c3b2-af8f-48a2-a90a-ab8237894593"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 09 10:30:03 crc kubenswrapper[4786]: I1209 10:30:03.217183 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e49c3b2-af8f-48a2-a90a-ab8237894593-kube-api-access-x66d8" (OuterVolumeSpecName: "kube-api-access-x66d8") pod "2e49c3b2-af8f-48a2-a90a-ab8237894593" (UID: "2e49c3b2-af8f-48a2-a90a-ab8237894593"). InnerVolumeSpecName "kube-api-access-x66d8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 10:30:03 crc kubenswrapper[4786]: I1209 10:30:03.311824 4786 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e49c3b2-af8f-48a2-a90a-ab8237894593-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 09 10:30:03 crc kubenswrapper[4786]: I1209 10:30:03.312092 4786 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e49c3b2-af8f-48a2-a90a-ab8237894593-config-volume\") on node \"crc\" DevicePath \"\"" Dec 09 10:30:03 crc kubenswrapper[4786]: I1209 10:30:03.312166 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x66d8\" (UniqueName: \"kubernetes.io/projected/2e49c3b2-af8f-48a2-a90a-ab8237894593-kube-api-access-x66d8\") on node \"crc\" DevicePath \"\"" Dec 09 10:30:03 crc kubenswrapper[4786]: I1209 10:30:03.607179 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29421270-vjqgq" event={"ID":"2e49c3b2-af8f-48a2-a90a-ab8237894593","Type":"ContainerDied","Data":"1e5629b2fcdd03c116a180e12d0094d1204fb3169a3d44dcf32a433360632d75"} Dec 09 10:30:03 crc kubenswrapper[4786]: I1209 10:30:03.607231 4786 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e5629b2fcdd03c116a180e12d0094d1204fb3169a3d44dcf32a433360632d75" Dec 09 10:30:03 crc kubenswrapper[4786]: I1209 10:30:03.607243 4786 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29421270-vjqgq" Dec 09 10:30:04 crc kubenswrapper[4786]: I1209 10:30:04.113765 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421225-zghqp"] Dec 09 10:30:04 crc kubenswrapper[4786]: I1209 10:30:04.124100 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29421225-zghqp"] Dec 09 10:30:05 crc kubenswrapper[4786]: I1209 10:30:05.200036 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ddbd8b34-5d98-4bc2-8fe0-79f41393b234" path="/var/lib/kubelet/pods/ddbd8b34-5d98-4bc2-8fe0-79f41393b234/volumes" Dec 09 10:30:07 crc kubenswrapper[4786]: I1209 10:30:07.059879 4786 scope.go:117] "RemoveContainer" containerID="742dc26ce1d658a64df890d0d3809112de15092e3c69b913afd5b8dee764e83f" Dec 09 10:30:13 crc kubenswrapper[4786]: I1209 10:30:13.540494 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_563e5c6436268da7c348fcfe39963b8cd787fa11692ddc9f2134935157pxwcp_41a3cd42-650d-45c1-9664-5c5de59883ed/util/0.log" Dec 09 10:30:13 crc kubenswrapper[4786]: I1209 10:30:13.683629 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_563e5c6436268da7c348fcfe39963b8cd787fa11692ddc9f2134935157pxwcp_41a3cd42-650d-45c1-9664-5c5de59883ed/util/0.log" Dec 09 10:30:13 crc kubenswrapper[4786]: I1209 10:30:13.718869 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_563e5c6436268da7c348fcfe39963b8cd787fa11692ddc9f2134935157pxwcp_41a3cd42-650d-45c1-9664-5c5de59883ed/pull/0.log" Dec 09 10:30:13 crc kubenswrapper[4786]: I1209 10:30:13.735366 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_563e5c6436268da7c348fcfe39963b8cd787fa11692ddc9f2134935157pxwcp_41a3cd42-650d-45c1-9664-5c5de59883ed/pull/0.log" Dec 09 10:30:13 crc 
kubenswrapper[4786]: I1209 10:30:13.924099 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_563e5c6436268da7c348fcfe39963b8cd787fa11692ddc9f2134935157pxwcp_41a3cd42-650d-45c1-9664-5c5de59883ed/util/0.log" Dec 09 10:30:13 crc kubenswrapper[4786]: I1209 10:30:13.984788 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_563e5c6436268da7c348fcfe39963b8cd787fa11692ddc9f2134935157pxwcp_41a3cd42-650d-45c1-9664-5c5de59883ed/pull/0.log" Dec 09 10:30:13 crc kubenswrapper[4786]: I1209 10:30:13.989303 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_563e5c6436268da7c348fcfe39963b8cd787fa11692ddc9f2134935157pxwcp_41a3cd42-650d-45c1-9664-5c5de59883ed/extract/0.log" Dec 09 10:30:14 crc kubenswrapper[4786]: I1209 10:30:14.131665 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-5bfbbb859d-pldv5_0ebdf904-eeaa-4d7b-8f51-10e721a91538/kube-rbac-proxy/0.log" Dec 09 10:30:14 crc kubenswrapper[4786]: I1209 10:30:14.195948 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-748967c98-w7gzc_405fe0da-3e24-42cd-b73d-9d0cfe700614/kube-rbac-proxy/0.log" Dec 09 10:30:14 crc kubenswrapper[4786]: I1209 10:30:14.253611 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-5bfbbb859d-pldv5_0ebdf904-eeaa-4d7b-8f51-10e721a91538/manager/0.log" Dec 09 10:30:14 crc kubenswrapper[4786]: I1209 10:30:14.389405 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-748967c98-w7gzc_405fe0da-3e24-42cd-b73d-9d0cfe700614/manager/0.log" Dec 09 10:30:14 crc kubenswrapper[4786]: I1209 10:30:14.467400 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6788cc6d75-6dlzn_2bd616d0-3367-48bb-94a5-a22302102b89/manager/0.log" Dec 09 10:30:14 crc kubenswrapper[4786]: I1209 10:30:14.525576 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6788cc6d75-6dlzn_2bd616d0-3367-48bb-94a5-a22302102b89/kube-rbac-proxy/0.log" Dec 09 10:30:14 crc kubenswrapper[4786]: I1209 10:30:14.708457 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-85fbd69fcd-q9mt5_ce1c8ba3-ded1-4b8e-81a6-fcbaa81de736/kube-rbac-proxy/0.log" Dec 09 10:30:14 crc kubenswrapper[4786]: I1209 10:30:14.768980 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-85fbd69fcd-q9mt5_ce1c8ba3-ded1-4b8e-81a6-fcbaa81de736/manager/0.log" Dec 09 10:30:14 crc kubenswrapper[4786]: I1209 10:30:14.830083 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-698d6fd7d6-mft9w_f52a27b2-d045-4a4b-8fe5-0160004d9a5f/kube-rbac-proxy/0.log" Dec 09 10:30:14 crc kubenswrapper[4786]: I1209 10:30:14.906012 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-698d6fd7d6-mft9w_f52a27b2-d045-4a4b-8fe5-0160004d9a5f/manager/0.log" Dec 09 10:30:14 crc kubenswrapper[4786]: I1209 10:30:14.955823 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-7d5d9fd47f-rphwz_2ebe7b51-643e-4700-bf2f-cbe9546ae563/kube-rbac-proxy/0.log" Dec 09 10:30:15 crc kubenswrapper[4786]: I1209 10:30:15.012907 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-7d5d9fd47f-rphwz_2ebe7b51-643e-4700-bf2f-cbe9546ae563/manager/0.log" Dec 09 10:30:15 crc kubenswrapper[4786]: I1209 10:30:15.110973 
4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-6c55d8d69b-w52pz_d658b716-de31-47c0-a352-28f6260b0144/kube-rbac-proxy/0.log" Dec 09 10:30:15 crc kubenswrapper[4786]: I1209 10:30:15.271790 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-54485f899-kk584_fd1844c2-cd01-475a-b2fa-e49c9223b7b4/kube-rbac-proxy/0.log" Dec 09 10:30:15 crc kubenswrapper[4786]: I1209 10:30:15.338542 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-6c55d8d69b-w52pz_d658b716-de31-47c0-a352-28f6260b0144/manager/0.log" Dec 09 10:30:15 crc kubenswrapper[4786]: I1209 10:30:15.349436 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-54485f899-kk584_fd1844c2-cd01-475a-b2fa-e49c9223b7b4/manager/0.log" Dec 09 10:30:15 crc kubenswrapper[4786]: I1209 10:30:15.459583 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-79cc9d59f5-lfmfw_9b7f6902-b444-48d4-b2d2-7342e62c8811/kube-rbac-proxy/0.log" Dec 09 10:30:15 crc kubenswrapper[4786]: I1209 10:30:15.634823 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-79cc9d59f5-lfmfw_9b7f6902-b444-48d4-b2d2-7342e62c8811/manager/0.log" Dec 09 10:30:15 crc kubenswrapper[4786]: I1209 10:30:15.731974 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5cbc8c7f96-9qxcn_12b96437-95ee-4267-8eb2-569b9a93ef8d/kube-rbac-proxy/0.log" Dec 09 10:30:15 crc kubenswrapper[4786]: I1209 10:30:15.734965 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5cbc8c7f96-9qxcn_12b96437-95ee-4267-8eb2-569b9a93ef8d/manager/0.log" Dec 09 10:30:15 crc 
kubenswrapper[4786]: I1209 10:30:15.843384 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-64d7c556cd-qqwdd_061bb0fd-451d-4d15-b979-a6ea9b833fb1/kube-rbac-proxy/0.log" Dec 09 10:30:15 crc kubenswrapper[4786]: I1209 10:30:15.956128 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-64d7c556cd-qqwdd_061bb0fd-451d-4d15-b979-a6ea9b833fb1/manager/0.log" Dec 09 10:30:16 crc kubenswrapper[4786]: I1209 10:30:16.010241 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-58879495c-glr8m_62fae6d8-3c6a-403c-9cc6-463e41a0bbe7/kube-rbac-proxy/0.log" Dec 09 10:30:16 crc kubenswrapper[4786]: I1209 10:30:16.095595 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-58879495c-glr8m_62fae6d8-3c6a-403c-9cc6-463e41a0bbe7/manager/0.log" Dec 09 10:30:16 crc kubenswrapper[4786]: I1209 10:30:16.159830 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79d658b66d-rrlrt_e551e183-3965-40da-88e6-bbbcd6e3cbe5/kube-rbac-proxy/0.log" Dec 09 10:30:16 crc kubenswrapper[4786]: I1209 10:30:16.305497 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-79d658b66d-rrlrt_e551e183-3965-40da-88e6-bbbcd6e3cbe5/manager/0.log" Dec 09 10:30:16 crc kubenswrapper[4786]: I1209 10:30:16.368285 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-d5fb87cb8-2ql48_c117d831-6ff8-4e04-833a-242c22702cc3/kube-rbac-proxy/0.log" Dec 09 10:30:16 crc kubenswrapper[4786]: I1209 10:30:16.377541 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-d5fb87cb8-2ql48_c117d831-6ff8-4e04-833a-242c22702cc3/manager/0.log" Dec 09 10:30:16 crc kubenswrapper[4786]: I1209 10:30:16.551722 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-77868f484-qkq7f_ac272f60-c2a5-41a3-a48b-e499e7717667/manager/0.log" Dec 09 10:30:16 crc kubenswrapper[4786]: I1209 10:30:16.559949 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-77868f484-qkq7f_ac272f60-c2a5-41a3-a48b-e499e7717667/kube-rbac-proxy/0.log" Dec 09 10:30:16 crc kubenswrapper[4786]: I1209 10:30:16.718545 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-9969bcdf-xb28j_672e5a98-1fd6-4667-9a55-6a84ea13d77c/kube-rbac-proxy/0.log" Dec 09 10:30:16 crc kubenswrapper[4786]: I1209 10:30:16.801055 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-65cff6ddb4-mkfzv_9415679e-cf70-4f02-aaf3-20aa363e9f86/kube-rbac-proxy/0.log" Dec 09 10:30:17 crc kubenswrapper[4786]: I1209 10:30:17.104846 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-p2plv_66100669-25e6-457a-a856-d7f6ee39b124/registry-server/0.log" Dec 09 10:30:17 crc kubenswrapper[4786]: I1209 10:30:17.152235 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-65cff6ddb4-mkfzv_9415679e-cf70-4f02-aaf3-20aa363e9f86/operator/0.log" Dec 09 10:30:17 crc kubenswrapper[4786]: I1209 10:30:17.369954 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-5b67cfc8fb-pwfxf_2bc2193d-47f3-470a-a773-db2124fc8351/kube-rbac-proxy/0.log" Dec 09 10:30:17 crc kubenswrapper[4786]: I1209 
10:30:17.524040 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-5b67cfc8fb-pwfxf_2bc2193d-47f3-470a-a773-db2124fc8351/manager/0.log" Dec 09 10:30:17 crc kubenswrapper[4786]: I1209 10:30:17.597242 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-867d87977b-4mg9x_cbea13a0-662c-4a51-9cfa-a0904713fc0f/kube-rbac-proxy/0.log" Dec 09 10:30:17 crc kubenswrapper[4786]: I1209 10:30:17.663779 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-867d87977b-4mg9x_cbea13a0-662c-4a51-9cfa-a0904713fc0f/manager/0.log" Dec 09 10:30:17 crc kubenswrapper[4786]: I1209 10:30:17.800196 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-5f97d8c699-744dg_2d179ee0-ed61-44f8-80e8-622ee7ed3876/operator/0.log" Dec 09 10:30:17 crc kubenswrapper[4786]: I1209 10:30:17.914036 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-8f6687c44-lfcn5_c0263a18-de54-4c70-9ef7-508d86abed06/kube-rbac-proxy/0.log" Dec 09 10:30:17 crc kubenswrapper[4786]: I1209 10:30:17.952118 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-8f6687c44-lfcn5_c0263a18-de54-4c70-9ef7-508d86abed06/manager/0.log" Dec 09 10:30:18 crc kubenswrapper[4786]: I1209 10:30:18.147504 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-695797c565-h57cz_2db3dcee-6f5b-487e-b425-ec7be9530815/kube-rbac-proxy/0.log" Dec 09 10:30:18 crc kubenswrapper[4786]: I1209 10:30:18.177079 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-9969bcdf-xb28j_672e5a98-1fd6-4667-9a55-6a84ea13d77c/manager/0.log" Dec 09 
10:30:18 crc kubenswrapper[4786]: I1209 10:30:18.330775 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-bb86466d8-m6pzg_7229805c-3f98-437c-a3fe-b4031a2b7fa6/kube-rbac-proxy/0.log" Dec 09 10:30:18 crc kubenswrapper[4786]: I1209 10:30:18.364833 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-695797c565-h57cz_2db3dcee-6f5b-487e-b425-ec7be9530815/manager/0.log" Dec 09 10:30:18 crc kubenswrapper[4786]: I1209 10:30:18.403481 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-bb86466d8-m6pzg_7229805c-3f98-437c-a3fe-b4031a2b7fa6/manager/0.log" Dec 09 10:30:18 crc kubenswrapper[4786]: I1209 10:30:18.479602 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5d7f5df9d6-kwmgc_c12be72a-ac87-4e8f-a061-b68b3f5cb115/kube-rbac-proxy/0.log" Dec 09 10:30:18 crc kubenswrapper[4786]: I1209 10:30:18.569648 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-5d7f5df9d6-kwmgc_c12be72a-ac87-4e8f-a061-b68b3f5cb115/manager/0.log" Dec 09 10:30:18 crc kubenswrapper[4786]: I1209 10:30:18.875760 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-75lpg"] Dec 09 10:30:18 crc kubenswrapper[4786]: E1209 10:30:18.876251 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e49c3b2-af8f-48a2-a90a-ab8237894593" containerName="collect-profiles" Dec 09 10:30:18 crc kubenswrapper[4786]: I1209 10:30:18.876265 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e49c3b2-af8f-48a2-a90a-ab8237894593" containerName="collect-profiles" Dec 09 10:30:18 crc kubenswrapper[4786]: I1209 10:30:18.877538 4786 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2e49c3b2-af8f-48a2-a90a-ab8237894593" containerName="collect-profiles" Dec 09 10:30:18 crc kubenswrapper[4786]: I1209 10:30:18.879315 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-75lpg" Dec 09 10:30:18 crc kubenswrapper[4786]: I1209 10:30:18.893358 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-75lpg"] Dec 09 10:30:18 crc kubenswrapper[4786]: I1209 10:30:18.955760 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9hsj\" (UniqueName: \"kubernetes.io/projected/60810dc2-c919-435d-8f3e-e9ac6495f558-kube-api-access-n9hsj\") pod \"certified-operators-75lpg\" (UID: \"60810dc2-c919-435d-8f3e-e9ac6495f558\") " pod="openshift-marketplace/certified-operators-75lpg" Dec 09 10:30:18 crc kubenswrapper[4786]: I1209 10:30:18.955941 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60810dc2-c919-435d-8f3e-e9ac6495f558-utilities\") pod \"certified-operators-75lpg\" (UID: \"60810dc2-c919-435d-8f3e-e9ac6495f558\") " pod="openshift-marketplace/certified-operators-75lpg" Dec 09 10:30:18 crc kubenswrapper[4786]: I1209 10:30:18.955977 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60810dc2-c919-435d-8f3e-e9ac6495f558-catalog-content\") pod \"certified-operators-75lpg\" (UID: \"60810dc2-c919-435d-8f3e-e9ac6495f558\") " pod="openshift-marketplace/certified-operators-75lpg" Dec 09 10:30:19 crc kubenswrapper[4786]: I1209 10:30:19.057966 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9hsj\" (UniqueName: \"kubernetes.io/projected/60810dc2-c919-435d-8f3e-e9ac6495f558-kube-api-access-n9hsj\") pod \"certified-operators-75lpg\" 
(UID: \"60810dc2-c919-435d-8f3e-e9ac6495f558\") " pod="openshift-marketplace/certified-operators-75lpg" Dec 09 10:30:19 crc kubenswrapper[4786]: I1209 10:30:19.058717 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60810dc2-c919-435d-8f3e-e9ac6495f558-utilities\") pod \"certified-operators-75lpg\" (UID: \"60810dc2-c919-435d-8f3e-e9ac6495f558\") " pod="openshift-marketplace/certified-operators-75lpg" Dec 09 10:30:19 crc kubenswrapper[4786]: I1209 10:30:19.059167 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60810dc2-c919-435d-8f3e-e9ac6495f558-utilities\") pod \"certified-operators-75lpg\" (UID: \"60810dc2-c919-435d-8f3e-e9ac6495f558\") " pod="openshift-marketplace/certified-operators-75lpg" Dec 09 10:30:19 crc kubenswrapper[4786]: I1209 10:30:19.059519 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60810dc2-c919-435d-8f3e-e9ac6495f558-catalog-content\") pod \"certified-operators-75lpg\" (UID: \"60810dc2-c919-435d-8f3e-e9ac6495f558\") " pod="openshift-marketplace/certified-operators-75lpg" Dec 09 10:30:19 crc kubenswrapper[4786]: I1209 10:30:19.059256 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60810dc2-c919-435d-8f3e-e9ac6495f558-catalog-content\") pod \"certified-operators-75lpg\" (UID: \"60810dc2-c919-435d-8f3e-e9ac6495f558\") " pod="openshift-marketplace/certified-operators-75lpg" Dec 09 10:30:19 crc kubenswrapper[4786]: I1209 10:30:19.096447 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9hsj\" (UniqueName: \"kubernetes.io/projected/60810dc2-c919-435d-8f3e-e9ac6495f558-kube-api-access-n9hsj\") pod \"certified-operators-75lpg\" (UID: \"60810dc2-c919-435d-8f3e-e9ac6495f558\") " 
pod="openshift-marketplace/certified-operators-75lpg" Dec 09 10:30:19 crc kubenswrapper[4786]: I1209 10:30:19.270509 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-75lpg" Dec 09 10:30:19 crc kubenswrapper[4786]: I1209 10:30:19.821034 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-75lpg"] Dec 09 10:30:20 crc kubenswrapper[4786]: I1209 10:30:20.833295 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-75lpg" event={"ID":"60810dc2-c919-435d-8f3e-e9ac6495f558","Type":"ContainerDied","Data":"57688160d0b7e6ddd828cab0965c5b89638599db69fa9b7826a63f328ea3fe1c"} Dec 09 10:30:20 crc kubenswrapper[4786]: I1209 10:30:20.834462 4786 generic.go:334] "Generic (PLEG): container finished" podID="60810dc2-c919-435d-8f3e-e9ac6495f558" containerID="57688160d0b7e6ddd828cab0965c5b89638599db69fa9b7826a63f328ea3fe1c" exitCode=0 Dec 09 10:30:20 crc kubenswrapper[4786]: I1209 10:30:20.834510 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-75lpg" event={"ID":"60810dc2-c919-435d-8f3e-e9ac6495f558","Type":"ContainerStarted","Data":"ba337452f1ab2e834661e516e20c6a99629c09d5dd217d5671a89e58c2313681"} Dec 09 10:30:20 crc kubenswrapper[4786]: I1209 10:30:20.835952 4786 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 09 10:30:22 crc kubenswrapper[4786]: I1209 10:30:22.856104 4786 generic.go:334] "Generic (PLEG): container finished" podID="60810dc2-c919-435d-8f3e-e9ac6495f558" containerID="9e4d64a3fa381cc91340eeebc452510d15aeab9937e64c63c719bfb0873b0aa7" exitCode=0 Dec 09 10:30:22 crc kubenswrapper[4786]: I1209 10:30:22.856613 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-75lpg" 
event={"ID":"60810dc2-c919-435d-8f3e-e9ac6495f558","Type":"ContainerDied","Data":"9e4d64a3fa381cc91340eeebc452510d15aeab9937e64c63c719bfb0873b0aa7"} Dec 09 10:30:23 crc kubenswrapper[4786]: I1209 10:30:23.867965 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-75lpg" event={"ID":"60810dc2-c919-435d-8f3e-e9ac6495f558","Type":"ContainerStarted","Data":"8f7786f0988fa98c6d1cd3e9eb5389aefd96a2df120b21effd62f73c02fee147"} Dec 09 10:30:23 crc kubenswrapper[4786]: I1209 10:30:23.894450 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-75lpg" podStartSLOduration=3.45761484 podStartE2EDuration="5.894406484s" podCreationTimestamp="2025-12-09 10:30:18 +0000 UTC" firstStartedPulling="2025-12-09 10:30:20.835639817 +0000 UTC m=+6386.719261043" lastFinishedPulling="2025-12-09 10:30:23.272431461 +0000 UTC m=+6389.156052687" observedRunningTime="2025-12-09 10:30:23.892375185 +0000 UTC m=+6389.775996441" watchObservedRunningTime="2025-12-09 10:30:23.894406484 +0000 UTC m=+6389.778027720" Dec 09 10:30:29 crc kubenswrapper[4786]: I1209 10:30:29.282228 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-75lpg" Dec 09 10:30:29 crc kubenswrapper[4786]: I1209 10:30:29.283246 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-75lpg" Dec 09 10:30:29 crc kubenswrapper[4786]: I1209 10:30:29.334592 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-75lpg" Dec 09 10:30:29 crc kubenswrapper[4786]: I1209 10:30:29.970644 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-75lpg" Dec 09 10:30:30 crc kubenswrapper[4786]: I1209 10:30:30.027624 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-75lpg"] Dec 09 10:30:31 crc kubenswrapper[4786]: I1209 10:30:31.941276 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-75lpg" podUID="60810dc2-c919-435d-8f3e-e9ac6495f558" containerName="registry-server" containerID="cri-o://8f7786f0988fa98c6d1cd3e9eb5389aefd96a2df120b21effd62f73c02fee147" gracePeriod=2 Dec 09 10:30:32 crc kubenswrapper[4786]: I1209 10:30:32.455000 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-75lpg" Dec 09 10:30:32 crc kubenswrapper[4786]: I1209 10:30:32.555413 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9hsj\" (UniqueName: \"kubernetes.io/projected/60810dc2-c919-435d-8f3e-e9ac6495f558-kube-api-access-n9hsj\") pod \"60810dc2-c919-435d-8f3e-e9ac6495f558\" (UID: \"60810dc2-c919-435d-8f3e-e9ac6495f558\") " Dec 09 10:30:32 crc kubenswrapper[4786]: I1209 10:30:32.555701 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60810dc2-c919-435d-8f3e-e9ac6495f558-catalog-content\") pod \"60810dc2-c919-435d-8f3e-e9ac6495f558\" (UID: \"60810dc2-c919-435d-8f3e-e9ac6495f558\") " Dec 09 10:30:32 crc kubenswrapper[4786]: I1209 10:30:32.555835 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60810dc2-c919-435d-8f3e-e9ac6495f558-utilities\") pod \"60810dc2-c919-435d-8f3e-e9ac6495f558\" (UID: \"60810dc2-c919-435d-8f3e-e9ac6495f558\") " Dec 09 10:30:32 crc kubenswrapper[4786]: I1209 10:30:32.558830 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60810dc2-c919-435d-8f3e-e9ac6495f558-utilities" (OuterVolumeSpecName: "utilities") pod "60810dc2-c919-435d-8f3e-e9ac6495f558" (UID: 
"60810dc2-c919-435d-8f3e-e9ac6495f558"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 10:30:32 crc kubenswrapper[4786]: I1209 10:30:32.566078 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60810dc2-c919-435d-8f3e-e9ac6495f558-kube-api-access-n9hsj" (OuterVolumeSpecName: "kube-api-access-n9hsj") pod "60810dc2-c919-435d-8f3e-e9ac6495f558" (UID: "60810dc2-c919-435d-8f3e-e9ac6495f558"). InnerVolumeSpecName "kube-api-access-n9hsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 10:30:32 crc kubenswrapper[4786]: I1209 10:30:32.610459 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60810dc2-c919-435d-8f3e-e9ac6495f558-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "60810dc2-c919-435d-8f3e-e9ac6495f558" (UID: "60810dc2-c919-435d-8f3e-e9ac6495f558"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 10:30:32 crc kubenswrapper[4786]: I1209 10:30:32.659678 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/60810dc2-c919-435d-8f3e-e9ac6495f558-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 10:30:32 crc kubenswrapper[4786]: I1209 10:30:32.659718 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/60810dc2-c919-435d-8f3e-e9ac6495f558-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 10:30:32 crc kubenswrapper[4786]: I1209 10:30:32.659731 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9hsj\" (UniqueName: \"kubernetes.io/projected/60810dc2-c919-435d-8f3e-e9ac6495f558-kube-api-access-n9hsj\") on node \"crc\" DevicePath \"\"" Dec 09 10:30:32 crc kubenswrapper[4786]: I1209 10:30:32.967318 4786 generic.go:334] "Generic (PLEG): container finished" 
podID="60810dc2-c919-435d-8f3e-e9ac6495f558" containerID="8f7786f0988fa98c6d1cd3e9eb5389aefd96a2df120b21effd62f73c02fee147" exitCode=0 Dec 09 10:30:32 crc kubenswrapper[4786]: I1209 10:30:32.967439 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-75lpg" Dec 09 10:30:32 crc kubenswrapper[4786]: I1209 10:30:32.967442 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-75lpg" event={"ID":"60810dc2-c919-435d-8f3e-e9ac6495f558","Type":"ContainerDied","Data":"8f7786f0988fa98c6d1cd3e9eb5389aefd96a2df120b21effd62f73c02fee147"} Dec 09 10:30:32 crc kubenswrapper[4786]: I1209 10:30:32.967524 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-75lpg" event={"ID":"60810dc2-c919-435d-8f3e-e9ac6495f558","Type":"ContainerDied","Data":"ba337452f1ab2e834661e516e20c6a99629c09d5dd217d5671a89e58c2313681"} Dec 09 10:30:32 crc kubenswrapper[4786]: I1209 10:30:32.967548 4786 scope.go:117] "RemoveContainer" containerID="8f7786f0988fa98c6d1cd3e9eb5389aefd96a2df120b21effd62f73c02fee147" Dec 09 10:30:33 crc kubenswrapper[4786]: I1209 10:30:33.024717 4786 scope.go:117] "RemoveContainer" containerID="9e4d64a3fa381cc91340eeebc452510d15aeab9937e64c63c719bfb0873b0aa7" Dec 09 10:30:33 crc kubenswrapper[4786]: I1209 10:30:33.030465 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-75lpg"] Dec 09 10:30:33 crc kubenswrapper[4786]: I1209 10:30:33.055548 4786 scope.go:117] "RemoveContainer" containerID="57688160d0b7e6ddd828cab0965c5b89638599db69fa9b7826a63f328ea3fe1c" Dec 09 10:30:33 crc kubenswrapper[4786]: I1209 10:30:33.064042 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-75lpg"] Dec 09 10:30:33 crc kubenswrapper[4786]: I1209 10:30:33.119293 4786 scope.go:117] "RemoveContainer" 
containerID="8f7786f0988fa98c6d1cd3e9eb5389aefd96a2df120b21effd62f73c02fee147" Dec 09 10:30:33 crc kubenswrapper[4786]: E1209 10:30:33.119719 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f7786f0988fa98c6d1cd3e9eb5389aefd96a2df120b21effd62f73c02fee147\": container with ID starting with 8f7786f0988fa98c6d1cd3e9eb5389aefd96a2df120b21effd62f73c02fee147 not found: ID does not exist" containerID="8f7786f0988fa98c6d1cd3e9eb5389aefd96a2df120b21effd62f73c02fee147" Dec 09 10:30:33 crc kubenswrapper[4786]: I1209 10:30:33.119753 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f7786f0988fa98c6d1cd3e9eb5389aefd96a2df120b21effd62f73c02fee147"} err="failed to get container status \"8f7786f0988fa98c6d1cd3e9eb5389aefd96a2df120b21effd62f73c02fee147\": rpc error: code = NotFound desc = could not find container \"8f7786f0988fa98c6d1cd3e9eb5389aefd96a2df120b21effd62f73c02fee147\": container with ID starting with 8f7786f0988fa98c6d1cd3e9eb5389aefd96a2df120b21effd62f73c02fee147 not found: ID does not exist" Dec 09 10:30:33 crc kubenswrapper[4786]: I1209 10:30:33.119778 4786 scope.go:117] "RemoveContainer" containerID="9e4d64a3fa381cc91340eeebc452510d15aeab9937e64c63c719bfb0873b0aa7" Dec 09 10:30:33 crc kubenswrapper[4786]: E1209 10:30:33.120084 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e4d64a3fa381cc91340eeebc452510d15aeab9937e64c63c719bfb0873b0aa7\": container with ID starting with 9e4d64a3fa381cc91340eeebc452510d15aeab9937e64c63c719bfb0873b0aa7 not found: ID does not exist" containerID="9e4d64a3fa381cc91340eeebc452510d15aeab9937e64c63c719bfb0873b0aa7" Dec 09 10:30:33 crc kubenswrapper[4786]: I1209 10:30:33.120129 4786 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9e4d64a3fa381cc91340eeebc452510d15aeab9937e64c63c719bfb0873b0aa7"} err="failed to get container status \"9e4d64a3fa381cc91340eeebc452510d15aeab9937e64c63c719bfb0873b0aa7\": rpc error: code = NotFound desc = could not find container \"9e4d64a3fa381cc91340eeebc452510d15aeab9937e64c63c719bfb0873b0aa7\": container with ID starting with 9e4d64a3fa381cc91340eeebc452510d15aeab9937e64c63c719bfb0873b0aa7 not found: ID does not exist" Dec 09 10:30:33 crc kubenswrapper[4786]: I1209 10:30:33.120157 4786 scope.go:117] "RemoveContainer" containerID="57688160d0b7e6ddd828cab0965c5b89638599db69fa9b7826a63f328ea3fe1c" Dec 09 10:30:33 crc kubenswrapper[4786]: E1209 10:30:33.120537 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57688160d0b7e6ddd828cab0965c5b89638599db69fa9b7826a63f328ea3fe1c\": container with ID starting with 57688160d0b7e6ddd828cab0965c5b89638599db69fa9b7826a63f328ea3fe1c not found: ID does not exist" containerID="57688160d0b7e6ddd828cab0965c5b89638599db69fa9b7826a63f328ea3fe1c" Dec 09 10:30:33 crc kubenswrapper[4786]: I1209 10:30:33.120572 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57688160d0b7e6ddd828cab0965c5b89638599db69fa9b7826a63f328ea3fe1c"} err="failed to get container status \"57688160d0b7e6ddd828cab0965c5b89638599db69fa9b7826a63f328ea3fe1c\": rpc error: code = NotFound desc = could not find container \"57688160d0b7e6ddd828cab0965c5b89638599db69fa9b7826a63f328ea3fe1c\": container with ID starting with 57688160d0b7e6ddd828cab0965c5b89638599db69fa9b7826a63f328ea3fe1c not found: ID does not exist" Dec 09 10:30:33 crc kubenswrapper[4786]: I1209 10:30:33.199775 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60810dc2-c919-435d-8f3e-e9ac6495f558" path="/var/lib/kubelet/pods/60810dc2-c919-435d-8f3e-e9ac6495f558/volumes" Dec 09 10:30:35 crc kubenswrapper[4786]: I1209 
10:30:35.843825 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-wnjtx_667ac238-96a3-4f57-b308-d4d5693d40f2/control-plane-machine-set-operator/0.log" Dec 09 10:30:36 crc kubenswrapper[4786]: I1209 10:30:36.102672 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-hp2jj_8101c95a-1629-4b71-b12e-0fa374c9b09a/kube-rbac-proxy/0.log" Dec 09 10:30:36 crc kubenswrapper[4786]: I1209 10:30:36.142137 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-hp2jj_8101c95a-1629-4b71-b12e-0fa374c9b09a/machine-api-operator/0.log" Dec 09 10:30:48 crc kubenswrapper[4786]: I1209 10:30:48.729530 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-mhk9l_e590bb55-a521-4368-b048-ebc34e6dc46c/cert-manager-controller/0.log" Dec 09 10:30:48 crc kubenswrapper[4786]: I1209 10:30:48.837813 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-g22j7_4a583e5a-0f3b-496b-89d5-fe79f697b730/cert-manager-cainjector/0.log" Dec 09 10:30:48 crc kubenswrapper[4786]: I1209 10:30:48.911102 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-57g7f_5b2baefe-3aa8-48ec-b66a-173a0eb33c22/cert-manager-webhook/0.log" Dec 09 10:31:02 crc kubenswrapper[4786]: I1209 10:31:02.016775 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-z52fp_21a68ddc-31de-4083-ac88-bdf6ffd0afa7/nmstate-console-plugin/0.log" Dec 09 10:31:02 crc kubenswrapper[4786]: I1209 10:31:02.197296 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-cgcz2_d6bacb3d-0915-4228-979a-ea9b6d283ff7/nmstate-handler/0.log" Dec 09 10:31:02 crc kubenswrapper[4786]: I1209 
10:31:02.278265 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-679bk_3d5f00bd-8538-4255-8012-736caf10840a/nmstate-metrics/0.log" Dec 09 10:31:02 crc kubenswrapper[4786]: I1209 10:31:02.287114 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-679bk_3d5f00bd-8538-4255-8012-736caf10840a/kube-rbac-proxy/0.log" Dec 09 10:31:02 crc kubenswrapper[4786]: I1209 10:31:02.453723 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-fvtvt_79114f82-3f7e-40ea-b197-051c986d3070/nmstate-operator/0.log" Dec 09 10:31:02 crc kubenswrapper[4786]: I1209 10:31:02.542783 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-gszq4_cd6cbdfd-eed1-4b1e-9ec3-f0c83ea2811e/nmstate-webhook/0.log" Dec 09 10:31:07 crc kubenswrapper[4786]: I1209 10:31:07.151205 4786 scope.go:117] "RemoveContainer" containerID="1b46fa403e92dd0fdd3f15b7692a329b1a07f5aa969ba01d0eb292e1245941cb" Dec 09 10:31:07 crc kubenswrapper[4786]: I1209 10:31:07.174031 4786 scope.go:117] "RemoveContainer" containerID="575d73ba2b095453a0ff9ad23f94928a1387d41c9874eaf31774f322b0300730" Dec 09 10:31:07 crc kubenswrapper[4786]: I1209 10:31:07.203829 4786 scope.go:117] "RemoveContainer" containerID="2e6ab483dfebab27a0859692598c84f5238bec0395fc8fed31da11f435186c8c" Dec 09 10:31:17 crc kubenswrapper[4786]: I1209 10:31:17.706533 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-dtbr4_2ba38124-9926-44fd-b5c5-2adb47fd814a/kube-rbac-proxy/0.log" Dec 09 10:31:17 crc kubenswrapper[4786]: I1209 10:31:17.864864 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-dtbr4_2ba38124-9926-44fd-b5c5-2adb47fd814a/controller/0.log" Dec 09 10:31:17 crc kubenswrapper[4786]: I1209 10:31:17.933166 4786 log.go:25] "Finished parsing log 
file" path="/var/log/pods/metallb-system_frr-k8s-hq5kf_6e123ec9-00ea-466d-b5f6-79cad587a2cc/cp-frr-files/0.log" Dec 09 10:31:18 crc kubenswrapper[4786]: I1209 10:31:18.171767 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hq5kf_6e123ec9-00ea-466d-b5f6-79cad587a2cc/cp-metrics/0.log" Dec 09 10:31:18 crc kubenswrapper[4786]: I1209 10:31:18.189185 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hq5kf_6e123ec9-00ea-466d-b5f6-79cad587a2cc/cp-reloader/0.log" Dec 09 10:31:18 crc kubenswrapper[4786]: I1209 10:31:18.196948 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hq5kf_6e123ec9-00ea-466d-b5f6-79cad587a2cc/cp-frr-files/0.log" Dec 09 10:31:18 crc kubenswrapper[4786]: I1209 10:31:18.205801 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hq5kf_6e123ec9-00ea-466d-b5f6-79cad587a2cc/cp-reloader/0.log" Dec 09 10:31:18 crc kubenswrapper[4786]: I1209 10:31:18.382950 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hq5kf_6e123ec9-00ea-466d-b5f6-79cad587a2cc/cp-reloader/0.log" Dec 09 10:31:18 crc kubenswrapper[4786]: I1209 10:31:18.398337 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hq5kf_6e123ec9-00ea-466d-b5f6-79cad587a2cc/cp-metrics/0.log" Dec 09 10:31:18 crc kubenswrapper[4786]: I1209 10:31:18.405803 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hq5kf_6e123ec9-00ea-466d-b5f6-79cad587a2cc/cp-frr-files/0.log" Dec 09 10:31:18 crc kubenswrapper[4786]: I1209 10:31:18.429667 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hq5kf_6e123ec9-00ea-466d-b5f6-79cad587a2cc/cp-metrics/0.log" Dec 09 10:31:18 crc kubenswrapper[4786]: I1209 10:31:18.603384 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-hq5kf_6e123ec9-00ea-466d-b5f6-79cad587a2cc/cp-frr-files/0.log" Dec 09 10:31:18 crc kubenswrapper[4786]: I1209 10:31:18.609116 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hq5kf_6e123ec9-00ea-466d-b5f6-79cad587a2cc/cp-metrics/0.log" Dec 09 10:31:18 crc kubenswrapper[4786]: I1209 10:31:18.642740 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hq5kf_6e123ec9-00ea-466d-b5f6-79cad587a2cc/cp-reloader/0.log" Dec 09 10:31:18 crc kubenswrapper[4786]: I1209 10:31:18.645938 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hq5kf_6e123ec9-00ea-466d-b5f6-79cad587a2cc/controller/0.log" Dec 09 10:31:18 crc kubenswrapper[4786]: I1209 10:31:18.853013 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hq5kf_6e123ec9-00ea-466d-b5f6-79cad587a2cc/frr-metrics/0.log" Dec 09 10:31:18 crc kubenswrapper[4786]: I1209 10:31:18.878586 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hq5kf_6e123ec9-00ea-466d-b5f6-79cad587a2cc/kube-rbac-proxy/0.log" Dec 09 10:31:18 crc kubenswrapper[4786]: I1209 10:31:18.929895 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hq5kf_6e123ec9-00ea-466d-b5f6-79cad587a2cc/kube-rbac-proxy-frr/0.log" Dec 09 10:31:19 crc kubenswrapper[4786]: I1209 10:31:19.098561 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hq5kf_6e123ec9-00ea-466d-b5f6-79cad587a2cc/reloader/0.log" Dec 09 10:31:19 crc kubenswrapper[4786]: I1209 10:31:19.155830 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-kcwjn_e7177936-18a2-4469-bf7b-cd9db745d93f/frr-k8s-webhook-server/0.log" Dec 09 10:31:19 crc kubenswrapper[4786]: I1209 10:31:19.384788 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7bd64dc485-knxdp_2415f03a-5796-4063-aa38-791dc0a76fec/manager/0.log" Dec 09 10:31:19 crc kubenswrapper[4786]: I1209 10:31:19.634567 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5786b6d7bd-6ntsj_b90cfb1a-dd9c-4877-81a6-c5fd3eb60c21/webhook-server/0.log" Dec 09 10:31:19 crc kubenswrapper[4786]: I1209 10:31:19.728363 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-fpvm7_8a0660c9-5ef5-4ed7-a304-3690e32fb830/kube-rbac-proxy/0.log" Dec 09 10:31:20 crc kubenswrapper[4786]: I1209 10:31:20.605363 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-fpvm7_8a0660c9-5ef5-4ed7-a304-3690e32fb830/speaker/0.log" Dec 09 10:31:20 crc kubenswrapper[4786]: I1209 10:31:20.834502 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-hq5kf_6e123ec9-00ea-466d-b5f6-79cad587a2cc/frr/0.log" Dec 09 10:31:24 crc kubenswrapper[4786]: I1209 10:31:24.990075 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 10:31:24 crc kubenswrapper[4786]: I1209 10:31:24.990539 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 10:31:33 crc kubenswrapper[4786]: I1209 10:31:33.628184 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh7gq6_551ed98a-59ee-48a6-aec6-e02f7889d395/util/0.log" Dec 09 10:31:33 crc kubenswrapper[4786]: I1209 10:31:33.821690 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh7gq6_551ed98a-59ee-48a6-aec6-e02f7889d395/pull/0.log" Dec 09 10:31:33 crc kubenswrapper[4786]: I1209 10:31:33.829843 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh7gq6_551ed98a-59ee-48a6-aec6-e02f7889d395/util/0.log" Dec 09 10:31:33 crc kubenswrapper[4786]: I1209 10:31:33.839100 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh7gq6_551ed98a-59ee-48a6-aec6-e02f7889d395/pull/0.log" Dec 09 10:31:34 crc kubenswrapper[4786]: I1209 10:31:34.043207 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh7gq6_551ed98a-59ee-48a6-aec6-e02f7889d395/util/0.log" Dec 09 10:31:34 crc kubenswrapper[4786]: I1209 10:31:34.048043 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh7gq6_551ed98a-59ee-48a6-aec6-e02f7889d395/extract/0.log" Dec 09 10:31:34 crc kubenswrapper[4786]: I1209 10:31:34.064114 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fh7gq6_551ed98a-59ee-48a6-aec6-e02f7889d395/pull/0.log" Dec 09 10:31:34 crc kubenswrapper[4786]: I1209 10:31:34.215326 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv_6ecba051-bc5a-42a9-b4de-bf033c4f5491/util/0.log" Dec 09 
10:31:34 crc kubenswrapper[4786]: I1209 10:31:34.373510 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv_6ecba051-bc5a-42a9-b4de-bf033c4f5491/util/0.log" Dec 09 10:31:34 crc kubenswrapper[4786]: I1209 10:31:34.384722 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv_6ecba051-bc5a-42a9-b4de-bf033c4f5491/pull/0.log" Dec 09 10:31:34 crc kubenswrapper[4786]: I1209 10:31:34.400725 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv_6ecba051-bc5a-42a9-b4de-bf033c4f5491/pull/0.log" Dec 09 10:31:34 crc kubenswrapper[4786]: I1209 10:31:34.593155 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv_6ecba051-bc5a-42a9-b4de-bf033c4f5491/util/0.log" Dec 09 10:31:34 crc kubenswrapper[4786]: I1209 10:31:34.629504 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv_6ecba051-bc5a-42a9-b4de-bf033c4f5491/extract/0.log" Dec 09 10:31:34 crc kubenswrapper[4786]: I1209 10:31:34.632031 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210bm2tv_6ecba051-bc5a-42a9-b4de-bf033c4f5491/pull/0.log" Dec 09 10:31:34 crc kubenswrapper[4786]: I1209 10:31:34.774451 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83x54px_de7189e4-4dca-44a2-95d6-520828fc914f/util/0.log" Dec 09 10:31:34 crc kubenswrapper[4786]: I1209 10:31:34.952540 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83x54px_de7189e4-4dca-44a2-95d6-520828fc914f/pull/0.log" Dec 09 10:31:34 crc kubenswrapper[4786]: I1209 10:31:34.976217 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83x54px_de7189e4-4dca-44a2-95d6-520828fc914f/pull/0.log" Dec 09 10:31:34 crc kubenswrapper[4786]: I1209 10:31:34.988217 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83x54px_de7189e4-4dca-44a2-95d6-520828fc914f/util/0.log" Dec 09 10:31:35 crc kubenswrapper[4786]: I1209 10:31:35.127566 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83x54px_de7189e4-4dca-44a2-95d6-520828fc914f/util/0.log" Dec 09 10:31:35 crc kubenswrapper[4786]: I1209 10:31:35.131479 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83x54px_de7189e4-4dca-44a2-95d6-520828fc914f/pull/0.log" Dec 09 10:31:35 crc kubenswrapper[4786]: I1209 10:31:35.142258 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83x54px_de7189e4-4dca-44a2-95d6-520828fc914f/extract/0.log" Dec 09 10:31:35 crc kubenswrapper[4786]: I1209 10:31:35.297758 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-49cfg_eebcd979-a1cb-4994-ba4a-cadfaeb57401/extract-utilities/0.log" Dec 09 10:31:35 crc kubenswrapper[4786]: I1209 10:31:35.466752 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-49cfg_eebcd979-a1cb-4994-ba4a-cadfaeb57401/extract-content/0.log" Dec 09 10:31:35 crc kubenswrapper[4786]: I1209 
10:31:35.469210 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-49cfg_eebcd979-a1cb-4994-ba4a-cadfaeb57401/extract-utilities/0.log" Dec 09 10:31:35 crc kubenswrapper[4786]: I1209 10:31:35.487910 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-49cfg_eebcd979-a1cb-4994-ba4a-cadfaeb57401/extract-content/0.log" Dec 09 10:31:35 crc kubenswrapper[4786]: I1209 10:31:35.645339 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-49cfg_eebcd979-a1cb-4994-ba4a-cadfaeb57401/extract-utilities/0.log" Dec 09 10:31:35 crc kubenswrapper[4786]: I1209 10:31:35.661845 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-49cfg_eebcd979-a1cb-4994-ba4a-cadfaeb57401/extract-content/0.log" Dec 09 10:31:35 crc kubenswrapper[4786]: I1209 10:31:35.880925 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5gct7_b2fe83fb-45d7-471c-a4d1-f723dd6f2bb9/extract-utilities/0.log" Dec 09 10:31:36 crc kubenswrapper[4786]: I1209 10:31:36.055204 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5gct7_b2fe83fb-45d7-471c-a4d1-f723dd6f2bb9/extract-utilities/0.log" Dec 09 10:31:36 crc kubenswrapper[4786]: I1209 10:31:36.159355 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5gct7_b2fe83fb-45d7-471c-a4d1-f723dd6f2bb9/extract-content/0.log" Dec 09 10:31:36 crc kubenswrapper[4786]: I1209 10:31:36.175558 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5gct7_b2fe83fb-45d7-471c-a4d1-f723dd6f2bb9/extract-content/0.log" Dec 09 10:31:36 crc kubenswrapper[4786]: I1209 10:31:36.491651 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-5gct7_b2fe83fb-45d7-471c-a4d1-f723dd6f2bb9/extract-utilities/0.log" Dec 09 10:31:36 crc kubenswrapper[4786]: I1209 10:31:36.532199 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-49cfg_eebcd979-a1cb-4994-ba4a-cadfaeb57401/registry-server/0.log" Dec 09 10:31:36 crc kubenswrapper[4786]: I1209 10:31:36.567963 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5gct7_b2fe83fb-45d7-471c-a4d1-f723dd6f2bb9/extract-content/0.log" Dec 09 10:31:36 crc kubenswrapper[4786]: I1209 10:31:36.772815 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-q75q9_a62ae8ec-1904-4074-9c5d-76d6bde47df8/marketplace-operator/0.log" Dec 09 10:31:37 crc kubenswrapper[4786]: I1209 10:31:37.008524 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5gct7_b2fe83fb-45d7-471c-a4d1-f723dd6f2bb9/registry-server/0.log" Dec 09 10:31:37 crc kubenswrapper[4786]: I1209 10:31:37.021584 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5lt4q_d0a09b57-1b41-45ff-9d41-aa0c5d1b4beb/extract-utilities/0.log" Dec 09 10:31:37 crc kubenswrapper[4786]: I1209 10:31:37.204381 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5lt4q_d0a09b57-1b41-45ff-9d41-aa0c5d1b4beb/extract-content/0.log" Dec 09 10:31:37 crc kubenswrapper[4786]: I1209 10:31:37.212323 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5lt4q_d0a09b57-1b41-45ff-9d41-aa0c5d1b4beb/extract-utilities/0.log" Dec 09 10:31:37 crc kubenswrapper[4786]: I1209 10:31:37.228641 4786 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-5lt4q_d0a09b57-1b41-45ff-9d41-aa0c5d1b4beb/extract-content/0.log" Dec 09 10:31:37 crc kubenswrapper[4786]: I1209 10:31:37.410946 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5lt4q_d0a09b57-1b41-45ff-9d41-aa0c5d1b4beb/extract-utilities/0.log" Dec 09 10:31:37 crc kubenswrapper[4786]: I1209 10:31:37.528114 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-s4hdj_011c88f1-1d58-4ad3-8835-29020b9f4e8d/extract-utilities/0.log" Dec 09 10:31:37 crc kubenswrapper[4786]: I1209 10:31:37.528228 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5lt4q_d0a09b57-1b41-45ff-9d41-aa0c5d1b4beb/extract-content/0.log" Dec 09 10:31:37 crc kubenswrapper[4786]: I1209 10:31:37.612278 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-5lt4q_d0a09b57-1b41-45ff-9d41-aa0c5d1b4beb/registry-server/0.log" Dec 09 10:31:37 crc kubenswrapper[4786]: I1209 10:31:37.702394 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-s4hdj_011c88f1-1d58-4ad3-8835-29020b9f4e8d/extract-content/0.log" Dec 09 10:31:37 crc kubenswrapper[4786]: I1209 10:31:37.737855 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-s4hdj_011c88f1-1d58-4ad3-8835-29020b9f4e8d/extract-content/0.log" Dec 09 10:31:37 crc kubenswrapper[4786]: I1209 10:31:37.759689 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-s4hdj_011c88f1-1d58-4ad3-8835-29020b9f4e8d/extract-utilities/0.log" Dec 09 10:31:37 crc kubenswrapper[4786]: I1209 10:31:37.938821 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-s4hdj_011c88f1-1d58-4ad3-8835-29020b9f4e8d/extract-content/0.log" Dec 
09 10:31:37 crc kubenswrapper[4786]: I1209 10:31:37.966719 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-s4hdj_011c88f1-1d58-4ad3-8835-29020b9f4e8d/extract-utilities/0.log" Dec 09 10:31:38 crc kubenswrapper[4786]: I1209 10:31:38.707249 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-s4hdj_011c88f1-1d58-4ad3-8835-29020b9f4e8d/registry-server/0.log" Dec 09 10:31:51 crc kubenswrapper[4786]: I1209 10:31:51.305209 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-hwspd_5642938b-acf3-4128-83bb-ef2beeb1d85c/prometheus-operator/0.log" Dec 09 10:31:51 crc kubenswrapper[4786]: I1209 10:31:51.484395 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-64dd87d88d-glsks_68535e7a-c972-4054-8849-58dedcf84cd0/prometheus-operator-admission-webhook/0.log" Dec 09 10:31:51 crc kubenswrapper[4786]: I1209 10:31:51.546961 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-64dd87d88d-j4dpg_2c7d26aa-45ef-471d-bb48-671366e5928a/prometheus-operator-admission-webhook/0.log" Dec 09 10:31:51 crc kubenswrapper[4786]: I1209 10:31:51.772656 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-d8bb48f5d-kgqqn_1d1ef0df-f7b0-4499-b5c3-f0952d78f097/operator/0.log" Dec 09 10:31:51 crc kubenswrapper[4786]: I1209 10:31:51.791068 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5446b9c989-2dt2t_19919157-d502-47f5-9ea6-27f27a0b6742/perses-operator/0.log" Dec 09 10:31:54 crc kubenswrapper[4786]: I1209 10:31:54.988802 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 10:31:54 crc kubenswrapper[4786]: I1209 10:31:54.989137 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 10:32:24 crc kubenswrapper[4786]: I1209 10:32:24.988766 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 10:32:24 crc kubenswrapper[4786]: I1209 10:32:24.989247 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 10:32:24 crc kubenswrapper[4786]: I1209 10:32:24.989288 4786 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" Dec 09 10:32:24 crc kubenswrapper[4786]: I1209 10:32:24.989804 4786 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e017fad71b8ce5488d1f14545c2c965b6f7d76312d52fe6cfdbc755f86bd6aa2"} pod="openshift-machine-config-operator/machine-config-daemon-86k5n" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 10:32:24 crc kubenswrapper[4786]: I1209 10:32:24.989845 4786 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" containerID="cri-o://e017fad71b8ce5488d1f14545c2c965b6f7d76312d52fe6cfdbc755f86bd6aa2" gracePeriod=600 Dec 09 10:32:26 crc kubenswrapper[4786]: I1209 10:32:26.086306 4786 generic.go:334] "Generic (PLEG): container finished" podID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerID="e017fad71b8ce5488d1f14545c2c965b6f7d76312d52fe6cfdbc755f86bd6aa2" exitCode=0 Dec 09 10:32:26 crc kubenswrapper[4786]: I1209 10:32:26.086843 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" event={"ID":"60c502d4-7f9e-4d39-a197-fa70dc4a56d1","Type":"ContainerDied","Data":"e017fad71b8ce5488d1f14545c2c965b6f7d76312d52fe6cfdbc755f86bd6aa2"} Dec 09 10:32:26 crc kubenswrapper[4786]: I1209 10:32:26.086878 4786 scope.go:117] "RemoveContainer" containerID="20ecfe54f5ccfb14285d4fec242fde22ae60896664f5a23bdfa24a734964b062" Dec 09 10:32:27 crc kubenswrapper[4786]: I1209 10:32:27.100164 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" event={"ID":"60c502d4-7f9e-4d39-a197-fa70dc4a56d1","Type":"ContainerStarted","Data":"ea559671c9562615bcc1e5af3810e727fe3fdff06d05346f599853560d6f049b"} Dec 09 10:33:19 crc kubenswrapper[4786]: I1209 10:33:19.923644 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-c85cn"] Dec 09 10:33:19 crc kubenswrapper[4786]: E1209 10:33:19.924894 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60810dc2-c919-435d-8f3e-e9ac6495f558" containerName="extract-utilities" Dec 09 10:33:19 crc kubenswrapper[4786]: I1209 10:33:19.924910 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="60810dc2-c919-435d-8f3e-e9ac6495f558" 
containerName="extract-utilities" Dec 09 10:33:19 crc kubenswrapper[4786]: E1209 10:33:19.924926 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60810dc2-c919-435d-8f3e-e9ac6495f558" containerName="extract-content" Dec 09 10:33:19 crc kubenswrapper[4786]: I1209 10:33:19.924933 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="60810dc2-c919-435d-8f3e-e9ac6495f558" containerName="extract-content" Dec 09 10:33:19 crc kubenswrapper[4786]: E1209 10:33:19.924958 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60810dc2-c919-435d-8f3e-e9ac6495f558" containerName="registry-server" Dec 09 10:33:19 crc kubenswrapper[4786]: I1209 10:33:19.924966 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="60810dc2-c919-435d-8f3e-e9ac6495f558" containerName="registry-server" Dec 09 10:33:19 crc kubenswrapper[4786]: I1209 10:33:19.925235 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="60810dc2-c919-435d-8f3e-e9ac6495f558" containerName="registry-server" Dec 09 10:33:19 crc kubenswrapper[4786]: I1209 10:33:19.927197 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-c85cn" Dec 09 10:33:19 crc kubenswrapper[4786]: I1209 10:33:19.941282 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c85cn"] Dec 09 10:33:20 crc kubenswrapper[4786]: I1209 10:33:20.098041 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5be6d576-a77d-4848-a0a1-726ba9261fef-catalog-content\") pod \"redhat-operators-c85cn\" (UID: \"5be6d576-a77d-4848-a0a1-726ba9261fef\") " pod="openshift-marketplace/redhat-operators-c85cn" Dec 09 10:33:20 crc kubenswrapper[4786]: I1209 10:33:20.098145 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5be6d576-a77d-4848-a0a1-726ba9261fef-utilities\") pod \"redhat-operators-c85cn\" (UID: \"5be6d576-a77d-4848-a0a1-726ba9261fef\") " pod="openshift-marketplace/redhat-operators-c85cn" Dec 09 10:33:20 crc kubenswrapper[4786]: I1209 10:33:20.098373 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxcmk\" (UniqueName: \"kubernetes.io/projected/5be6d576-a77d-4848-a0a1-726ba9261fef-kube-api-access-cxcmk\") pod \"redhat-operators-c85cn\" (UID: \"5be6d576-a77d-4848-a0a1-726ba9261fef\") " pod="openshift-marketplace/redhat-operators-c85cn" Dec 09 10:33:20 crc kubenswrapper[4786]: I1209 10:33:20.200100 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5be6d576-a77d-4848-a0a1-726ba9261fef-catalog-content\") pod \"redhat-operators-c85cn\" (UID: \"5be6d576-a77d-4848-a0a1-726ba9261fef\") " pod="openshift-marketplace/redhat-operators-c85cn" Dec 09 10:33:20 crc kubenswrapper[4786]: I1209 10:33:20.200185 4786 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5be6d576-a77d-4848-a0a1-726ba9261fef-utilities\") pod \"redhat-operators-c85cn\" (UID: \"5be6d576-a77d-4848-a0a1-726ba9261fef\") " pod="openshift-marketplace/redhat-operators-c85cn" Dec 09 10:33:20 crc kubenswrapper[4786]: I1209 10:33:20.200575 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5be6d576-a77d-4848-a0a1-726ba9261fef-catalog-content\") pod \"redhat-operators-c85cn\" (UID: \"5be6d576-a77d-4848-a0a1-726ba9261fef\") " pod="openshift-marketplace/redhat-operators-c85cn" Dec 09 10:33:20 crc kubenswrapper[4786]: I1209 10:33:20.200618 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5be6d576-a77d-4848-a0a1-726ba9261fef-utilities\") pod \"redhat-operators-c85cn\" (UID: \"5be6d576-a77d-4848-a0a1-726ba9261fef\") " pod="openshift-marketplace/redhat-operators-c85cn" Dec 09 10:33:20 crc kubenswrapper[4786]: I1209 10:33:20.200811 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxcmk\" (UniqueName: \"kubernetes.io/projected/5be6d576-a77d-4848-a0a1-726ba9261fef-kube-api-access-cxcmk\") pod \"redhat-operators-c85cn\" (UID: \"5be6d576-a77d-4848-a0a1-726ba9261fef\") " pod="openshift-marketplace/redhat-operators-c85cn" Dec 09 10:33:20 crc kubenswrapper[4786]: I1209 10:33:20.223395 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxcmk\" (UniqueName: \"kubernetes.io/projected/5be6d576-a77d-4848-a0a1-726ba9261fef-kube-api-access-cxcmk\") pod \"redhat-operators-c85cn\" (UID: \"5be6d576-a77d-4848-a0a1-726ba9261fef\") " pod="openshift-marketplace/redhat-operators-c85cn" Dec 09 10:33:20 crc kubenswrapper[4786]: I1209 10:33:20.280819 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-c85cn" Dec 09 10:33:20 crc kubenswrapper[4786]: I1209 10:33:20.798171 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c85cn"] Dec 09 10:33:21 crc kubenswrapper[4786]: I1209 10:33:21.705063 4786 generic.go:334] "Generic (PLEG): container finished" podID="5be6d576-a77d-4848-a0a1-726ba9261fef" containerID="f83b39aae57b37f3b53c49cef1fea52e79c3f2c32e62cc699883412e7d02dc7f" exitCode=0 Dec 09 10:33:21 crc kubenswrapper[4786]: I1209 10:33:21.705168 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c85cn" event={"ID":"5be6d576-a77d-4848-a0a1-726ba9261fef","Type":"ContainerDied","Data":"f83b39aae57b37f3b53c49cef1fea52e79c3f2c32e62cc699883412e7d02dc7f"} Dec 09 10:33:21 crc kubenswrapper[4786]: I1209 10:33:21.705545 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c85cn" event={"ID":"5be6d576-a77d-4848-a0a1-726ba9261fef","Type":"ContainerStarted","Data":"43b3493d4247e9ed13e94857b860be9c76de3b89ec273705f52f60ab60b4d694"} Dec 09 10:33:22 crc kubenswrapper[4786]: I1209 10:33:22.726282 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c85cn" event={"ID":"5be6d576-a77d-4848-a0a1-726ba9261fef","Type":"ContainerStarted","Data":"19e2500c2a7889fe1be6b0219919d023f2ea013c62a48f3b6f736fd2e0b39135"} Dec 09 10:33:24 crc kubenswrapper[4786]: I1209 10:33:24.762160 4786 generic.go:334] "Generic (PLEG): container finished" podID="5be6d576-a77d-4848-a0a1-726ba9261fef" containerID="19e2500c2a7889fe1be6b0219919d023f2ea013c62a48f3b6f736fd2e0b39135" exitCode=0 Dec 09 10:33:24 crc kubenswrapper[4786]: I1209 10:33:24.762847 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c85cn" 
event={"ID":"5be6d576-a77d-4848-a0a1-726ba9261fef","Type":"ContainerDied","Data":"19e2500c2a7889fe1be6b0219919d023f2ea013c62a48f3b6f736fd2e0b39135"} Dec 09 10:33:25 crc kubenswrapper[4786]: I1209 10:33:25.777490 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c85cn" event={"ID":"5be6d576-a77d-4848-a0a1-726ba9261fef","Type":"ContainerStarted","Data":"901d27b39902865ae765e60b8452fe388904c64ecd12d869b2eb34adfdd3e485"} Dec 09 10:33:25 crc kubenswrapper[4786]: I1209 10:33:25.800194 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-c85cn" podStartSLOduration=3.122384475 podStartE2EDuration="6.800170703s" podCreationTimestamp="2025-12-09 10:33:19 +0000 UTC" firstStartedPulling="2025-12-09 10:33:21.707393477 +0000 UTC m=+6567.591014703" lastFinishedPulling="2025-12-09 10:33:25.385179705 +0000 UTC m=+6571.268800931" observedRunningTime="2025-12-09 10:33:25.800070581 +0000 UTC m=+6571.683691817" watchObservedRunningTime="2025-12-09 10:33:25.800170703 +0000 UTC m=+6571.683791919" Dec 09 10:33:30 crc kubenswrapper[4786]: I1209 10:33:30.282231 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-c85cn" Dec 09 10:33:30 crc kubenswrapper[4786]: I1209 10:33:30.282893 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-c85cn" Dec 09 10:33:31 crc kubenswrapper[4786]: I1209 10:33:31.340694 4786 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-c85cn" podUID="5be6d576-a77d-4848-a0a1-726ba9261fef" containerName="registry-server" probeResult="failure" output=< Dec 09 10:33:31 crc kubenswrapper[4786]: timeout: failed to connect service ":50051" within 1s Dec 09 10:33:31 crc kubenswrapper[4786]: > Dec 09 10:33:40 crc kubenswrapper[4786]: I1209 10:33:40.329201 4786 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-operators-c85cn" Dec 09 10:33:40 crc kubenswrapper[4786]: I1209 10:33:40.406259 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-c85cn" Dec 09 10:33:40 crc kubenswrapper[4786]: I1209 10:33:40.571022 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c85cn"] Dec 09 10:33:41 crc kubenswrapper[4786]: I1209 10:33:41.936904 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-c85cn" podUID="5be6d576-a77d-4848-a0a1-726ba9261fef" containerName="registry-server" containerID="cri-o://901d27b39902865ae765e60b8452fe388904c64ecd12d869b2eb34adfdd3e485" gracePeriod=2 Dec 09 10:33:42 crc kubenswrapper[4786]: I1209 10:33:42.457842 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c85cn" Dec 09 10:33:42 crc kubenswrapper[4786]: I1209 10:33:42.477731 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxcmk\" (UniqueName: \"kubernetes.io/projected/5be6d576-a77d-4848-a0a1-726ba9261fef-kube-api-access-cxcmk\") pod \"5be6d576-a77d-4848-a0a1-726ba9261fef\" (UID: \"5be6d576-a77d-4848-a0a1-726ba9261fef\") " Dec 09 10:33:42 crc kubenswrapper[4786]: I1209 10:33:42.477780 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5be6d576-a77d-4848-a0a1-726ba9261fef-utilities\") pod \"5be6d576-a77d-4848-a0a1-726ba9261fef\" (UID: \"5be6d576-a77d-4848-a0a1-726ba9261fef\") " Dec 09 10:33:42 crc kubenswrapper[4786]: I1209 10:33:42.477831 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5be6d576-a77d-4848-a0a1-726ba9261fef-catalog-content\") pod 
\"5be6d576-a77d-4848-a0a1-726ba9261fef\" (UID: \"5be6d576-a77d-4848-a0a1-726ba9261fef\") " Dec 09 10:33:42 crc kubenswrapper[4786]: I1209 10:33:42.478888 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5be6d576-a77d-4848-a0a1-726ba9261fef-utilities" (OuterVolumeSpecName: "utilities") pod "5be6d576-a77d-4848-a0a1-726ba9261fef" (UID: "5be6d576-a77d-4848-a0a1-726ba9261fef"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 10:33:42 crc kubenswrapper[4786]: I1209 10:33:42.485130 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5be6d576-a77d-4848-a0a1-726ba9261fef-kube-api-access-cxcmk" (OuterVolumeSpecName: "kube-api-access-cxcmk") pod "5be6d576-a77d-4848-a0a1-726ba9261fef" (UID: "5be6d576-a77d-4848-a0a1-726ba9261fef"). InnerVolumeSpecName "kube-api-access-cxcmk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 10:33:42 crc kubenswrapper[4786]: I1209 10:33:42.580014 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxcmk\" (UniqueName: \"kubernetes.io/projected/5be6d576-a77d-4848-a0a1-726ba9261fef-kube-api-access-cxcmk\") on node \"crc\" DevicePath \"\"" Dec 09 10:33:42 crc kubenswrapper[4786]: I1209 10:33:42.580056 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5be6d576-a77d-4848-a0a1-726ba9261fef-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 10:33:42 crc kubenswrapper[4786]: I1209 10:33:42.609814 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5be6d576-a77d-4848-a0a1-726ba9261fef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5be6d576-a77d-4848-a0a1-726ba9261fef" (UID: "5be6d576-a77d-4848-a0a1-726ba9261fef"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 10:33:42 crc kubenswrapper[4786]: I1209 10:33:42.682593 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5be6d576-a77d-4848-a0a1-726ba9261fef-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 10:33:42 crc kubenswrapper[4786]: I1209 10:33:42.950648 4786 generic.go:334] "Generic (PLEG): container finished" podID="5be6d576-a77d-4848-a0a1-726ba9261fef" containerID="901d27b39902865ae765e60b8452fe388904c64ecd12d869b2eb34adfdd3e485" exitCode=0 Dec 09 10:33:42 crc kubenswrapper[4786]: I1209 10:33:42.950702 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c85cn" Dec 09 10:33:42 crc kubenswrapper[4786]: I1209 10:33:42.950711 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c85cn" event={"ID":"5be6d576-a77d-4848-a0a1-726ba9261fef","Type":"ContainerDied","Data":"901d27b39902865ae765e60b8452fe388904c64ecd12d869b2eb34adfdd3e485"} Dec 09 10:33:42 crc kubenswrapper[4786]: I1209 10:33:42.950758 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c85cn" event={"ID":"5be6d576-a77d-4848-a0a1-726ba9261fef","Type":"ContainerDied","Data":"43b3493d4247e9ed13e94857b860be9c76de3b89ec273705f52f60ab60b4d694"} Dec 09 10:33:42 crc kubenswrapper[4786]: I1209 10:33:42.950787 4786 scope.go:117] "RemoveContainer" containerID="901d27b39902865ae765e60b8452fe388904c64ecd12d869b2eb34adfdd3e485" Dec 09 10:33:42 crc kubenswrapper[4786]: I1209 10:33:42.981606 4786 scope.go:117] "RemoveContainer" containerID="19e2500c2a7889fe1be6b0219919d023f2ea013c62a48f3b6f736fd2e0b39135" Dec 09 10:33:43 crc kubenswrapper[4786]: I1209 10:33:43.002479 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c85cn"] Dec 09 10:33:43 crc kubenswrapper[4786]: I1209 
10:33:43.014008 4786 scope.go:117] "RemoveContainer" containerID="f83b39aae57b37f3b53c49cef1fea52e79c3f2c32e62cc699883412e7d02dc7f" Dec 09 10:33:43 crc kubenswrapper[4786]: I1209 10:33:43.024303 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-c85cn"] Dec 09 10:33:43 crc kubenswrapper[4786]: I1209 10:33:43.090115 4786 scope.go:117] "RemoveContainer" containerID="901d27b39902865ae765e60b8452fe388904c64ecd12d869b2eb34adfdd3e485" Dec 09 10:33:43 crc kubenswrapper[4786]: E1209 10:33:43.090740 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"901d27b39902865ae765e60b8452fe388904c64ecd12d869b2eb34adfdd3e485\": container with ID starting with 901d27b39902865ae765e60b8452fe388904c64ecd12d869b2eb34adfdd3e485 not found: ID does not exist" containerID="901d27b39902865ae765e60b8452fe388904c64ecd12d869b2eb34adfdd3e485" Dec 09 10:33:43 crc kubenswrapper[4786]: I1209 10:33:43.090797 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"901d27b39902865ae765e60b8452fe388904c64ecd12d869b2eb34adfdd3e485"} err="failed to get container status \"901d27b39902865ae765e60b8452fe388904c64ecd12d869b2eb34adfdd3e485\": rpc error: code = NotFound desc = could not find container \"901d27b39902865ae765e60b8452fe388904c64ecd12d869b2eb34adfdd3e485\": container with ID starting with 901d27b39902865ae765e60b8452fe388904c64ecd12d869b2eb34adfdd3e485 not found: ID does not exist" Dec 09 10:33:43 crc kubenswrapper[4786]: I1209 10:33:43.090829 4786 scope.go:117] "RemoveContainer" containerID="19e2500c2a7889fe1be6b0219919d023f2ea013c62a48f3b6f736fd2e0b39135" Dec 09 10:33:43 crc kubenswrapper[4786]: E1209 10:33:43.091283 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19e2500c2a7889fe1be6b0219919d023f2ea013c62a48f3b6f736fd2e0b39135\": container with ID 
starting with 19e2500c2a7889fe1be6b0219919d023f2ea013c62a48f3b6f736fd2e0b39135 not found: ID does not exist" containerID="19e2500c2a7889fe1be6b0219919d023f2ea013c62a48f3b6f736fd2e0b39135" Dec 09 10:33:43 crc kubenswrapper[4786]: I1209 10:33:43.091326 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19e2500c2a7889fe1be6b0219919d023f2ea013c62a48f3b6f736fd2e0b39135"} err="failed to get container status \"19e2500c2a7889fe1be6b0219919d023f2ea013c62a48f3b6f736fd2e0b39135\": rpc error: code = NotFound desc = could not find container \"19e2500c2a7889fe1be6b0219919d023f2ea013c62a48f3b6f736fd2e0b39135\": container with ID starting with 19e2500c2a7889fe1be6b0219919d023f2ea013c62a48f3b6f736fd2e0b39135 not found: ID does not exist" Dec 09 10:33:43 crc kubenswrapper[4786]: I1209 10:33:43.091355 4786 scope.go:117] "RemoveContainer" containerID="f83b39aae57b37f3b53c49cef1fea52e79c3f2c32e62cc699883412e7d02dc7f" Dec 09 10:33:43 crc kubenswrapper[4786]: E1209 10:33:43.091657 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f83b39aae57b37f3b53c49cef1fea52e79c3f2c32e62cc699883412e7d02dc7f\": container with ID starting with f83b39aae57b37f3b53c49cef1fea52e79c3f2c32e62cc699883412e7d02dc7f not found: ID does not exist" containerID="f83b39aae57b37f3b53c49cef1fea52e79c3f2c32e62cc699883412e7d02dc7f" Dec 09 10:33:43 crc kubenswrapper[4786]: I1209 10:33:43.091708 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f83b39aae57b37f3b53c49cef1fea52e79c3f2c32e62cc699883412e7d02dc7f"} err="failed to get container status \"f83b39aae57b37f3b53c49cef1fea52e79c3f2c32e62cc699883412e7d02dc7f\": rpc error: code = NotFound desc = could not find container \"f83b39aae57b37f3b53c49cef1fea52e79c3f2c32e62cc699883412e7d02dc7f\": container with ID starting with f83b39aae57b37f3b53c49cef1fea52e79c3f2c32e62cc699883412e7d02dc7f not found: 
ID does not exist" Dec 09 10:33:43 crc kubenswrapper[4786]: I1209 10:33:43.204157 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5be6d576-a77d-4848-a0a1-726ba9261fef" path="/var/lib/kubelet/pods/5be6d576-a77d-4848-a0a1-726ba9261fef/volumes" Dec 09 10:34:03 crc kubenswrapper[4786]: I1209 10:34:03.197303 4786 generic.go:334] "Generic (PLEG): container finished" podID="3013a14f-cbf4-44a7-9b6c-59999fcb053d" containerID="dd1b5ec9d0cd9a350e71121137e467e98457892d934113880922337bd5f0d34f" exitCode=0 Dec 09 10:34:03 crc kubenswrapper[4786]: I1209 10:34:03.204950 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-bqjfr/must-gather-llsjs" event={"ID":"3013a14f-cbf4-44a7-9b6c-59999fcb053d","Type":"ContainerDied","Data":"dd1b5ec9d0cd9a350e71121137e467e98457892d934113880922337bd5f0d34f"} Dec 09 10:34:03 crc kubenswrapper[4786]: I1209 10:34:03.206299 4786 scope.go:117] "RemoveContainer" containerID="dd1b5ec9d0cd9a350e71121137e467e98457892d934113880922337bd5f0d34f" Dec 09 10:34:03 crc kubenswrapper[4786]: I1209 10:34:03.867816 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bqjfr_must-gather-llsjs_3013a14f-cbf4-44a7-9b6c-59999fcb053d/gather/0.log" Dec 09 10:34:15 crc kubenswrapper[4786]: I1209 10:34:15.464574 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-bqjfr/must-gather-llsjs"] Dec 09 10:34:15 crc kubenswrapper[4786]: I1209 10:34:15.465387 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-bqjfr/must-gather-llsjs" podUID="3013a14f-cbf4-44a7-9b6c-59999fcb053d" containerName="copy" containerID="cri-o://813202d7532bb903417efc7442d33f2f85e9406bfc0fc1a1219633863d516ca4" gracePeriod=2 Dec 09 10:34:15 crc kubenswrapper[4786]: I1209 10:34:15.481831 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-bqjfr/must-gather-llsjs"] Dec 09 10:34:15 crc kubenswrapper[4786]: I1209 
10:34:15.940198 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bqjfr_must-gather-llsjs_3013a14f-cbf4-44a7-9b6c-59999fcb053d/copy/0.log" Dec 09 10:34:15 crc kubenswrapper[4786]: I1209 10:34:15.941060 4786 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-bqjfr/must-gather-llsjs" Dec 09 10:34:16 crc kubenswrapper[4786]: I1209 10:34:16.070692 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9v7c\" (UniqueName: \"kubernetes.io/projected/3013a14f-cbf4-44a7-9b6c-59999fcb053d-kube-api-access-w9v7c\") pod \"3013a14f-cbf4-44a7-9b6c-59999fcb053d\" (UID: \"3013a14f-cbf4-44a7-9b6c-59999fcb053d\") " Dec 09 10:34:16 crc kubenswrapper[4786]: I1209 10:34:16.070937 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3013a14f-cbf4-44a7-9b6c-59999fcb053d-must-gather-output\") pod \"3013a14f-cbf4-44a7-9b6c-59999fcb053d\" (UID: \"3013a14f-cbf4-44a7-9b6c-59999fcb053d\") " Dec 09 10:34:16 crc kubenswrapper[4786]: I1209 10:34:16.076933 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3013a14f-cbf4-44a7-9b6c-59999fcb053d-kube-api-access-w9v7c" (OuterVolumeSpecName: "kube-api-access-w9v7c") pod "3013a14f-cbf4-44a7-9b6c-59999fcb053d" (UID: "3013a14f-cbf4-44a7-9b6c-59999fcb053d"). InnerVolumeSpecName "kube-api-access-w9v7c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 10:34:16 crc kubenswrapper[4786]: I1209 10:34:16.174215 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9v7c\" (UniqueName: \"kubernetes.io/projected/3013a14f-cbf4-44a7-9b6c-59999fcb053d-kube-api-access-w9v7c\") on node \"crc\" DevicePath \"\"" Dec 09 10:34:16 crc kubenswrapper[4786]: I1209 10:34:16.255704 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3013a14f-cbf4-44a7-9b6c-59999fcb053d-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "3013a14f-cbf4-44a7-9b6c-59999fcb053d" (UID: "3013a14f-cbf4-44a7-9b6c-59999fcb053d"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 10:34:16 crc kubenswrapper[4786]: I1209 10:34:16.277351 4786 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/3013a14f-cbf4-44a7-9b6c-59999fcb053d-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 09 10:34:16 crc kubenswrapper[4786]: I1209 10:34:16.354188 4786 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-bqjfr_must-gather-llsjs_3013a14f-cbf4-44a7-9b6c-59999fcb053d/copy/0.log" Dec 09 10:34:16 crc kubenswrapper[4786]: I1209 10:34:16.354791 4786 generic.go:334] "Generic (PLEG): container finished" podID="3013a14f-cbf4-44a7-9b6c-59999fcb053d" containerID="813202d7532bb903417efc7442d33f2f85e9406bfc0fc1a1219633863d516ca4" exitCode=143 Dec 09 10:34:16 crc kubenswrapper[4786]: I1209 10:34:16.354851 4786 scope.go:117] "RemoveContainer" containerID="813202d7532bb903417efc7442d33f2f85e9406bfc0fc1a1219633863d516ca4" Dec 09 10:34:16 crc kubenswrapper[4786]: I1209 10:34:16.354910 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-bqjfr/must-gather-llsjs" Dec 09 10:34:16 crc kubenswrapper[4786]: I1209 10:34:16.373752 4786 scope.go:117] "RemoveContainer" containerID="dd1b5ec9d0cd9a350e71121137e467e98457892d934113880922337bd5f0d34f" Dec 09 10:34:16 crc kubenswrapper[4786]: I1209 10:34:16.418813 4786 scope.go:117] "RemoveContainer" containerID="813202d7532bb903417efc7442d33f2f85e9406bfc0fc1a1219633863d516ca4" Dec 09 10:34:16 crc kubenswrapper[4786]: E1209 10:34:16.419150 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"813202d7532bb903417efc7442d33f2f85e9406bfc0fc1a1219633863d516ca4\": container with ID starting with 813202d7532bb903417efc7442d33f2f85e9406bfc0fc1a1219633863d516ca4 not found: ID does not exist" containerID="813202d7532bb903417efc7442d33f2f85e9406bfc0fc1a1219633863d516ca4" Dec 09 10:34:16 crc kubenswrapper[4786]: I1209 10:34:16.419180 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"813202d7532bb903417efc7442d33f2f85e9406bfc0fc1a1219633863d516ca4"} err="failed to get container status \"813202d7532bb903417efc7442d33f2f85e9406bfc0fc1a1219633863d516ca4\": rpc error: code = NotFound desc = could not find container \"813202d7532bb903417efc7442d33f2f85e9406bfc0fc1a1219633863d516ca4\": container with ID starting with 813202d7532bb903417efc7442d33f2f85e9406bfc0fc1a1219633863d516ca4 not found: ID does not exist" Dec 09 10:34:16 crc kubenswrapper[4786]: I1209 10:34:16.419204 4786 scope.go:117] "RemoveContainer" containerID="dd1b5ec9d0cd9a350e71121137e467e98457892d934113880922337bd5f0d34f" Dec 09 10:34:16 crc kubenswrapper[4786]: E1209 10:34:16.419440 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd1b5ec9d0cd9a350e71121137e467e98457892d934113880922337bd5f0d34f\": container with ID starting with 
dd1b5ec9d0cd9a350e71121137e467e98457892d934113880922337bd5f0d34f not found: ID does not exist" containerID="dd1b5ec9d0cd9a350e71121137e467e98457892d934113880922337bd5f0d34f" Dec 09 10:34:16 crc kubenswrapper[4786]: I1209 10:34:16.419468 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd1b5ec9d0cd9a350e71121137e467e98457892d934113880922337bd5f0d34f"} err="failed to get container status \"dd1b5ec9d0cd9a350e71121137e467e98457892d934113880922337bd5f0d34f\": rpc error: code = NotFound desc = could not find container \"dd1b5ec9d0cd9a350e71121137e467e98457892d934113880922337bd5f0d34f\": container with ID starting with dd1b5ec9d0cd9a350e71121137e467e98457892d934113880922337bd5f0d34f not found: ID does not exist" Dec 09 10:34:17 crc kubenswrapper[4786]: I1209 10:34:17.199937 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3013a14f-cbf4-44a7-9b6c-59999fcb053d" path="/var/lib/kubelet/pods/3013a14f-cbf4-44a7-9b6c-59999fcb053d/volumes" Dec 09 10:34:54 crc kubenswrapper[4786]: I1209 10:34:54.515259 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sqbdf"] Dec 09 10:34:54 crc kubenswrapper[4786]: E1209 10:34:54.516396 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3013a14f-cbf4-44a7-9b6c-59999fcb053d" containerName="copy" Dec 09 10:34:54 crc kubenswrapper[4786]: I1209 10:34:54.516412 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="3013a14f-cbf4-44a7-9b6c-59999fcb053d" containerName="copy" Dec 09 10:34:54 crc kubenswrapper[4786]: E1209 10:34:54.516445 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3013a14f-cbf4-44a7-9b6c-59999fcb053d" containerName="gather" Dec 09 10:34:54 crc kubenswrapper[4786]: I1209 10:34:54.516452 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="3013a14f-cbf4-44a7-9b6c-59999fcb053d" containerName="gather" Dec 09 10:34:54 crc kubenswrapper[4786]: E1209 
10:34:54.516474 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5be6d576-a77d-4848-a0a1-726ba9261fef" containerName="extract-utilities" Dec 09 10:34:54 crc kubenswrapper[4786]: I1209 10:34:54.516482 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="5be6d576-a77d-4848-a0a1-726ba9261fef" containerName="extract-utilities" Dec 09 10:34:54 crc kubenswrapper[4786]: E1209 10:34:54.516507 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5be6d576-a77d-4848-a0a1-726ba9261fef" containerName="extract-content" Dec 09 10:34:54 crc kubenswrapper[4786]: I1209 10:34:54.516517 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="5be6d576-a77d-4848-a0a1-726ba9261fef" containerName="extract-content" Dec 09 10:34:54 crc kubenswrapper[4786]: E1209 10:34:54.516538 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5be6d576-a77d-4848-a0a1-726ba9261fef" containerName="registry-server" Dec 09 10:34:54 crc kubenswrapper[4786]: I1209 10:34:54.516545 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="5be6d576-a77d-4848-a0a1-726ba9261fef" containerName="registry-server" Dec 09 10:34:54 crc kubenswrapper[4786]: I1209 10:34:54.516780 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="3013a14f-cbf4-44a7-9b6c-59999fcb053d" containerName="gather" Dec 09 10:34:54 crc kubenswrapper[4786]: I1209 10:34:54.516797 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="5be6d576-a77d-4848-a0a1-726ba9261fef" containerName="registry-server" Dec 09 10:34:54 crc kubenswrapper[4786]: I1209 10:34:54.516813 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="3013a14f-cbf4-44a7-9b6c-59999fcb053d" containerName="copy" Dec 09 10:34:54 crc kubenswrapper[4786]: I1209 10:34:54.518757 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sqbdf" Dec 09 10:34:54 crc kubenswrapper[4786]: I1209 10:34:54.533737 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sqbdf"] Dec 09 10:34:54 crc kubenswrapper[4786]: I1209 10:34:54.579368 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz5lq\" (UniqueName: \"kubernetes.io/projected/628e713d-dcb3-4ef8-80f3-28c131edbd1c-kube-api-access-gz5lq\") pod \"community-operators-sqbdf\" (UID: \"628e713d-dcb3-4ef8-80f3-28c131edbd1c\") " pod="openshift-marketplace/community-operators-sqbdf" Dec 09 10:34:54 crc kubenswrapper[4786]: I1209 10:34:54.579437 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/628e713d-dcb3-4ef8-80f3-28c131edbd1c-utilities\") pod \"community-operators-sqbdf\" (UID: \"628e713d-dcb3-4ef8-80f3-28c131edbd1c\") " pod="openshift-marketplace/community-operators-sqbdf" Dec 09 10:34:54 crc kubenswrapper[4786]: I1209 10:34:54.579965 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/628e713d-dcb3-4ef8-80f3-28c131edbd1c-catalog-content\") pod \"community-operators-sqbdf\" (UID: \"628e713d-dcb3-4ef8-80f3-28c131edbd1c\") " pod="openshift-marketplace/community-operators-sqbdf" Dec 09 10:34:54 crc kubenswrapper[4786]: I1209 10:34:54.690066 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gz5lq\" (UniqueName: \"kubernetes.io/projected/628e713d-dcb3-4ef8-80f3-28c131edbd1c-kube-api-access-gz5lq\") pod \"community-operators-sqbdf\" (UID: \"628e713d-dcb3-4ef8-80f3-28c131edbd1c\") " pod="openshift-marketplace/community-operators-sqbdf" Dec 09 10:34:54 crc kubenswrapper[4786]: I1209 10:34:54.690149 4786 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/628e713d-dcb3-4ef8-80f3-28c131edbd1c-utilities\") pod \"community-operators-sqbdf\" (UID: \"628e713d-dcb3-4ef8-80f3-28c131edbd1c\") " pod="openshift-marketplace/community-operators-sqbdf" Dec 09 10:34:54 crc kubenswrapper[4786]: I1209 10:34:54.690342 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/628e713d-dcb3-4ef8-80f3-28c131edbd1c-catalog-content\") pod \"community-operators-sqbdf\" (UID: \"628e713d-dcb3-4ef8-80f3-28c131edbd1c\") " pod="openshift-marketplace/community-operators-sqbdf" Dec 09 10:34:54 crc kubenswrapper[4786]: I1209 10:34:54.691036 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/628e713d-dcb3-4ef8-80f3-28c131edbd1c-catalog-content\") pod \"community-operators-sqbdf\" (UID: \"628e713d-dcb3-4ef8-80f3-28c131edbd1c\") " pod="openshift-marketplace/community-operators-sqbdf" Dec 09 10:34:54 crc kubenswrapper[4786]: I1209 10:34:54.691218 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/628e713d-dcb3-4ef8-80f3-28c131edbd1c-utilities\") pod \"community-operators-sqbdf\" (UID: \"628e713d-dcb3-4ef8-80f3-28c131edbd1c\") " pod="openshift-marketplace/community-operators-sqbdf" Dec 09 10:34:54 crc kubenswrapper[4786]: I1209 10:34:54.710856 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gz5lq\" (UniqueName: \"kubernetes.io/projected/628e713d-dcb3-4ef8-80f3-28c131edbd1c-kube-api-access-gz5lq\") pod \"community-operators-sqbdf\" (UID: \"628e713d-dcb3-4ef8-80f3-28c131edbd1c\") " pod="openshift-marketplace/community-operators-sqbdf" Dec 09 10:34:54 crc kubenswrapper[4786]: I1209 10:34:54.862538 4786 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sqbdf" Dec 09 10:34:54 crc kubenswrapper[4786]: I1209 10:34:54.989208 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 10:34:54 crc kubenswrapper[4786]: I1209 10:34:54.989488 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 10:34:55 crc kubenswrapper[4786]: I1209 10:34:55.428359 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sqbdf"] Dec 09 10:34:55 crc kubenswrapper[4786]: I1209 10:34:55.839016 4786 generic.go:334] "Generic (PLEG): container finished" podID="628e713d-dcb3-4ef8-80f3-28c131edbd1c" containerID="f9d4d5c705e0696d9f511a9a5f4570ee6fdb1af84da09e46f14e261e6d840cc4" exitCode=0 Dec 09 10:34:55 crc kubenswrapper[4786]: I1209 10:34:55.839134 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sqbdf" event={"ID":"628e713d-dcb3-4ef8-80f3-28c131edbd1c","Type":"ContainerDied","Data":"f9d4d5c705e0696d9f511a9a5f4570ee6fdb1af84da09e46f14e261e6d840cc4"} Dec 09 10:34:55 crc kubenswrapper[4786]: I1209 10:34:55.839275 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sqbdf" event={"ID":"628e713d-dcb3-4ef8-80f3-28c131edbd1c","Type":"ContainerStarted","Data":"48bfc72fa77661c654b5d734693ef0124173c29fe4f154b5b577e86d9a4853a4"} Dec 09 10:34:57 crc kubenswrapper[4786]: I1209 10:34:57.863150 4786 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/community-operators-sqbdf" event={"ID":"628e713d-dcb3-4ef8-80f3-28c131edbd1c","Type":"ContainerStarted","Data":"f4b8077e4d4a0682fe45658717005cb769cefcdac0557f193b343b33c0047104"} Dec 09 10:34:58 crc kubenswrapper[4786]: I1209 10:34:58.876566 4786 generic.go:334] "Generic (PLEG): container finished" podID="628e713d-dcb3-4ef8-80f3-28c131edbd1c" containerID="f4b8077e4d4a0682fe45658717005cb769cefcdac0557f193b343b33c0047104" exitCode=0 Dec 09 10:34:58 crc kubenswrapper[4786]: I1209 10:34:58.876616 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sqbdf" event={"ID":"628e713d-dcb3-4ef8-80f3-28c131edbd1c","Type":"ContainerDied","Data":"f4b8077e4d4a0682fe45658717005cb769cefcdac0557f193b343b33c0047104"} Dec 09 10:34:59 crc kubenswrapper[4786]: I1209 10:34:59.890482 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sqbdf" event={"ID":"628e713d-dcb3-4ef8-80f3-28c131edbd1c","Type":"ContainerStarted","Data":"0eea2986e983e00b6d9dc4affaff43d7bfe86e632b965882b6212a650c0df03d"} Dec 09 10:34:59 crc kubenswrapper[4786]: I1209 10:34:59.908629 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sqbdf" podStartSLOduration=2.512329182 podStartE2EDuration="5.908612894s" podCreationTimestamp="2025-12-09 10:34:54 +0000 UTC" firstStartedPulling="2025-12-09 10:34:55.840510771 +0000 UTC m=+6661.724132017" lastFinishedPulling="2025-12-09 10:34:59.236794503 +0000 UTC m=+6665.120415729" observedRunningTime="2025-12-09 10:34:59.906540154 +0000 UTC m=+6665.790161400" watchObservedRunningTime="2025-12-09 10:34:59.908612894 +0000 UTC m=+6665.792234120" Dec 09 10:35:04 crc kubenswrapper[4786]: I1209 10:35:04.862990 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-sqbdf" Dec 09 10:35:04 crc kubenswrapper[4786]: I1209 
10:35:04.864670 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sqbdf" Dec 09 10:35:04 crc kubenswrapper[4786]: I1209 10:35:04.911362 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sqbdf" Dec 09 10:35:04 crc kubenswrapper[4786]: I1209 10:35:04.992088 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sqbdf" Dec 09 10:35:05 crc kubenswrapper[4786]: I1209 10:35:05.152932 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sqbdf"] Dec 09 10:35:06 crc kubenswrapper[4786]: I1209 10:35:06.970170 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-sqbdf" podUID="628e713d-dcb3-4ef8-80f3-28c131edbd1c" containerName="registry-server" containerID="cri-o://0eea2986e983e00b6d9dc4affaff43d7bfe86e632b965882b6212a650c0df03d" gracePeriod=2 Dec 09 10:35:07 crc kubenswrapper[4786]: I1209 10:35:07.403299 4786 scope.go:117] "RemoveContainer" containerID="324a66be055f5eb9b2af818f90e0337073632933c63cf09f0b814149b113cec9" Dec 09 10:35:07 crc kubenswrapper[4786]: I1209 10:35:07.527730 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sqbdf" Dec 09 10:35:07 crc kubenswrapper[4786]: I1209 10:35:07.614921 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/628e713d-dcb3-4ef8-80f3-28c131edbd1c-utilities\") pod \"628e713d-dcb3-4ef8-80f3-28c131edbd1c\" (UID: \"628e713d-dcb3-4ef8-80f3-28c131edbd1c\") " Dec 09 10:35:07 crc kubenswrapper[4786]: I1209 10:35:07.615197 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/628e713d-dcb3-4ef8-80f3-28c131edbd1c-catalog-content\") pod \"628e713d-dcb3-4ef8-80f3-28c131edbd1c\" (UID: \"628e713d-dcb3-4ef8-80f3-28c131edbd1c\") " Dec 09 10:35:07 crc kubenswrapper[4786]: I1209 10:35:07.615504 4786 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gz5lq\" (UniqueName: \"kubernetes.io/projected/628e713d-dcb3-4ef8-80f3-28c131edbd1c-kube-api-access-gz5lq\") pod \"628e713d-dcb3-4ef8-80f3-28c131edbd1c\" (UID: \"628e713d-dcb3-4ef8-80f3-28c131edbd1c\") " Dec 09 10:35:07 crc kubenswrapper[4786]: I1209 10:35:07.615767 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/628e713d-dcb3-4ef8-80f3-28c131edbd1c-utilities" (OuterVolumeSpecName: "utilities") pod "628e713d-dcb3-4ef8-80f3-28c131edbd1c" (UID: "628e713d-dcb3-4ef8-80f3-28c131edbd1c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 10:35:07 crc kubenswrapper[4786]: I1209 10:35:07.616509 4786 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/628e713d-dcb3-4ef8-80f3-28c131edbd1c-utilities\") on node \"crc\" DevicePath \"\"" Dec 09 10:35:07 crc kubenswrapper[4786]: I1209 10:35:07.621040 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/628e713d-dcb3-4ef8-80f3-28c131edbd1c-kube-api-access-gz5lq" (OuterVolumeSpecName: "kube-api-access-gz5lq") pod "628e713d-dcb3-4ef8-80f3-28c131edbd1c" (UID: "628e713d-dcb3-4ef8-80f3-28c131edbd1c"). InnerVolumeSpecName "kube-api-access-gz5lq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 09 10:35:07 crc kubenswrapper[4786]: I1209 10:35:07.673910 4786 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/628e713d-dcb3-4ef8-80f3-28c131edbd1c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "628e713d-dcb3-4ef8-80f3-28c131edbd1c" (UID: "628e713d-dcb3-4ef8-80f3-28c131edbd1c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 09 10:35:07 crc kubenswrapper[4786]: I1209 10:35:07.718055 4786 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/628e713d-dcb3-4ef8-80f3-28c131edbd1c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 09 10:35:07 crc kubenswrapper[4786]: I1209 10:35:07.718103 4786 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gz5lq\" (UniqueName: \"kubernetes.io/projected/628e713d-dcb3-4ef8-80f3-28c131edbd1c-kube-api-access-gz5lq\") on node \"crc\" DevicePath \"\"" Dec 09 10:35:07 crc kubenswrapper[4786]: I1209 10:35:07.982888 4786 generic.go:334] "Generic (PLEG): container finished" podID="628e713d-dcb3-4ef8-80f3-28c131edbd1c" containerID="0eea2986e983e00b6d9dc4affaff43d7bfe86e632b965882b6212a650c0df03d" exitCode=0 Dec 09 10:35:07 crc kubenswrapper[4786]: I1209 10:35:07.982945 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sqbdf" event={"ID":"628e713d-dcb3-4ef8-80f3-28c131edbd1c","Type":"ContainerDied","Data":"0eea2986e983e00b6d9dc4affaff43d7bfe86e632b965882b6212a650c0df03d"} Dec 09 10:35:07 crc kubenswrapper[4786]: I1209 10:35:07.982974 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sqbdf" event={"ID":"628e713d-dcb3-4ef8-80f3-28c131edbd1c","Type":"ContainerDied","Data":"48bfc72fa77661c654b5d734693ef0124173c29fe4f154b5b577e86d9a4853a4"} Dec 09 10:35:07 crc kubenswrapper[4786]: I1209 10:35:07.983005 4786 scope.go:117] "RemoveContainer" containerID="0eea2986e983e00b6d9dc4affaff43d7bfe86e632b965882b6212a650c0df03d" Dec 09 10:35:07 crc kubenswrapper[4786]: I1209 10:35:07.983010 4786 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sqbdf" Dec 09 10:35:08 crc kubenswrapper[4786]: I1209 10:35:08.019133 4786 scope.go:117] "RemoveContainer" containerID="f4b8077e4d4a0682fe45658717005cb769cefcdac0557f193b343b33c0047104" Dec 09 10:35:08 crc kubenswrapper[4786]: I1209 10:35:08.030750 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sqbdf"] Dec 09 10:35:08 crc kubenswrapper[4786]: I1209 10:35:08.043640 4786 scope.go:117] "RemoveContainer" containerID="f9d4d5c705e0696d9f511a9a5f4570ee6fdb1af84da09e46f14e261e6d840cc4" Dec 09 10:35:08 crc kubenswrapper[4786]: I1209 10:35:08.051147 4786 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sqbdf"] Dec 09 10:35:08 crc kubenswrapper[4786]: I1209 10:35:08.102131 4786 scope.go:117] "RemoveContainer" containerID="0eea2986e983e00b6d9dc4affaff43d7bfe86e632b965882b6212a650c0df03d" Dec 09 10:35:08 crc kubenswrapper[4786]: E1209 10:35:08.103321 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0eea2986e983e00b6d9dc4affaff43d7bfe86e632b965882b6212a650c0df03d\": container with ID starting with 0eea2986e983e00b6d9dc4affaff43d7bfe86e632b965882b6212a650c0df03d not found: ID does not exist" containerID="0eea2986e983e00b6d9dc4affaff43d7bfe86e632b965882b6212a650c0df03d" Dec 09 10:35:08 crc kubenswrapper[4786]: I1209 10:35:08.103400 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0eea2986e983e00b6d9dc4affaff43d7bfe86e632b965882b6212a650c0df03d"} err="failed to get container status \"0eea2986e983e00b6d9dc4affaff43d7bfe86e632b965882b6212a650c0df03d\": rpc error: code = NotFound desc = could not find container \"0eea2986e983e00b6d9dc4affaff43d7bfe86e632b965882b6212a650c0df03d\": container with ID starting with 0eea2986e983e00b6d9dc4affaff43d7bfe86e632b965882b6212a650c0df03d not 
found: ID does not exist" Dec 09 10:35:08 crc kubenswrapper[4786]: I1209 10:35:08.103551 4786 scope.go:117] "RemoveContainer" containerID="f4b8077e4d4a0682fe45658717005cb769cefcdac0557f193b343b33c0047104" Dec 09 10:35:08 crc kubenswrapper[4786]: E1209 10:35:08.103958 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4b8077e4d4a0682fe45658717005cb769cefcdac0557f193b343b33c0047104\": container with ID starting with f4b8077e4d4a0682fe45658717005cb769cefcdac0557f193b343b33c0047104 not found: ID does not exist" containerID="f4b8077e4d4a0682fe45658717005cb769cefcdac0557f193b343b33c0047104" Dec 09 10:35:08 crc kubenswrapper[4786]: I1209 10:35:08.104004 4786 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4b8077e4d4a0682fe45658717005cb769cefcdac0557f193b343b33c0047104"} err="failed to get container status \"f4b8077e4d4a0682fe45658717005cb769cefcdac0557f193b343b33c0047104\": rpc error: code = NotFound desc = could not find container \"f4b8077e4d4a0682fe45658717005cb769cefcdac0557f193b343b33c0047104\": container with ID starting with f4b8077e4d4a0682fe45658717005cb769cefcdac0557f193b343b33c0047104 not found: ID does not exist" Dec 09 10:35:08 crc kubenswrapper[4786]: I1209 10:35:08.104031 4786 scope.go:117] "RemoveContainer" containerID="f9d4d5c705e0696d9f511a9a5f4570ee6fdb1af84da09e46f14e261e6d840cc4" Dec 09 10:35:08 crc kubenswrapper[4786]: E1209 10:35:08.104316 4786 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9d4d5c705e0696d9f511a9a5f4570ee6fdb1af84da09e46f14e261e6d840cc4\": container with ID starting with f9d4d5c705e0696d9f511a9a5f4570ee6fdb1af84da09e46f14e261e6d840cc4 not found: ID does not exist" containerID="f9d4d5c705e0696d9f511a9a5f4570ee6fdb1af84da09e46f14e261e6d840cc4" Dec 09 10:35:08 crc kubenswrapper[4786]: I1209 10:35:08.104338 4786 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9d4d5c705e0696d9f511a9a5f4570ee6fdb1af84da09e46f14e261e6d840cc4"} err="failed to get container status \"f9d4d5c705e0696d9f511a9a5f4570ee6fdb1af84da09e46f14e261e6d840cc4\": rpc error: code = NotFound desc = could not find container \"f9d4d5c705e0696d9f511a9a5f4570ee6fdb1af84da09e46f14e261e6d840cc4\": container with ID starting with f9d4d5c705e0696d9f511a9a5f4570ee6fdb1af84da09e46f14e261e6d840cc4 not found: ID does not exist" Dec 09 10:35:09 crc kubenswrapper[4786]: I1209 10:35:09.200241 4786 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="628e713d-dcb3-4ef8-80f3-28c131edbd1c" path="/var/lib/kubelet/pods/628e713d-dcb3-4ef8-80f3-28c131edbd1c/volumes" Dec 09 10:35:24 crc kubenswrapper[4786]: I1209 10:35:24.988465 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 10:35:24 crc kubenswrapper[4786]: I1209 10:35:24.989074 4786 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 10:35:54 crc kubenswrapper[4786]: I1209 10:35:54.989297 4786 patch_prober.go:28] interesting pod/machine-config-daemon-86k5n container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 09 10:35:54 crc kubenswrapper[4786]: I1209 10:35:54.989987 4786 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 09 10:35:54 crc kubenswrapper[4786]: I1209 10:35:54.990060 4786 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" Dec 09 10:35:54 crc kubenswrapper[4786]: I1209 10:35:54.991406 4786 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ea559671c9562615bcc1e5af3810e727fe3fdff06d05346f599853560d6f049b"} pod="openshift-machine-config-operator/machine-config-daemon-86k5n" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 09 10:35:54 crc kubenswrapper[4786]: I1209 10:35:54.991601 4786 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerName="machine-config-daemon" containerID="cri-o://ea559671c9562615bcc1e5af3810e727fe3fdff06d05346f599853560d6f049b" gracePeriod=600 Dec 09 10:35:55 crc kubenswrapper[4786]: I1209 10:35:55.517939 4786 generic.go:334] "Generic (PLEG): container finished" podID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" containerID="ea559671c9562615bcc1e5af3810e727fe3fdff06d05346f599853560d6f049b" exitCode=0 Dec 09 10:35:55 crc kubenswrapper[4786]: I1209 10:35:55.518022 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" event={"ID":"60c502d4-7f9e-4d39-a197-fa70dc4a56d1","Type":"ContainerDied","Data":"ea559671c9562615bcc1e5af3810e727fe3fdff06d05346f599853560d6f049b"} Dec 09 10:35:55 crc kubenswrapper[4786]: I1209 10:35:55.518520 4786 scope.go:117] "RemoveContainer" 
containerID="e017fad71b8ce5488d1f14545c2c965b6f7d76312d52fe6cfdbc755f86bd6aa2" Dec 09 10:35:55 crc kubenswrapper[4786]: E1209 10:35:55.634160 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:35:56 crc kubenswrapper[4786]: I1209 10:35:56.531376 4786 scope.go:117] "RemoveContainer" containerID="ea559671c9562615bcc1e5af3810e727fe3fdff06d05346f599853560d6f049b" Dec 09 10:35:56 crc kubenswrapper[4786]: E1209 10:35:56.532606 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:36:12 crc kubenswrapper[4786]: I1209 10:36:12.187833 4786 scope.go:117] "RemoveContainer" containerID="ea559671c9562615bcc1e5af3810e727fe3fdff06d05346f599853560d6f049b" Dec 09 10:36:12 crc kubenswrapper[4786]: E1209 10:36:12.188634 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:36:23 crc kubenswrapper[4786]: I1209 10:36:23.188385 4786 scope.go:117] 
"RemoveContainer" containerID="ea559671c9562615bcc1e5af3810e727fe3fdff06d05346f599853560d6f049b" Dec 09 10:36:23 crc kubenswrapper[4786]: E1209 10:36:23.189166 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:36:36 crc kubenswrapper[4786]: I1209 10:36:36.189112 4786 scope.go:117] "RemoveContainer" containerID="ea559671c9562615bcc1e5af3810e727fe3fdff06d05346f599853560d6f049b" Dec 09 10:36:36 crc kubenswrapper[4786]: E1209 10:36:36.189976 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:36:47 crc kubenswrapper[4786]: I1209 10:36:47.188079 4786 scope.go:117] "RemoveContainer" containerID="ea559671c9562615bcc1e5af3810e727fe3fdff06d05346f599853560d6f049b" Dec 09 10:36:47 crc kubenswrapper[4786]: E1209 10:36:47.188813 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:37:02 crc kubenswrapper[4786]: I1209 10:37:02.188225 
4786 scope.go:117] "RemoveContainer" containerID="ea559671c9562615bcc1e5af3810e727fe3fdff06d05346f599853560d6f049b" Dec 09 10:37:02 crc kubenswrapper[4786]: E1209 10:37:02.189606 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:37:14 crc kubenswrapper[4786]: I1209 10:37:14.188928 4786 scope.go:117] "RemoveContainer" containerID="ea559671c9562615bcc1e5af3810e727fe3fdff06d05346f599853560d6f049b" Dec 09 10:37:14 crc kubenswrapper[4786]: E1209 10:37:14.189768 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:37:28 crc kubenswrapper[4786]: I1209 10:37:28.188637 4786 scope.go:117] "RemoveContainer" containerID="ea559671c9562615bcc1e5af3810e727fe3fdff06d05346f599853560d6f049b" Dec 09 10:37:28 crc kubenswrapper[4786]: E1209 10:37:28.189394 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:37:40 crc kubenswrapper[4786]: I1209 
10:37:40.191937 4786 scope.go:117] "RemoveContainer" containerID="ea559671c9562615bcc1e5af3810e727fe3fdff06d05346f599853560d6f049b" Dec 09 10:37:40 crc kubenswrapper[4786]: E1209 10:37:40.194032 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:37:51 crc kubenswrapper[4786]: I1209 10:37:51.188544 4786 scope.go:117] "RemoveContainer" containerID="ea559671c9562615bcc1e5af3810e727fe3fdff06d05346f599853560d6f049b" Dec 09 10:37:51 crc kubenswrapper[4786]: E1209 10:37:51.189409 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:38:05 crc kubenswrapper[4786]: I1209 10:38:05.201367 4786 scope.go:117] "RemoveContainer" containerID="ea559671c9562615bcc1e5af3810e727fe3fdff06d05346f599853560d6f049b" Dec 09 10:38:05 crc kubenswrapper[4786]: E1209 10:38:05.202292 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1" Dec 09 10:38:19 crc 
kubenswrapper[4786]: I1209 10:38:19.193875 4786 scope.go:117] "RemoveContainer" containerID="ea559671c9562615bcc1e5af3810e727fe3fdff06d05346f599853560d6f049b"
Dec 09 10:38:19 crc kubenswrapper[4786]: E1209 10:38:19.194722 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1"
Dec 09 10:38:30 crc kubenswrapper[4786]: I1209 10:38:30.188636 4786 scope.go:117] "RemoveContainer" containerID="ea559671c9562615bcc1e5af3810e727fe3fdff06d05346f599853560d6f049b"
Dec 09 10:38:30 crc kubenswrapper[4786]: E1209 10:38:30.189406 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1"
Dec 09 10:38:43 crc kubenswrapper[4786]: I1209 10:38:43.187789 4786 scope.go:117] "RemoveContainer" containerID="ea559671c9562615bcc1e5af3810e727fe3fdff06d05346f599853560d6f049b"
Dec 09 10:38:43 crc kubenswrapper[4786]: E1209 10:38:43.188625 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1"
Dec 09 10:38:57 crc kubenswrapper[4786]: I1209 10:38:57.188030 4786 scope.go:117] "RemoveContainer" containerID="ea559671c9562615bcc1e5af3810e727fe3fdff06d05346f599853560d6f049b"
Dec 09 10:38:57 crc kubenswrapper[4786]: E1209 10:38:57.188884 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1"
Dec 09 10:38:59 crc kubenswrapper[4786]: I1209 10:38:59.416963 4786 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9zm5v"]
Dec 09 10:38:59 crc kubenswrapper[4786]: E1209 10:38:59.417826 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="628e713d-dcb3-4ef8-80f3-28c131edbd1c" containerName="registry-server"
Dec 09 10:38:59 crc kubenswrapper[4786]: I1209 10:38:59.417845 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="628e713d-dcb3-4ef8-80f3-28c131edbd1c" containerName="registry-server"
Dec 09 10:38:59 crc kubenswrapper[4786]: E1209 10:38:59.417877 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="628e713d-dcb3-4ef8-80f3-28c131edbd1c" containerName="extract-content"
Dec 09 10:38:59 crc kubenswrapper[4786]: I1209 10:38:59.417885 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="628e713d-dcb3-4ef8-80f3-28c131edbd1c" containerName="extract-content"
Dec 09 10:38:59 crc kubenswrapper[4786]: E1209 10:38:59.417915 4786 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="628e713d-dcb3-4ef8-80f3-28c131edbd1c" containerName="extract-utilities"
Dec 09 10:38:59 crc kubenswrapper[4786]: I1209 10:38:59.417924 4786 state_mem.go:107] "Deleted CPUSet assignment" podUID="628e713d-dcb3-4ef8-80f3-28c131edbd1c" containerName="extract-utilities"
Dec 09 10:38:59 crc kubenswrapper[4786]: I1209 10:38:59.418246 4786 memory_manager.go:354] "RemoveStaleState removing state" podUID="628e713d-dcb3-4ef8-80f3-28c131edbd1c" containerName="registry-server"
Dec 09 10:38:59 crc kubenswrapper[4786]: I1209 10:38:59.420128 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9zm5v"
Dec 09 10:38:59 crc kubenswrapper[4786]: I1209 10:38:59.436446 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9zm5v"]
Dec 09 10:38:59 crc kubenswrapper[4786]: I1209 10:38:59.436979 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd36d968-0ebf-4d14-a1cc-aadfe7b8ad20-utilities\") pod \"redhat-marketplace-9zm5v\" (UID: \"bd36d968-0ebf-4d14-a1cc-aadfe7b8ad20\") " pod="openshift-marketplace/redhat-marketplace-9zm5v"
Dec 09 10:38:59 crc kubenswrapper[4786]: I1209 10:38:59.437107 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd36d968-0ebf-4d14-a1cc-aadfe7b8ad20-catalog-content\") pod \"redhat-marketplace-9zm5v\" (UID: \"bd36d968-0ebf-4d14-a1cc-aadfe7b8ad20\") " pod="openshift-marketplace/redhat-marketplace-9zm5v"
Dec 09 10:38:59 crc kubenswrapper[4786]: I1209 10:38:59.437202 4786 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5vq4\" (UniqueName: \"kubernetes.io/projected/bd36d968-0ebf-4d14-a1cc-aadfe7b8ad20-kube-api-access-m5vq4\") pod \"redhat-marketplace-9zm5v\" (UID: \"bd36d968-0ebf-4d14-a1cc-aadfe7b8ad20\") " pod="openshift-marketplace/redhat-marketplace-9zm5v"
Dec 09 10:38:59 crc kubenswrapper[4786]: I1209 10:38:59.538818 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5vq4\" (UniqueName: \"kubernetes.io/projected/bd36d968-0ebf-4d14-a1cc-aadfe7b8ad20-kube-api-access-m5vq4\") pod \"redhat-marketplace-9zm5v\" (UID: \"bd36d968-0ebf-4d14-a1cc-aadfe7b8ad20\") " pod="openshift-marketplace/redhat-marketplace-9zm5v"
Dec 09 10:38:59 crc kubenswrapper[4786]: I1209 10:38:59.538989 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd36d968-0ebf-4d14-a1cc-aadfe7b8ad20-utilities\") pod \"redhat-marketplace-9zm5v\" (UID: \"bd36d968-0ebf-4d14-a1cc-aadfe7b8ad20\") " pod="openshift-marketplace/redhat-marketplace-9zm5v"
Dec 09 10:38:59 crc kubenswrapper[4786]: I1209 10:38:59.539030 4786 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd36d968-0ebf-4d14-a1cc-aadfe7b8ad20-catalog-content\") pod \"redhat-marketplace-9zm5v\" (UID: \"bd36d968-0ebf-4d14-a1cc-aadfe7b8ad20\") " pod="openshift-marketplace/redhat-marketplace-9zm5v"
Dec 09 10:38:59 crc kubenswrapper[4786]: I1209 10:38:59.539949 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bd36d968-0ebf-4d14-a1cc-aadfe7b8ad20-catalog-content\") pod \"redhat-marketplace-9zm5v\" (UID: \"bd36d968-0ebf-4d14-a1cc-aadfe7b8ad20\") " pod="openshift-marketplace/redhat-marketplace-9zm5v"
Dec 09 10:38:59 crc kubenswrapper[4786]: I1209 10:38:59.541658 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bd36d968-0ebf-4d14-a1cc-aadfe7b8ad20-utilities\") pod \"redhat-marketplace-9zm5v\" (UID: \"bd36d968-0ebf-4d14-a1cc-aadfe7b8ad20\") " pod="openshift-marketplace/redhat-marketplace-9zm5v"
Dec 09 10:38:59 crc kubenswrapper[4786]: I1209 10:38:59.560188 4786 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5vq4\" (UniqueName: \"kubernetes.io/projected/bd36d968-0ebf-4d14-a1cc-aadfe7b8ad20-kube-api-access-m5vq4\") pod \"redhat-marketplace-9zm5v\" (UID: \"bd36d968-0ebf-4d14-a1cc-aadfe7b8ad20\") " pod="openshift-marketplace/redhat-marketplace-9zm5v"
Dec 09 10:38:59 crc kubenswrapper[4786]: I1209 10:38:59.755328 4786 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9zm5v"
Dec 09 10:39:00 crc kubenswrapper[4786]: I1209 10:39:00.286882 4786 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9zm5v"]
Dec 09 10:39:00 crc kubenswrapper[4786]: I1209 10:39:00.514098 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9zm5v" event={"ID":"bd36d968-0ebf-4d14-a1cc-aadfe7b8ad20","Type":"ContainerStarted","Data":"af9f0577bc32c2e2e955305380a295de421f838f3a77625895febad12315afbb"}
Dec 09 10:39:01 crc kubenswrapper[4786]: I1209 10:39:01.524869 4786 generic.go:334] "Generic (PLEG): container finished" podID="bd36d968-0ebf-4d14-a1cc-aadfe7b8ad20" containerID="b06530f9a8eeba4d5ab5188eb38d1d9e8bf7d339f8afa1bc7e4f330bf48864e7" exitCode=0
Dec 09 10:39:01 crc kubenswrapper[4786]: I1209 10:39:01.524915 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9zm5v" event={"ID":"bd36d968-0ebf-4d14-a1cc-aadfe7b8ad20","Type":"ContainerDied","Data":"b06530f9a8eeba4d5ab5188eb38d1d9e8bf7d339f8afa1bc7e4f330bf48864e7"}
Dec 09 10:39:01 crc kubenswrapper[4786]: I1209 10:39:01.527187 4786 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 09 10:39:03 crc kubenswrapper[4786]: I1209 10:39:03.570486 4786 generic.go:334] "Generic (PLEG): container finished" podID="bd36d968-0ebf-4d14-a1cc-aadfe7b8ad20" containerID="1f6599b6234ece8b1f4de1a0d373676fe449b746a600af2c7d669d6c61c67312" exitCode=0
Dec 09 10:39:03 crc kubenswrapper[4786]: I1209 10:39:03.570578 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9zm5v" event={"ID":"bd36d968-0ebf-4d14-a1cc-aadfe7b8ad20","Type":"ContainerDied","Data":"1f6599b6234ece8b1f4de1a0d373676fe449b746a600af2c7d669d6c61c67312"}
Dec 09 10:39:05 crc kubenswrapper[4786]: I1209 10:39:05.593383 4786 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9zm5v" event={"ID":"bd36d968-0ebf-4d14-a1cc-aadfe7b8ad20","Type":"ContainerStarted","Data":"345104e605b51df358eb2441aa5da28d89543ce1c5cc7a698cffd64bd402901d"}
Dec 09 10:39:05 crc kubenswrapper[4786]: I1209 10:39:05.635452 4786 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9zm5v" podStartSLOduration=3.222652298 podStartE2EDuration="6.635191109s" podCreationTimestamp="2025-12-09 10:38:59 +0000 UTC" firstStartedPulling="2025-12-09 10:39:01.526971604 +0000 UTC m=+6907.410592830" lastFinishedPulling="2025-12-09 10:39:04.939510415 +0000 UTC m=+6910.823131641" observedRunningTime="2025-12-09 10:39:05.619768212 +0000 UTC m=+6911.503389448" watchObservedRunningTime="2025-12-09 10:39:05.635191109 +0000 UTC m=+6911.518812335"
Dec 09 10:39:09 crc kubenswrapper[4786]: I1209 10:39:09.190413 4786 scope.go:117] "RemoveContainer" containerID="ea559671c9562615bcc1e5af3810e727fe3fdff06d05346f599853560d6f049b"
Dec 09 10:39:09 crc kubenswrapper[4786]: E1209 10:39:09.191017 4786 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-86k5n_openshift-machine-config-operator(60c502d4-7f9e-4d39-a197-fa70dc4a56d1)\"" pod="openshift-machine-config-operator/machine-config-daemon-86k5n" podUID="60c502d4-7f9e-4d39-a197-fa70dc4a56d1"
Dec 09 10:39:09 crc kubenswrapper[4786]: I1209 10:39:09.755784 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9zm5v"
Dec 09 10:39:09 crc kubenswrapper[4786]: I1209 10:39:09.756106 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9zm5v"
Dec 09 10:39:09 crc kubenswrapper[4786]: I1209 10:39:09.804542 4786 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9zm5v"
Dec 09 10:39:10 crc kubenswrapper[4786]: I1209 10:39:10.693024 4786 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9zm5v"
Dec 09 10:39:10 crc kubenswrapper[4786]: I1209 10:39:10.756513 4786 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9zm5v"]